CedarBackup2-2.26.5/Changelog

Version 2.26.5  02 Jan 2016

  * Fix or disable a variety of new warnings and suggestions from pylint.

Version 2.26.4  11 Aug 2015

  * Improvements based on testing in the Debian continuous integration environment.
    - Make the logging setup process obey the --stack command-line option
    - Fix logging setup to always create the log file with the proper specified mode
    - Fix PurgeItemList.removeYoungFiles() so ageInWholeDays can never be negative
    - Make filesystemtests more portable, with maximum file path always <= 255 bytes

Version 2.26.1  04 Aug 2015

  * Fix incorrect exception raise without % in util.py, found by accident.
  * Fix bugs in the ByteQuantity changes from v2.26.0, so comparisons work properly.
  * Adjust amazons3, capacity and split to use ByteQuantity directly, not bytes field.

Version 2.26.0  03 Aug 2015

  * Enhance ByteQuantity so it can be built from and compared to simple numeric values.
  * Improve the way the amazons3 extension deals with byte quantities.
    - Fix configuration to support quantities like "2.5 GB", as in other extensions
    - Improve logging using displayBytes(), so displayed quantities are more legible

Version 2.25.0  29 Jul 2015

  * Fix a variety of minor warnings and suggestions from pylint.
  * Clean up manpages and add notes about migrating to version 3.
  * Review user guide, fix broken links, make minor tweaks to wording, etc.
  * Convert testcase/utiltests.py to use sys.executable rather than relying on "python".
  * Switch to minimum Python version of 2.7, since it's the last supported Python 2.
    - Change file headers to indicate minimum version Python 2 (>= 2.7)
    - Change interpreter checks in test.py, cli.py, span.py and amazons3.py
    - Update manpages, user guide, comments and other documentation

Version 2.24.4  27 Jul 2015

  * Fix long-standing bugs with pre- and post-action hooks.
    - Return status from hook scripts was ignored, so failures weren't exposed
    - Config supported multiple hooks per action, but only one was ever executed

Version 2.24.3  26 Jul 2015

  * Move the project from SourceForge to BitBucket.
    - Revision control is now in Mercurial rather than Subversion
    - Update README to work better with BitBucket's website format
    - Update documentation to reflect new BitBucket location
    - Remove Subversion-specific scripts and update release procedure
    - Remove obsolete Subversion $Id$ in file headers

Version 2.24.2  05 Jan 2015

  * Add optional size-limit configuration for amazons3 extension.

Version 2.24.1  07 Oct 2014

  * Implement a new tool called cback-amazons3-sync.
  * Add support for missing --diagnostics flag in cback-span script.

Version 2.23.3  03 Oct 2014

  * Add new extension amazons3 as an optional replacement for the store action.
  * Update user manual and INSTALL to clarify a few of the dependencies.
  * Fix encryption unit test that started failing due to my new GPG key.

Version 2.22.0  09 May 2013

  * Add eject-related kludges to work around observed behavior.
  * New config option eject_delay, to slow down open/close
  * Unlock tray with 'eject -i off' to handle potential problems

Version 2.21.1  21 Mar 2013

  * Apply patches provided by Jan Medlock as Debian bugs.
  * Fix typo in manpage (showed -s instead of -D)
  * Support output from latest /usr/bin/split (' vs. `)

Version 2.21.0  12 Oct 2011

  * Update CREDITS file to consistently credit all contributors.
  * Minor tweaks based on PyLint analysis (mostly config changes).
  * Make ISO image unit tests more robust in writersutiltests.py.
    - Handle failures with unmount (wait 1 second and try again)
    - Programmatically disable (and re-enable) the GNOME auto-mounter
  * Implement configurable recursion for collect action.
    - Update collect.py to handle recursion (patch by Zoran Bosnjak)
    - Add new configuration item CollectDir.recursionLevel
    - Update user manual to discuss new functionality

Version 2.20.1  19 Oct 2010

  * Fix minor formatting issues in manpages, pointed out by Debian lintian.
  * Changes required to make code compatible with Python 2.7
    - StreamHandler no longer accepts strm= argument (closes: #3079930)
    - Modify logfile os.fdopen() to be explicit about read/write mode
    - Fix tests that extract a tarfile twice (exposed by new error behavior)

Version 2.20.0  07 Jul 2010

  * This is a cleanup release with no functional changes.
  * Switch to minimum Python version of 2.5 (everyone should have it now).
    - Make cback script more robust in the case of a bad interpreter version
    - Change file headers, comments, manual, etc. to reference Python 2.5
    - Convert to use @staticmethod rather than x = staticmethod(x)
    - Change interpreter checks in test.py, cli.py and span.py
    - Remove Python 2.3-compatible versions of util.nullDevice() and util.Pipe
  * Configure pylint and execute it against the entire codebase.
    - Fix a variety of minor warnings and suggestions from pylint
    - Move unit tests into testcase folder to avoid test.py naming conflict
  * Remove "Translate [x:y] into [a:b]" debug message for uid/gid translation.
  * Refactor out util.isRunningAsRoot() to replace scattered os.getuid() calls.
  * Remove boilerplate comments "As with all of the ... " in config code.
  * Refactor checkUnique() and parseCommaSeparatedString() from config to util.
  * Add note in manual about intermittent problems with DVD writer soft links.

Version 2.19.6  22 May 2010

  * Work around strange stderr file descriptor bugs discovered on Cygwin.
  * Tweak expected results for tests that fail on Cygwin with Python 2.5.x.
  * Set up command overrides properly so full test suite works on Debian.
  * Add refresh_media_delay configuration option and related functionality.

Version 2.19.5  10 Jan 2010

  * Add customization support, so Debian can use wodim and genisoimage.
  * SF bug #2929447 - fix cback-span to only ask for media when needed
  * SF bug #2929446 - add retry logic for writes in cback-span

Version 2.19.4  16 Aug 2009

  * Add support for the Python 2.6 interpreter.
    - Use hashlib instead of deprecated sha module when available
    - Use set type rather than deprecated sets.Set when available
    - Use tarfile.format rather than deprecated tarfile.posix when available
    - Fix testGenerateTarfile_002() so expectations match Python 2.6 results

Version 2.19.3  29 Mar 2009

  * Fix minor epydoc typos, mostly in @sort directives.
  * Removed support for user manual PDF format (see doc/pdf).

Version 2.19.2  08 Dec 2008

  * Fix cback-span problem when writing store indicators.

Version 2.19.1  15 Nov 2008

  * Fix bug when logging strange filenames.

Version 2.19.0  05 Oct 2008

  * Fix a few typos in the CREDITS file.
  * Update README to properly reference SourceForge site.
  * Add option to peer configuration.

Version 2.18.0  05 May 2008

  * Add the ability to dereference links when following them.
    - Add util.dereferenceLink() function
    - Add dereference flag to FilesystemList.addDirContents()
    - Add CollectDir.dereference attribute
    - Modify collect action to obey CollectDir.dereference
    - Update user manual to discuss new attribute

Version 2.17.1  26 Apr 2008

  * Updated copyright statement slightly.
  * Updated user manual.
    - Brought copyright notices up-to-date
    - Fixed various URLs that didn't reference SourceForge
  * Fixed problem with link_depth (closes: #1930729).
    - Can't add links directly, they're implicitly added later by tar
    - Changed FilesystemList to use includePath=false for recursive links

Version 2.17.0  20 Mar 2008

  * Change suggested execution index for Capacity extension in manual.
  * Provide support for application-wide diagnostic reporting.
    - Add util.Diagnostics class to encapsulate information
    - Log diagnostics when Cedar Backup first starts
    - Print diagnostics when running unit tests
    - Add a new --diagnostics command-line option
  * Clean up filesystem code that deals with file age, and improve unit tests.
    - Some platforms apparently cannot set file ages precisely
    - Change calculateFileAge() to use floats throughout, which is safer
    - Change removeYoungFiles() to explicitly check on whole days
    - Put a 1-second fudge factor into unit tests when setting file ages
  * Fix some unit test failures discovered on Windows XP.
    - Fix utiltests.TestFunctions.testNullDevice_001()
    - Fix filesystemtests.TestBackupFileList.testGenerateFitted_004()
    - Fix typo in filesystemtests.TestFilesystemList.testRemoveLinks_002()

Version 2.16.0  18 Mar 2008

  * Make name attribute optional in RemotePeer constructor.
  * Add support for collecting soft links (closes: #1854631).
    - Add linkDepth parameter to FilesystemList.addDirContents()
    - Add CollectDir.linkDepth attribute
    - Modify collect action to obey CollectDir.linkDepth
    - Update user manual to discuss new attribute
    - Document "link farm" option for collect configuration
  * Implement a capacity-checking extension (closes: #1915496).
    - Add new extension in CedarBackup2/extend/capacity.py
    - Refactor ByteQuantity out of split.py and into config.py
    - Add total capacity and utilization to MediaCapacity classes
    - Update user manual to discuss new extension

Version 2.15.3  16 Mar 2008

  * Fix testEncodePath_009() to be aware of "UTF-8" encoding.
  * Fix typos in the PostgreSQL extension section of the manual.
  * Improve logging when stage action fails (closes: #1854635).
  * Fix stage action so it works for local users (closes: #1854634).

Version 2.15.2  07 Feb 2008

  * Updated copyright statements now that code changed in year 2008.
  * Fix two unit test failures when using Python 2.5 (SF #1861878).
    - Add new function testutil.hexFloatLiteralAllowed()
    - Fix splittests.TestByteQuantity.testConstructor_004() for 0xAC
    - Fix configtests.TestBlankBehavior.testConstructor_006() for 0xAC

Version 2.15.1  19 Dec 2007

  * Improve error reporting for managed client action failures.
  * Make sure that managed client failure does not kill entire backup.
  * Add appendix "Securing Password-less SSH Connection" to user manual.

Version 2.15.0  18 Dec 2007

  * Minor documentation tweaks discovered during 3.0 development.
  * Add support for a new managed backup feature.
    - Add a new configuration section (PeersConfig)
    - Change peers configuration in to just override
    - Modify stage process to take peers list from peers section (if available)
    - Add new configuration in options and remote peers to support remote shells
    - Update user manual to discuss managed backup concept and configuration
    - Add executeRemoteCommand() and executeManagedAction() on peer.RemotePeer

Version 2.14.0  19 Sep 2007

  * Deal properly with programs that localize their output.
    - Create new util.sanitizeEnvironment() function to set $LANG=C
    - Call new sanitizeEnvironment() function inside util.executeCommand()
    - Change extend/split._splitFile() to be more verbose about problems
    - Update Extension Architecture Interface to mandate $LANG=C
    - Add split unit tests to catch any locale-related regressions
    - Thanks to Lukasz Nowak for initial debugging in split extension

Version 2.13.2  10 Jul 2007

  * Tweak some docstring markup to work with Epydoc beta 1.
  * Apply documentation patch from Lukasz K. Nowak.
    - Document that mysql extension can back up remote databases
    - Fix typos in extend/sysinfo.py
  * Clean up some configuration error messages to be clearer.
    - Make sure that reported errors always include enough information
    - Add a prefix argument to some of the specialized lists in util.py
  * Catch invalid regular expressions in config and filesystem code.
    - Add new util.RegexList list to contain only valid regexes
    - Use RegexList in config.ConfigDir and config.CollectConfig
    - Use RegexList in subversion.RepositoryDir and mbox.MboxDir
    - Throw ValueError on bad regex in FilesystemList remove() methods
    - Use RegexList in FilesystemList for all lists of patterns

Version 2.13.1  29 Mar 2007

  * Fix ongoing problems re-initializing previously-written DVDs
    - Even with -Z, growisofs sometimes wouldn't overwrite DVDs
    - It turns out that this ONLY happens from cron, not from a terminal
    - The solution is to use the undocumented option -use-the-force-luke=tty
    - Also corrected dvdwriter to use option "-dry-run" not "--dry-run"

Version 2.13.0  25 Mar 2007

  * Change writeIndicator() to raise exception on failure (closes #53).
  * Change buildNormalizedPath() for leading "." so files won't be hidden
  * Remove bogus usage of tempfile.NamedTemporaryFile in remote peer.
  * Refactored some common action code into CedarBackup2.actions.util.
  * Add unit tests for a variety of basic utility functions (closes: #45).
    - Error-handling was improved in some utility methods
    - Fundamentally, behavior should be unchanged
  * Reimplement DVD capacity calculation (initial code from Dmitry Rutsky).
    - This is now done using a growisofs dry run, without -Z
    - The old dvd+rw-mediainfo method was unreliable on some systems
    - Error-handling behavior on CdWriter was also tweaked for consistency
  * Add code to check media before writing to it (closes: #5).
    - Create new check_media store configuration option
    - Implement new initialize action to initialize rewritable media
    - Media is initialized by writing an initial session with media label
    - The store action now always writes a media label as well
    - Update user manual to discuss the new behavior
    - Add unit tests for new configuration
  * Implement an optimized media blanking strategy (closes: #48).
    - When used, Cedar Backup will only blank media when it runs out of space
    - Initial implementation and manual text provided by Dmitry Rutsky
    - Add new blanking_behavior store configuration options
    - Update user manual to document options and discuss usage
    - Add unit tests for new configuration

Version 2.12.1  26 Feb 2007

  * Fix typo in new split section in the user manual.
  * Fix incorrect call to new writeIndicatorFile() function in stage action.
  * Add notes in manual on how to find gpg and split commands.

Version 2.12.0  23 Feb 2007

  * Fix some encrypt unit tests related to config validation
  * Make util.PathResolverSingleton a new-style class (i.e. inherit from object)
  * Modify util.changeOwnership() to be a no-op for None user or group
  * Created new split extension to split large staged files.
    - Refactored common action utility code into actions/util.py.
    - Update standard actions, cback-span, and encrypt to use refactored code
    - Updated user manual to document the new extension and restore process.

Version 2.11.0  21 Feb 2007

  * Fix log message about SCSI id in writers/dvdwriter.py.
  * Remove TODO from public distribution (use Bugzilla instead).
  * Minor changes to mbox functionality (refactoring, test cleanup).
  * Fix bug in knapsack implementation, masked by poor test suite.
  * Fix filesystem unit tests that had typos in them and wouldn't work
  * Reorg user manual to move command-line tools to own chapter (closes: #33)
  * Add validation for duplicate peer and extension names (closes: #37, #38).
  * Implement new cback-span command-line tool (closes: #51).
    - Create new util/cback-span script and CedarBackup2.tools package
    - Implement guts of script in CedarBackup2/tools/span.py
    - Add new BackupFileList.generateSpan() method and tests
    - Refactor other util and filesystem code to make things work
    - Add new section in user manual to discuss new command
  * Rework validation requiring at least one item to collect (closes: #34).
    - This is no longer a validation error at the configuration level
    - Instead, the collect action itself will enforce the rule when it is run
  * Support a flag in store configuration (closes: #39).
    - Change StoreConfig, CdWriter and DvdWriter to accept new flag
    - Update user manual to document new flag, along with warnings about it
  * Support repository directories in Subversion extension (closes: #46).
    - Add configuration modeled after
    - Make configuration value optional and for reference only
    - Refactor code and deprecate BDBRepository and FSFSRepository
    - Update user manual to reflect new functionality

Version 2.10.1  30 Jan 2007

  * Fix a few places that still referred only to CD/CD-RW.
  * Fix typo in definition of actions.constants.DIGEST_EXTENSION.

Version 2.10.0  30 Jan 2007

  * Add support for DVD writers and DVD+R/DVD+RW media.
    - Create new writers.dvdwriter module and DvdWriter class
    - Support 'dvdwriter' device type, and 'dvd+r' and 'dvd+rw' media types
    - Rework user manual to properly discuss both CDs and DVDs
  * Support encrypted staging directories (closes: #33).
    - Create new 'encrypt' extension and associated unit tests
    - Document new extension in user manual
  * Support new action ordering mechanism for extensions.
    - Extensions can now specify dependencies rather than indexes
    - Rewrote cli._ActionSet class to use DirectedGraph for dependencies
    - This functionality is not yet "official"; that will happen later
  * Refactor and clean up code that implements standard actions.
    - Split action.py into various other files in the actions package
    - Move a few of the more generic utility functions into util.py
    - Preserve public interface via imports in otherwise empty action.py
    - Change various files to import from the new module locations
  * Revise and simplify the implied "image writer" interface in CdWriter.
    - Add the new initializeImage() and addImageEntry() methods
    - Interface is now initializeImage(), addImageEntry() and writeImage()
    - Rework actions.store.writeImage() to use new writer interface
  * Refactor CD writer functionality and clean up code.
    - Create new writers package to hold all image writers
    - Move image.py into writers/util.py package
    - Move most of writer.py into writers/cdwriter.py
    - Move writer.py validate functions into writers/util.py
    - Move writertests.py into cdwritertests.py
    - Move imagetests.py into writersutiltests.py
    - Preserve public interface via imports in otherwise empty files
    - Change various files to import from the new module locations
  * More general code cleanup and minor enhancements.
    - Modify util/test.py to accept named tests on command line
    - Fix rebuild action to look at store config instead of stage.
    - Clean up xmlutil imports in mbox and subversion extensions
    - Copy Mac OS X (darwin) errors from store action into rebuild action
    - Check arguments to validateScsiId better (no None path allowed now)
    - Rename variables in config.py to be more consistent with each other
    - Add new excludeBasenamePatterns flag to FilesystemList
    - Add new addSelf flag to FilesystemList.addDirContents()
    - Create new RegexMatchList class in util.py, and add tests
    - Create new DirectedGraph class in util.py, and add tests
    - Create new sortDict() function in util.py, and add tests
  * Create unit tests for functionality that was not explicitly tested before.
    - ActionHook, PreActionHook, PostActionHook, CommandOverride (config.py)
    - AbsolutePathList, ObjectTypeList, RestrictedContentList (util.py)

Version 2.9.0  18 Dec 2006

  * Change mbox extension to use ISO-8601 date format when calling grepmail.
  * Fix error-handling in generateTarfile() when target dir is missing.
  * Tweak pycheckrc to find fewer expected errors (from standard library).
  * Fix Debian bug #403546 by supporting more CD writer configurations.
    - Be looser with SCSI "methods" allowed in valid SCSI id (update regex)
    - Make config section's parameter optional
    - Change CdWriter to support "hardware id" as either SCSI id or device
    - Implement cdrecord commands in terms of hardware id instead of SCSI id
    - Add documentation in writer.py to discuss how we talk to hardware
    - Rework user manual's discussion of how to configure SCSI devices
  * Update Cedar Backup user manual.
    - Re-order setup procedures to modify cron at end (Debian #403662)
    - Fix minor typos and misspellings (Debian #403448 among others)
    - Add discussion about proper ordering of extension actions

Version 2.8.1  04 Sep 2006

  * Changes to fix, update and properly build Cedar Backup manual
    - Change DocBook XSL configuration to use "current" stylesheet
    - Tweak manual-generation rules to work around XSL toolchain issues
    - Document where to find grepmail utility in Appendix B
    - Create missing documentation for mbox exclusions configuration
    - Bumped copyright dates to show "(c) 2005-2006" where needed
    - Made minor changes to some sections based on proofreading

Version 2.8.0  24 Jun 2006

  * Remove outdated comment in xmlutil.py about dependency on PyXML.
  * Tweak wording in doc/docbook.txt to make it clearer.
  * Consistently rework "project description" everywhere.
  * Fix some simple typos in various comments and documentation.
  * Added recursive flag (default True) to FilesystemList.addDirContents().
  * Added flat flag (default False) to BackupFileList.generateTarfile().
  * Created mbox extension in CedarBackup2.extend.mbox (closes: #31).
    - Updated user manual to document the new extension and restore process.
  * Added PostgreSQL extension in CedarBackup2.extend.postgresql (closes: #32).
    - This code was contributed by user Antoine Beaupre ("The Anarcat").
    - I tweaked it slightly, added configuration tests, and updated the manual.
    - I have no PostgreSQL databases on which to test the functionality.
  * Made most unit tests run properly on Windows platform, just for fun.
  * Re-implement Pipe class (under executeCommand) for Python 2.4+
    - After Python 2.4, cross-platform subprocess.Popen class is available
    - Added some new regression tests for executeCommand to stress new Pipe
  * Switch to newer version of DocBook XSL stylesheet (1.68.1)
    - The old stylesheet isn't easily available any more (gone from sf.net)
    - Unfortunately, the PDF output changed somewhat with the new version
  * Add support for collecting individual files (closes: #30).
    - Create new config.CollectFile class for use by other classes
    - Update config.CollectConfig class to contain a list of collect files
    - Update config.Config class to parse and emit collect file data
    - Modified collect process in action.py to handle collect files
    - Updated user manual to discuss new configuration

Version 2.7.2  22 Dec 2005

  * Remove some bogus writer tests that depended on an arbitrary SCSI device.

Version 2.7.1  13 Dec 2005

  * Tweak the CREDITS file to fix a few typos.
  * Remove completed tasks in TODO file and reorganize it slightly.
  * Get rid of sys.exit() calls in util/test.py in favor of simple returns.
  * Fix implementation of BackupFileList.removeUnchanged(captureDigest=True).
    - Since version 2.7.0, digest only included backed-up (unchanged) files
    - This release fixes code so digest is captured for all files in the list
    - Fixed captureDigest test cases, which were testing for wrong results
  * Make some more updates to the user manual based on further proof-reading.
    - Rework description of "midnight boundary" warning slightly in basic.xml
    - Change "Which Linux Distribution?" to "Which Platform?" in config.xml
    - Fix a few typos and misspellings in basic.xml

Version 2.7.0  30 Oct 2005

  * Cleanup some maintainer-only (non-distributed) Makefile rules.
  * Make changes to standardize file headers with other Cedar Solutions code.
  * Add debug statements to filesystem code (huge increase in debug log size).
  * Standardize some config variable names ("parentNode" instead of "parent").
  * Fix util/test.py to return proper (non-zero) return status upon failure.
  * No longer attempt to change ownership of files when not running as root.
  * Remove regression test for bug #25 (testAddFile_036) 'cause it's not portable.
  * Modify use of user/password in MySQL extension (suggested by Matthias Urlichs).
    - Make user and password values optional in Cedar Backup configuration
    - Add a few regression tests to make sure configuration changes work
    - Add warning when user or password value(s) are visible in process listing
    - Document use of /root/.my.cnf or ~/.my.cnf in source code and user manual
    - Rework discussion of command line, file permissions, etc. in user manual
  * Optimize incremental backup, and hopefully speed it up a bit (closes: #29).
    - Change BackupFileList.removeUnchanged() to accept a captureDigest flag
    - This avoids need to call both generateDigestMap() and removeUnchanged()
    - Note that interface to removeUnchanged was modified, but not broken
  * Add support for pre- and post-action command hooks (closes: #27).
    - Added and sections within
    - Updated user manual documentation for options configuration section
    - Create new config.PreActionHook and PostActionHook classes to hold hooks
    - Added new hooks list field on config.OptionsConfig class
    - Update ActionSet and ActionItem in cli to handle and execute hooks
  * Rework and abstract XML functionality, plus remove dependency on PyXML.
    - Refactor general XML utility code out of config.py into xmlutil.py
    - Create new isElement() function to eliminate need for Node references
    - Create new createInputDom(), createOutputDom() and serializeDom() functions
    - Use minidom XML parser rather than PyExpat.reader (much faster)
    - Hack together xmlutil.Serializer based on xml.dom.ext.PrettyPrint
    - Remove references to PyXML in manual's depends.xml and install.xml files
    - Add notes about PyXML code sourced from Fourthought, Inc. in CREDITS
    - Rework mysql and subversion unit tests in terms of new functions

Version 2.6.1  27 Sep 2005

  * Fix broken call to node.hasChildNodes (no parens) in config.py.
  * Make "pre-existing collect indicator" error more obvious (closes: #26).
  * Avoid failures for UTF-8 filenames on certain filesystems (closes: #25).
  * Fix FilesystemList to encode excludeList items, preventing UTF-8 failures.

Version 2.6.0  12 Sep 2005

  * Remove bogus check for remote collect directory on master (closes: #18).
  * Fix testEncodePath_009 test failure on UTF-8 filesystems (closes: #19).
  * Fixed several unit tests related to the CollectConfig class (all typos).
  * Fix filesystem and action code to properly handle path "/" (closes: #24).
  * Add extension configuration to cback.conf.sample, to clarify things.
  * Place starting and ending revision numbers into Subversion dump filenames.
  * Implement resolver mechanism to support paths to commands (closes: #22).
    - Added section within configuration
    - Create new config.CommandOverride class to hold overrides
    - Added new overrides field on config.OptionsConfig class
    - Create util.PathResolverSingleton class to encapsulate mappings
    - Create util.resolveCommand convenience function for code to call
    - Create and call new _setupPathResolver() function in cli code
    - Change all _CMD constants to _COMMAND, for consistency
  * Change Subversion extension to support "fsfs" repositories (closes: #20).
    - Accept "FSFS" repository in configuration section
    - Create new FSFSRepository class to represent an FSFS repository
    - Refactor internal code common to both BDB and FSFS repositories
    - Add and rework test cases to provide coverage of FSFSRepository
  * Port to Darwin (Mac OS X) and ensure that all regression tests pass.
    - Don't run testAddDirContents_072() for Darwin (tarball's invalid there)
    - Write new ISO mount testing methods in terms of Apple's "hdiutil" utility
    - Accept Darwin-style SCSI writer devices, i.e. "IOCompactDiscServices"
    - Tweak existing SCSI id pattern to allow spaces in a few other places
    - Add new regression tests for validateScsiId() utility function
    - Add code warnings and documentation in manual and in doc/osx
  * Update, clean up and extend Cedar Backup User Manual (closes: #21).
    - Work through document and copy-edit it now that it's matured
    - Add documentation for new options and subversion config items
    - Exorcise references to Linux which assumed it was "the" platform
    - Add platform-specific notes for non-Linux platforms (darwin, BSDs)
    - Clarify purpose of the 'collect' action on the master
    - Clarify how actions (i.e. 'store') are optional
    - Clarify that 'all' does not execute extensions
    - Add an appendix on restoring backups

Version 2.5.0  12 Jul 2005

  * Update docs to modify use of "secure" (suggested by Lars Wirzenius).
  * Removed "Not an Official Debian Package" section in software manual.
  * Reworked Debian install procedure in manual to reference official packages.
  * Fix manual's build process to create files with mode 664 rather than 755.
  * Deal better with date boundaries on the store operation (closes: #17).
    - Add value in configuration
    - Add warnMidnite field to the StoreConfig object
    - Add warning in store process for crossing midnite boundary
    - Change store --full to have more consistent behavior
    - Update manual to document changes related to this bug

Version 2.4.2  23 Apr 2005

  * Fix boundaries log message again, properly this time.
  * Fix a few other log messages that used "," rather than "%".

Version 2.4.1  22 Apr 2005

  * Fix minor typos in user manual and source code documentation.
  * Properly annotate code implemented based on Python 2.3 source.
  * Add info within CREDITS about Python 2.3 and Docbook XSL licenses.
  * Fix logging for boundaries values (can't print None[0], duh).

Version 2.4.0  02 Apr 2005

  * Re-license manual under "GPL with clarifications" to satisfy DFSG.
  * Rework our unmount solution again to try and fix observed problems.
    - Sometimes, unmount seems to "work" but leaves things mounted.
    - This might be because some file is not yet completely closed.
    - We try to work around this by making repeated unmount attempts.
    - This logic is now encapsulated in util.mount() and util.unmount().
    - This solution should also be more portable to non-Linux systems.

Version 2.3.1  23 Mar 2005

  * Attempt to deal more gracefully with corrupted media.
  * Unmount media using -l ("lazy unmount") in consistency check.
  * Be more verbose about media errors during consistency check.

Version 2.3.0  10 Mar 2005

  * Make 'extend' package public by listing it in CedarBackup2/__init__.py.
  * Reimplement digest generation to use incremental method (now ~3x faster).
  * Tweak manifest to be a little more selective about what's distributed.

Version 2.2.0  09 Mar 2005

  * Fix bug related to execution of commands with huge output.
  * Create custom class util.Pipe, inheriting from popen2.Popen4.
  * Re-implement util.executeCommand() in terms of util.Pipe.
  * Change ownership of sysinfo files to backup user/group after write.

Version 2.1.3  08 Mar 2005

  * In sysinfo extension, use an explicit path to the /sbin/fdisk command.
  * Modify behavior and logging when optional sysinfo commands are not found.
  * Add extra logging around boundaries and capacity calculations in writer.py.
  * In executeCommand, log command using output logger as well as debug level.
  * Docs now suggest --output in cron command line to aid problem diagnosis.
  * Fix bug in capacity calculation, this time for media with a single session.
  * Validate all capacity code against v1.0 code, making changes as needed.
  * Re-evaluate all capacity-related regression tests against v1.0 code.
  * Add new regression tests for capacity bugs which weren't already detected.

Version 2.1.2  07 Mar 2005

  * Fix a few extension error messages with incorrect (missing) arguments.
  * In sysinfo extension, do not log ls and dpkg output to the debug log.
  * Fix CdWriter, which reported negative capacity when disc was almost full.
  * Make displayBytes deal properly with negative values via math.fabs().
  * Change displayBytes to default to 2 digits after the decimal point.

Version 2.1.1  06 Mar 2005

  * Fix bug in setup.py (need to install extensions properly).

Version 2.1.0  06 Mar 2005

  * Fixed doc/cback.1 .TH line to give proper manpage section.
  * Updated README to more completely describe what Cedar Backup is.
  * Fix a few logging statements for the collect action, to be clearer.
  * Fix regression tests that failed in a Debian pbuilder environment.
  * Add simple main routine to cli.py, so executing it is the same as cback.
  * Added optional outputFile and doNotLog parameters to util.executeCommand().
  * Display byte quantities in sensible units (i.e. bytes, kB, MB) when logged.
  * Refactored private code into public in action.py and config.py.
  * Created MySQL extension in CedarBackup2.extend.mysql.
  * Created sysinfo extension in CedarBackup2.extend.sysinfo.
  * Created Subversion extension in CedarBackup2.extend.subversion.
  * Added regression tests as needed for new extension functionality.
  * Added Chapter 5, Official Extensions in the user manual.

Version 2.0.0  26 Feb 2005

  * Complete ground-up rewrite for 2.0.0 release.
  * See doc/release.txt for more details about changes.

Version 1.13  25 Jan 2005

  * Fix boundaries calculation when using kernel >= 2.6.8 (closes: #16).
  * Look for a matching boundaries pattern among all lines, not just the first.

Version 1.12  16 Jan 2005

  * Add support for ATAPI devices, just like ATA (closes: #15).
  * SCSI id can now be in the form '[ATA:|ATAPI:]scsibus,target,lun'.

Version 1.11  17 Oct 2004

  * Add experimental support for new Linux 2.6 ATA CD devices.
  * SCSI id can now be in the form '[ATA:]scsibus,target,lun'.
  * Internally, the SCSI id is now stored as a string, not a list.
  * Cleaned up 'cdrecord' calls in cdr.py to make them consistent.
  * Fixed a pile of warnings noticed by the latest pychecker.

Version 1.10  01 Dec 2003

  * Removed extraneous error parameter from cback's version() function.
  * Changed copyright statement and year; added COPYRIGHT in release.py.
  * Reworked all file headers to match new Cedar Solutions standard.
  * Removed __version__ and __date__ values with switch to Subversion.
  * Convert to tabs in Changelog to make the Vim syntax file happy.
  * Be more stringent in validating contents of SCSI triplet values.
  * Fixed bug when using modulo 1 (% 1) in a few places.
  * Fixed shell-interpolation bug discovered by Rick Low (security hole).
  * Replace all os.popen() calls with new execute_command() call for safety.

Version 1.9  09 Nov 2002

  * Packaging changes to allow Debian version to be "normal", not Debian-native.
  * Added CedarBackup/release.py to contain "upstream" release number.
  * Added -V,--version option to cback script.
  * Rewrote parts of Makefile to remove most Debian-specific rules.
  * Changed Makefile and setup.py to get version info from release.py.
  * The setup.py script now references /usr/bin/env python, not python2.2.
  * Debian-related changes will now reside exclusively in debian/changelog.

Version 1.8  14 Oct 2002

  * Fix bug with the way the default mode is displayed in the help screen.

Version 1.7  14 Oct 2002

  * Bug fix.  Upgrade to Python 2.2.2b1 exposed a flaw in my version-check code.

Version 1.6  06 Oct 2002

  * Debian packaging cleanup (should have been a Debian-only release 1.5-2).

Version 1.5  19 Sep 2002

  * Changed cback script to more closely control ownership of logfile.

Version 1.4  10 Sep 2002

  * Various packaging cleanups.
  * Fixed code that reported negative capacity on a full disc.
  * Now blank disc ahead of time if it needs to be blanked.
  * Moved to Python2.2 for cleaner packaging (True, False, etc.)

Version 1.3  20 Aug 2002

  * Initial "public" release.
-----------------------------------------------------------------------------
vim: set ft=changelog noexpandtab:

CedarBackup2-2.26.5/README

Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language.

This is release 2 of the Cedar Backup package. It represents a complete rewrite of the original package. The new code is cleaner, more compact, more focused and also more "pythonic" in its approach (although the coding style has arguably been influenced by my experiences with Java over the last few years). There is also now an extensive unit test suite, something the first release always lacked.

For more information, see the Cedar Backup web site: https://bitbucket.org/cedarsolutions/cedar-backup2

CedarBackup2-2.26.5/INSTALL

# vim: set ft=text80: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right."
# S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Project : Cedar Backup, release 2 # Purpose : INSTALL instructions for package # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

This module is distributed in standard Python distutils form. Use:

   python setup.py --help

for more information on how to install it. You must have a Python interpreter version 2.7 or better to use these modules. Some external tools are also required for certain features to work. See the user manual for more details.

In the simplest case, you will probably just use:

   python setup.py install

to install to your standard Python site-packages directory. Note that on UNIX systems, you will probably need to do this as root.

The documentation and unit tests provided with this distribution are not installed by setup.py. You may put them wherever you would like.

You may wish to run the unit tests before actually installing anything. Run them like so:

   python util/test.py

If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. Please make sure to include the diagnostic information printed out at the beginning of the test run.

CedarBackup2-2.26.5/CedarBackup2/cli.py

# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. # All rights reserved.
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides command-line interface implementation. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides command-line interface implementation for the cback script. Summary ======= The functionality in this module encapsulates the command-line interface for the cback script. The cback script itself is very short, basically just an invocation of one function implemented here. That, in turn, makes it simpler to validate the command line interface (for instance, it's easier to run pychecker against a module, and unit tests are easier, too). The objects and functions implemented in this module are probably not useful to any code external to Cedar Backup. Anyone else implementing their own command-line interface would have to reimplement (or at least enhance) all of this anyway. Backwards Compatibility ======================= The command line interface has changed between Cedar Backup 1.x and Cedar Backup 2.x. Some new switches have been added, and the actions have become simple arguments rather than switches (which is a much more standard command line format). 
Old 1.x command lines are generally no longer valid. @var DEFAULT_CONFIG: The default configuration file. @var DEFAULT_LOGFILE: The default log file path. @var DEFAULT_OWNERSHIP: Default ownership for the logfile. @var DEFAULT_MODE: Default file permissions mode on the logfile. @var VALID_ACTIONS: List of valid actions. @var COMBINE_ACTIONS: List of actions which can be combined with other actions. @var NONCOMBINE_ACTIONS: List of actions which cannot be combined with other actions. @sort: cli, Options, DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE, VALID_ACTIONS, COMBINE_ACTIONS, NONCOMBINE_ACTIONS @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import getopt # Cedar Backup modules from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT from CedarBackup2.customize import customizeOverrides from CedarBackup2.util import DirectedGraph, PathResolverSingleton from CedarBackup2.util import sortDict, splitCommandLine, executeCommand, getFunctionReference from CedarBackup2.util import getUidGid, encodePath, Diagnostics from CedarBackup2.config import Config from CedarBackup2.peer import RemotePeer from CedarBackup2.actions.collect import executeCollect from CedarBackup2.actions.stage import executeStage from CedarBackup2.actions.store import executeStore from CedarBackup2.actions.purge import executePurge from CedarBackup2.actions.rebuild import executeRebuild from CedarBackup2.actions.validate import executeValidate from CedarBackup2.actions.initialize import executeInitialize ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.cli") DISK_LOG_FORMAT = 
"%(asctime)s --> [%(levelname)-7s] %(message)s" DISK_OUTPUT_FORMAT = "%(message)s" SCREEN_LOG_FORMAT = "%(message)s" SCREEN_LOG_STREAM = sys.stdout DATE_FORMAT = "%Y-%m-%dT%H:%M:%S %Z" DEFAULT_CONFIG = "/etc/cback.conf" DEFAULT_LOGFILE = "/var/log/cback.log" DEFAULT_OWNERSHIP = [ "root", "adm", ] DEFAULT_MODE = 0640 REBUILD_INDEX = 0 # can't run with anything else, anyway VALIDATE_INDEX = 0 # can't run with anything else, anyway INITIALIZE_INDEX = 0 # can't run with anything else, anyway COLLECT_INDEX = 100 STAGE_INDEX = 200 STORE_INDEX = 300 PURGE_INDEX = 400 VALID_ACTIONS = [ "collect", "stage", "store", "purge", "rebuild", "validate", "initialize", "all", ] COMBINE_ACTIONS = [ "collect", "stage", "store", "purge", ] NONCOMBINE_ACTIONS = [ "rebuild", "validate", "initialize", "all", ] SHORT_SWITCHES = "hVbqc:fMNl:o:m:OdsD" LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet', 'config=', 'full', 'managed', 'managed-only', 'logfile=', 'owner=', 'mode=', 'output', 'debug', 'stack', 'diagnostics', ] ####################################################################### # Public functions ####################################################################### ################# # cli() function ################# def cli(): """ Implements the command-line interface for the C{cback} script. Essentially, this is the "main routine" for the cback script. It does all of the argument processing for the script, and then sets about executing the indicated actions. As a general rule, only the actions indicated on the command line will be executed. We will accept any of the built-in actions and any of the configured extended actions (which makes action list verification a two- step process). The C{'all'} action has a special meaning: it means that the built-in set of actions (collect, stage, store, purge) will all be executed, in that order. Extended actions will be ignored as part of the C{'all'} action. Raised exceptions always result in an immediate return. 
Otherwise, we generally return when all specified actions have been completed. Actions are ignored if the help, version or validate flags are set. A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 2.7 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{4}: Error parsing indicated configuration file - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing specified backup actions @note: This function contains a good amount of logging at the INFO level, because this is the right place to document high-level flow of control (i.e. what the command-line options were, what config file was being used, etc.) @note: We assume that anything that I{must} be seen on the screen is logged at the ERROR level. Errors that occur before logging can be configured are written to C{sys.stderr}. @return: Error code as described above. """ try: if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]: sys.stderr.write("Python 2 version 2.7 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python 2 version 2.7 or greater required.\n") return 1 try: options = Options(argumentList=sys.argv[1:]) logger.info("Specified command-line actions: %s", options.actions) except Exception, e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 if options.diagnostics: _diagnostics() return 0 if options.stacktrace: logfile = setupLogging(options) else: try: logfile = setupLogging(options) except Exception as e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup run started.") logger.info("Options were [%s]", options) logger.info("Logfile is [%s]", logfile) Diagnostics().logDiagnostics(method=logger.info) if options.config is None: logger.debug("Using default configuration file.") configPath = DEFAULT_CONFIG else: 
logger.debug("Using user-supplied configuration file.") configPath = options.config executeLocal = True executeManaged = False if options.managedOnly: executeLocal = False executeManaged = True if options.managed: executeManaged = True logger.debug("Execute local actions: %s", executeLocal) logger.debug("Execute managed actions: %s", executeManaged) try: logger.info("Configuration path is [%s]", configPath) config = Config(xmlPath=configPath) customizeOverrides(config) setupPathResolver(config) actionSet = _ActionSet(options.actions, config.extensions, config.options, config.peers, executeManaged, executeLocal) except Exception, e: logger.error("Error reading or handling configuration: %s", e) logger.info("Cedar Backup run completed with status 4.") return 4 if options.stacktrace: actionSet.executeActions(configPath, options, config) else: try: actionSet.executeActions(configPath, options, config) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup run completed with status 5.") return 5 except Exception, e: logger.error("Error executing backup: %s", e) logger.info("Cedar Backup run completed with status 6.") return 6 logger.info("Cedar Backup run completed with status 0.") return 0 ######################################################################## # Action-related class definition ######################################################################## #################### # _ActionItem class #################### class _ActionItem(object): """ Class representing a single action to be executed. This class represents a single named action to be executed, and understands how to execute that action. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information. This class is also where pre-action and post-action hooks are executed. 
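The hook handling described here (and the v2.24.4 fix that stopped ignoring hook return status) can be sketched in isolation. This is an illustrative stand-in, not part of cli.py: it uses the standard library's shlex and subprocess where the real code uses util.splitCommandLine() and util.executeCommand(), and run_hook is a hypothetical name.

```python
import shlex
import subprocess

def run_hook(command, action, when="pre-action"):
    # Split the configured command line into fields, run the command, and
    # fail loudly on a non-zero exit status -- hook failures must not be
    # silently ignored.
    fields = shlex.split(command)
    result = subprocess.call(fields)
    if result != 0:
        raise IOError("Error (%d) executing %s hook for action [%s]"
                      % (result, when, action))
    return result

# On a typical UNIX system, a hook that exits 0 passes quietly, while a
# hook that exits non-zero raises IOError and aborts the action.
run_hook("true", "collect")
```

The key design point mirrored here is that the hook's exit status is checked and converted into an exception, rather than discarded.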
An action item is instantiated in terms of optional pre- and post-action hook objects (config.ActionHook), which are then executed at the appropriate time (if set). @note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type. @cvar SORT_ORDER: Defines a sort order to order properly between types. """ SORT_ORDER = 0 def __init__(self, index, name, preHooks, postHooks, function): """ Default constructor. It's OK to pass C{None} for C{index}, C{preHooks} or C{postHooks}, but not for C{name}. @param index: Index of the item (or C{None}). @param name: Name of the action that is being executed. @param preHooks: List of pre-action hooks in terms of an C{ActionHook} object, or C{None}. @param postHooks: List of post-action hooks in terms of an C{ActionHook} object, or C{None}. @param function: Reference to function associated with item. """ self.index = index self.name = name self.preHooks = preHooks self.postHooks = postHooks self.function = function def __cmp__(self, other): """ Definition of equals operator for this class. The only thing we compare is the item's index. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.index != other.index: if self.index < other.index: return -1 else: return 1 else: if self.SORT_ORDER != other.SORT_ORDER: if self.SORT_ORDER < other.SORT_ORDER: return -1 else: return 1 return 0 def executeAction(self, configPath, options, config): """ Executes the action associated with an item, including hooks. See class notes for more details on how the action is executed. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. 
@param config: Parsed configuration to be passed to action. @raise Exception: If there is a problem executing the action. """ logger.debug("Executing [%s] action.", self.name) if self.preHooks is not None: for hook in self.preHooks: self._executeHook("pre-action", hook) self._executeAction(configPath, options, config) if self.postHooks is not None: for hook in self.postHooks: self._executeHook("post-action", hook) def _executeAction(self, configPath, options, config): """ Executes the action, specifically the function associated with the action. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. @param config: Parsed configuration to be passed to action. """ name = "%s.%s" % (self.function.__module__, self.function.__name__) logger.debug("Calling action function [%s], execution index [%d]", name, self.index) self.function(configPath, options, config) def _executeHook(self, type, hook): # pylint: disable=W0622,R0201 """ Executes a hook command via L{util.executeCommand()}. @param type: String describing the type of hook, for logging. @param hook: Hook, in terms of a C{ActionHook} object. """ fields = splitCommandLine(hook.command) logger.debug("Executing %s hook for action [%s]: %s", type, hook.action, fields[0:1]) result = executeCommand(command=fields[0:1], args=fields[1:])[0] if result != 0: raise IOError("Error (%d) executing %s hook for action [%s]: %s" % (result, type, hook.action, fields[0:1])) ########################### # _ManagedActionItem class ########################### class _ManagedActionItem(object): """ Class representing a single action to be executed on a managed peer. This class represents a single named action to be executed, and understands how to execute that action. Actions to be executed on a managed peer rely on peer configuration and on the full-backup flag. All other configuration takes place on the remote peer itself. 
@note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type. @cvar SORT_ORDER: Defines a sort order to order properly between types. """ SORT_ORDER = 1 def __init__(self, index, name, remotePeers): """ Default constructor. @param index: Index of the item (or C{None}). @param name: Name of the action that is being executed. @param remotePeers: List of remote peers on which to execute the action. """ self.index = index self.name = name self.remotePeers = remotePeers def __cmp__(self, other): """ Definition of equals operator for this class. The only thing we compare is the item's index. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.index != other.index: if self.index < other.index: return -1 else: return 1 else: if self.SORT_ORDER != other.SORT_ORDER: if self.SORT_ORDER < other.SORT_ORDER: return -1 else: return 1 return 0 def executeAction(self, configPath, options, config): """ Executes the managed action associated with an item. @note: Only options.full is actually used. The rest of the arguments exist to satisfy the ActionItem interface. @note: Errors here result in a message logged to ERROR, but no thrown exception. The analogy is the stage action where a problem with one host should not kill the entire backup. Since we're logging an error, the administrator will get an email. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action. @param config: Parsed configuration to be passed to action. @raise Exception: If there is a problem executing the action. 
""" for peer in self.remotePeers: logger.debug("Executing managed action [%s] on peer [%s].", self.name, peer.name) try: peer.executeManagedAction(self.name, options.full) except IOError, e: logger.error(e) # log the message and go on, so we don't kill the backup ################### # _ActionSet class ################### class _ActionSet(object): """ Class representing a set of local actions to be executed. This class does four different things. First, it ensures that the actions specified on the command-line are sensible. The command-line can only list either built-in actions or extended actions specified in configuration. Also, certain actions (in L{NONCOMBINE_ACTIONS}) cannot be combined with other actions. Second, the class enforces an execution order on the specified actions. Any time actions are combined on the command line (either built-in actions or extended actions), we must make sure they get executed in a sensible order. Third, the class ensures that any pre-action or post-action hooks are scheduled and executed appropriately. Hooks are configured by building a dictionary mapping between hook action name and command. Pre-action hooks are executed immediately before their associated action, and post-action hooks are executed immediately after their associated action. Finally, the class properly interleaves local and managed actions so that the same action gets executed first locally and then on managed peers. @sort: __init__, executeActions """ def __init__(self, actions, extensions, options, peers, managed, local): """ Constructor for the C{_ActionSet} class. This is kind of ugly, because the constructor has to set up a lot of data before being able to do anything useful. 
The following data structures are initialized based on the input: - C{extensionNames}: List of extensions available in configuration - C{preHookMap}: Mapping from action name to list of C{PreActionHook} - C{postHookMap}: Mapping from action name to list of C{PostActionHook} - C{functionMap}: Mapping from action name to Python function - C{indexMap}: Mapping from action name to execution index - C{peerMap}: Mapping from action name to set of C{RemotePeer} - C{actionMap}: Mapping from action name to C{_ActionItem} Once these data structures are set up, the command line is validated to make sure only valid actions have been requested, and in a sensible combination. Then, all of the data is used to build C{self.actionSet}, the set of action items to be executed by C{executeActions()}. This list might contain either C{_ActionItem} or C{_ManagedActionItem}. @param actions: Names of actions specified on the command-line. @param extensions: Extended action configuration (i.e. config.extensions) @param options: Options configuration (i.e. config.options) @param peers: Peers configuration (i.e. config.peers) @param managed: Whether to include managed actions in the set @param local: Whether to include local actions in the set @raise ValueError: If one of the specified actions is invalid. """ extensionNames = _ActionSet._deriveExtensionNames(extensions) (preHookMap, postHookMap) = _ActionSet._buildHookMaps(options.hooks) functionMap = _ActionSet._buildFunctionMap(extensions) indexMap = _ActionSet._buildIndexMap(extensions) peerMap = _ActionSet._buildPeerMap(options, peers) actionMap = _ActionSet._buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap) _ActionSet._validateActions(actions, extensionNames) self.actionSet = _ActionSet._buildActionSet(actions, actionMap) @staticmethod def _deriveExtensionNames(extensions): """ Builds a list of extended actions that are available in configuration. 
@param extensions: Extended action configuration (i.e. config.extensions) @return: List of extended action names. """ extensionNames = [] if extensions is not None and extensions.actions is not None: for action in extensions.actions: extensionNames.append(action.name) return extensionNames @staticmethod def _buildHookMaps(hooks): """ Build two mappings from action name to configured C{ActionHook}. @param hooks: List of pre- and post-action hooks (i.e. config.options.hooks) @return: Tuple of (pre hook dictionary, post hook dictionary). """ preHookMap = {} postHookMap = {} if hooks is not None: for hook in hooks: if hook.before: if not hook.action in preHookMap: preHookMap[hook.action] = [] preHookMap[hook.action].append(hook) elif hook.after: if not hook.action in postHookMap: postHookMap[hook.action] = [] postHookMap[hook.action].append(hook) return (preHookMap, postHookMap) @staticmethod def _buildFunctionMap(extensions): """ Builds a mapping from named action to action function. @param extensions: Extended action configuration (i.e. config.extensions) @return: Dictionary mapping action to function. """ functionMap = {} functionMap['rebuild'] = executeRebuild functionMap['validate'] = executeValidate functionMap['initialize'] = executeInitialize functionMap['collect'] = executeCollect functionMap['stage'] = executeStage functionMap['store'] = executeStore functionMap['purge'] = executePurge if extensions is not None and extensions.actions is not None: for action in extensions.actions: functionMap[action.name] = getFunctionReference(action.module, action.function) return functionMap @staticmethod def _buildIndexMap(extensions): """ Builds a mapping from action name to proper execution index. If extensions configuration is C{None}, or there are no configured extended actions, the ordering dictionary will only include the built-in actions and their standard indices. 
Otherwise, if the extensions order mode is C{None} or C{"index"}, actions will be scheduled by explicit index; and if the extensions order mode is C{"dependency"}, actions will be scheduled using a dependency graph. @param extensions: Extended action configuration (i.e. config.extensions) @return: Dictionary mapping action name to integer execution index. """ indexMap = {} if extensions is None or extensions.actions is None or extensions.actions == []: logger.info("Action ordering will use 'index' order mode.") indexMap['rebuild'] = REBUILD_INDEX indexMap['validate'] = VALIDATE_INDEX indexMap['initialize'] = INITIALIZE_INDEX indexMap['collect'] = COLLECT_INDEX indexMap['stage'] = STAGE_INDEX indexMap['store'] = STORE_INDEX indexMap['purge'] = PURGE_INDEX logger.debug("Completed filling in action indices for built-in actions.") logger.info("Action order will be: %s", sortDict(indexMap)) else: if extensions.orderMode is None or extensions.orderMode == "index": logger.info("Action ordering will use 'index' order mode.") indexMap['rebuild'] = REBUILD_INDEX indexMap['validate'] = VALIDATE_INDEX indexMap['initialize'] = INITIALIZE_INDEX indexMap['collect'] = COLLECT_INDEX indexMap['stage'] = STAGE_INDEX indexMap['store'] = STORE_INDEX indexMap['purge'] = PURGE_INDEX logger.debug("Completed filling in action indices for built-in actions.") for action in extensions.actions: indexMap[action.name] = action.index logger.debug("Completed filling in action indices for extended actions.") logger.info("Action order will be: %s", sortDict(indexMap)) else: logger.info("Action ordering will use 'dependency' order mode.") graph = DirectedGraph("dependencies") graph.createVertex("rebuild") graph.createVertex("validate") graph.createVertex("initialize") graph.createVertex("collect") graph.createVertex("stage") graph.createVertex("store") graph.createVertex("purge") for action in extensions.actions: graph.createVertex(action.name) graph.createEdge("collect", "stage") # Collect must run 
before stage, store or purge graph.createEdge("collect", "store") graph.createEdge("collect", "purge") graph.createEdge("stage", "store") # Stage must run before store or purge graph.createEdge("stage", "purge") graph.createEdge("store", "purge") # Store must run before purge for action in extensions.actions: if action.dependencies.beforeList is not None: for vertex in action.dependencies.beforeList: try: graph.createEdge(action.name, vertex) # actions that this action must be run before except ValueError: logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name) raise ValueError("Unable to determine proper action order due to invalid dependency.") if action.dependencies.afterList is not None: for vertex in action.dependencies.afterList: try: graph.createEdge(vertex, action.name) # actions that this action must be run after except ValueError: logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name) raise ValueError("Unable to determine proper action order due to invalid dependency.") try: ordering = graph.topologicalSort() indexMap = dict([(ordering[i], i+1) for i in range(0, len(ordering))]) logger.info("Action order will be: %s", ordering) except ValueError: logger.error("Unable to determine proper action order due to dependency recursion.") logger.error("Extensions configuration is invalid (check for loops).") raise ValueError("Unable to determine proper action order due to dependency recursion.") return indexMap @staticmethod def _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap): """ Builds a mapping from action name to list of action items. We build either C{_ActionItem} or C{_ManagedActionItem} objects here. In most cases, the mapping from action name to C{_ActionItem} is 1:1. The exception is the "all" action, which is a special case. However, a list is returned in all cases, just for consistency later. 
Each C{_ActionItem} will be created with a proper function reference and index value for execution ordering. The mapping from action name to C{_ManagedActionItem} is always 1:1. Each managed action item contains a list of peers which the action should be executed. @param managed: Whether to include managed actions in the set @param local: Whether to include local actions in the set @param extensionNames: List of valid extended action names @param functionMap: Dictionary mapping action name to Python function @param indexMap: Dictionary mapping action name to integer execution index @param preHookMap: Dictionary mapping action name to pre hooks (if any) for the action @param postHookMap: Dictionary mapping action name to post hooks (if any) for the action @param peerMap: Dictionary mapping action name to list of remote peers on which to execute the action @return: Dictionary mapping action name to list of C{_ActionItem} objects. """ actionMap = {} for name in extensionNames + VALID_ACTIONS: if name != 'all': # do this one later function = functionMap[name] index = indexMap[name] actionMap[name] = [] if local: (preHooks, postHooks) = _ActionSet._deriveHooks(name, preHookMap, postHookMap) actionMap[name].append(_ActionItem(index, name, preHooks, postHooks, function)) if managed: if name in peerMap: actionMap[name].append(_ManagedActionItem(index, name, peerMap[name])) actionMap['all'] = actionMap['collect'] + actionMap['stage'] + actionMap['store'] + actionMap['purge'] return actionMap @staticmethod def _buildPeerMap(options, peers): """ Build a mapping from action name to list of remote peers. There will be one entry in the mapping for each managed action. If there are no managed peers, the mapping will be empty. Only managed actions will be listed in the mapping. @param options: Option configuration (i.e. config.options) @param peers: Peers configuration (i.e. 
config.peers) @return: Dictionary mapping action name to list of remote peers. """ peerMap = {} if peers is not None: if peers.remotePeers is not None: for peer in peers.remotePeers: if peer.managed: remoteUser = _ActionSet._getRemoteUser(options, peer) rshCommand = _ActionSet._getRshCommand(options, peer) cbackCommand = _ActionSet._getCbackCommand(options, peer) managedActions = _ActionSet._getManagedActions(options, peer) remotePeer = RemotePeer(peer.name, None, options.workingDir, remoteUser, None, options.backupUser, rshCommand, cbackCommand) if managedActions is not None: for managedAction in managedActions: if managedAction in peerMap: if remotePeer not in peerMap[managedAction]: peerMap[managedAction].append(remotePeer) else: peerMap[managedAction] = [ remotePeer, ] return peerMap @staticmethod def _deriveHooks(action, preHookDict, postHookDict): """ Derive the pre- and post-action hooks, if any, associated with the named action. @param action: Name of action to look up @param preHookDict: Dictionary mapping action name to pre-action hooks @param postHookDict: Dictionary mapping action name to post-action hooks @return: Tuple (preHooks, postHooks) per mapping, with None values if there is no hook. """ preHooks = None postHooks = None if action in preHookDict: preHooks = preHookDict[action] if action in postHookDict: postHooks = postHookDict[action] return (preHooks, postHooks) @staticmethod def _validateActions(actions, extensionNames): """ Validate that the set of specified actions is sensible. Any specified action must either be a built-in action or must be among the extended actions defined in configuration. The actions from within L{NONCOMBINE_ACTIONS} may not be combined with other actions. @param actions: Names of actions specified on the command-line. @param extensionNames: Names of extensions specified in configuration. @raise ValueError: If one or more configured actions are not valid.
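# The peer-map logic above inverts the configuration: per-peer lists of managed
# actions become a per-action list of peers. A minimal standalone sketch, with
# plain tuples standing in for the real peer configuration and RemotePeer objects:

```python
def build_peer_map(peers):
    """peers: iterable of (name, managed, managed_actions) tuples.

    Returns {action: [peer names]}, covering only managed peers."""
    peer_map = {}
    for name, managed, managed_actions in peers:
        if managed and managed_actions is not None:
            for action in managed_actions:
                entry = peer_map.setdefault(action, [])
                if name not in entry:   # avoid duplicate peers per action
                    entry.append(name)
    return peer_map
```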
""" if actions is None or actions == []: raise ValueError("No actions specified.") for action in actions: if action not in VALID_ACTIONS and action not in extensionNames: raise ValueError("Action [%s] is not a valid action or extended action." % action) for action in NONCOMBINE_ACTIONS: if action in actions and actions != [ action, ]: raise ValueError("Action [%s] may not be combined with other actions." % action) @staticmethod def _buildActionSet(actions, actionMap): """ Build set of actions to be executed. The set of actions is built in the proper order, so C{executeActions} can spin through the set without thinking about it. Since we've already validated that the set of actions is sensible, we don't take any precautions here to make sure things are combined properly. If the action is listed, it will be "scheduled" for execution. @param actions: Names of actions specified on the command-line. @param actionMap: Dictionary mapping action name to C{_ActionItem} object. @return: Set of action items in proper order. """ actionSet = [] for action in actions: actionSet.extend(actionMap[action]) actionSet.sort() # sort the actions in order by index return actionSet def executeActions(self, configPath, options, config): """ Executes all actions and extended actions, in the proper order. Each action (whether built-in or extension) is executed in an identical manner. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information. @param configPath: Path to configuration file on disk. @param options: Command-line options to be passed to action functions. @param config: Parsed configuration to be passed to action functions. @raise Exception: If there is a problem executing the actions. 
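# Together, _validateActions and _buildActionSet amount to "reject bad names,
# flatten the requested actions, sort by execution index". A sketch with
# (index, name) tuples standing in for _ActionItem objects, which sort the same
# way; the NONCOMBINE list follows the usage text ("all", "rebuild", "validate",
# "initialize" may not be combined):

```python
NONCOMBINE = ["all", "rebuild", "validate", "initialize"]

def validate_actions(actions, valid_actions, extension_names):
    """Raise ValueError unless the requested action names are sensible."""
    if not actions:
        raise ValueError("No actions specified.")
    for action in actions:
        if action not in valid_actions and action not in extension_names:
            raise ValueError("Action [%s] is not a valid action or extended action." % action)
    for action in NONCOMBINE:
        if action in actions and actions != [action]:
            raise ValueError("Action [%s] may not be combined with other actions." % action)

def build_action_set(actions, action_map):
    """Flatten requested actions into one list, ordered by execution index."""
    action_set = []
    for action in actions:
        action_set.extend(action_map[action])
    action_set.sort()  # tuples sort by index first
    return action_set
```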
""" logger.debug("Executing local actions.") for actionItem in self.actionSet: actionItem.executeAction(configPath, options, config) @staticmethod def _getRemoteUser(options, remotePeer): """ Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: Name of remote user associated with remote peer. """ if remotePeer.remoteUser is None: return options.backupUser return remotePeer.remoteUser @staticmethod def _getRshCommand(options, remotePeer): """ Gets the RSH command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: RSH command associated with remote peer. """ if remotePeer.rshCommand is None: return options.rshCommand return remotePeer.rshCommand @staticmethod def _getCbackCommand(options, remotePeer): """ Gets the cback command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: cback command associated with remote peer. """ if remotePeer.cbackCommand is None: return options.cbackCommand return remotePeer.cbackCommand @staticmethod def _getManagedActions(options, remotePeer): """ Gets the managed actions list associated with a remote peer. Use peer's if possible, otherwise take from options section. @param options: OptionsConfig object, as from config.options @param remotePeer: Configuration-style remote peer object. @return: Set of managed actions associated with remote peer. 
""" if remotePeer.managedActions is None: return options.managedActions return remotePeer.managedActions ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback [switches] action(s)\n") fd.write("\n") fd.write(" The following switches are accepted:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -q, --quiet Run quietly (display no output to the screen)\n") fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) fd.write(" -f, --full Perform a full backup, regardless of configuration\n") fd.write(" -M, --managed Include managed clients when executing actions\n") fd.write(" -N, --managed-only Include ONLY managed clients when executing actions\n") fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. cdrecord) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width! 
fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n") fd.write("\n") fd.write(" The following actions may be specified:\n") fd.write("\n") fd.write(" all Take all normal actions (collect, stage, store, purge)\n") fd.write(" collect Take the collect action\n") fd.write(" stage Take the stage action\n") fd.write(" store Take the store action\n") fd.write(" purge Take the purge action\n") fd.write(" rebuild Rebuild \"this week's\" disc if possible\n") fd.write(" validate Validate configuration only\n") fd.write(" initialize Initialize media for use with Cedar Backup\n") fd.write("\n") fd.write(" You may also specify extended actions that have been defined in\n") fd.write(" configuration.\n") fd.write("\n") fd.write(" You must specify at least one action to take. More than one of\n") fd.write(" the \"collect\", \"stage\", \"store\" or \"purge\" actions and/or\n") fd.write(" extended actions may be specified in any arbitrary order; they\n") fd.write(" will be executed in a sensible order. The \"all\", \"rebuild\",\n") fd.write(" \"validate\", and \"initialize\" actions may not be combined with\n") fd.write(" other actions.\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. 
See the\n") fd.write("  GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write("  Use the --help option for usage information.\n") fd.write("\n") ########################## # _diagnostics() function ########################## def _diagnostics(fd=sys.stdout): """ Prints runtime diagnostics information. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write("Diagnostics:\n") fd.write("\n") Diagnostics().printDiagnostics(fd=fd, prefix="   ") fd.write("\n") ########################## # setupLogging() function ########################## def setupLogging(options): """ Set up logging based on command-line options. There are two kinds of logging: flow logging and output logging. Output logging contains information about system commands executed by Cedar Backup, for instance the calls to C{mkisofs} or C{mount}, etc. Flow logging contains error and informational messages used to understand program flow. Flow log messages and output log messages are written to two different logger targets (C{CedarBackup2.log} and C{CedarBackup2.output}). Flow log messages are written at the ERROR, INFO and DEBUG log levels, while output log messages are generally only written at the INFO log level. By default, output logging is disabled. When the C{options.output} or C{options.debug} flags are set, output logging will be written to the configured logfile. Output logging is never written to the screen. By default, flow logging is enabled at the ERROR level to the screen and at the INFO level to the configured logfile. If the C{options.quiet} flag is set, flow logging is enabled at the INFO level to the configured logfile only (i.e. no output will be sent to the screen). If the C{options.verbose} flag is set, flow logging is enabled at the INFO level to both the screen and the configured logfile.
If the C{options.debug} flag is set, flow logging is enabled at the DEBUG level to both the screen and the configured logfile. @param options: Command-line options. @type options: L{Options} object @return: Path to logfile on disk. """ logfile = _setupLogfile(options) _setupFlowLogging(logfile, options) _setupOutputLogging(logfile, options) return logfile def _setupLogfile(options): """ Sets up and creates logfile as needed. If the logfile already exists on disk, it will be left as-is, under the assumption that it was created with appropriate ownership and permissions. If the logfile does not exist on disk, it will be created as an empty file. Ownership and permissions will remain at their defaults unless user/group and/or mode are set in the options. We ignore errors setting the indicated user and group. @note: This function is vulnerable to a race condition. If the log file does not exist when the function is run, it will attempt to create the file as safely as possible (using C{O_CREAT}). If two processes attempt to create the file at the same time, then one of them will fail. In practice, this shouldn't really be a problem, but it might happen occasionally if two instances of cback run concurrently or if cback collides with logrotate or something. @param options: Command-line options. @return: Path to logfile on disk.
""" if options.logfile is None: logfile = DEFAULT_LOGFILE else: logfile = options.logfile if not os.path.exists(logfile): mode = DEFAULT_MODE if options.mode is None else options.mode orig = os.umask(0) # Per os.open(), "When computing mode, the current umask value is first masked out" try: fd = os.open(logfile, os.O_RDWR|os.O_CREAT|os.O_APPEND, mode) with os.fdopen(fd, "a+") as f: f.write("") finally: os.umask(orig) try: if options.owner is None or len(options.owner) < 2: (uid, gid) = getUidGid(DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]) else: (uid, gid) = getUidGid(options.owner[0], options.owner[1]) os.chown(logfile, uid, gid) except: pass return logfile def _setupFlowLogging(logfile, options): """ Sets up flow logging. @param logfile: Path to logfile on disk. @param options: Command-line options. """ flowLogger = logging.getLogger("CedarBackup2.log") flowLogger.setLevel(logging.DEBUG) # let the logger see all messages _setupDiskFlowLogging(flowLogger, logfile, options) _setupScreenFlowLogging(flowLogger, options) def _setupOutputLogging(logfile, options): """ Sets up command output logging. @param logfile: Path to logfile on disk. @param options: Command-line options. """ outputLogger = logging.getLogger("CedarBackup2.output") outputLogger.setLevel(logging.DEBUG) # let the logger see all messages _setupDiskOutputLogging(outputLogger, logfile, options) def _setupDiskFlowLogging(flowLogger, logfile, options): """ Sets up on-disk flow logging. @param flowLogger: Python flow logger object. @param logfile: Path to logfile on disk. @param options: Command-line options. """ formatter = logging.Formatter(fmt=DISK_LOG_FORMAT, datefmt=DATE_FORMAT) handler = logging.FileHandler(logfile, mode="a") handler.setFormatter(formatter) if options.debug: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.INFO) flowLogger.addHandler(handler) def _setupScreenFlowLogging(flowLogger, options): """ Sets up on-screen flow logging. 
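# The umask dance in _setupLogfile is the crux: os.open() masks the requested
# mode with the process umask, so the umask must be cleared (and then restored)
# to get exactly the configured permissions. A standalone sketch, using a
# hypothetical path:

```python
import os
import stat
import tempfile

def create_logfile(logfile, mode=0o640):
    """Create logfile with exactly `mode` if missing; leave an existing file alone."""
    if not os.path.exists(logfile):
        orig = os.umask(0)  # clear the umask so `mode` is applied verbatim
        try:
            fd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND, mode)
            os.close(fd)
        finally:
            os.umask(orig)  # always restore the process umask
    return logfile
```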
@param flowLogger: Python flow logger object. @param options: Command-line options. """ formatter = logging.Formatter(fmt=SCREEN_LOG_FORMAT) handler = logging.StreamHandler(SCREEN_LOG_STREAM) handler.setFormatter(formatter) if options.quiet: handler.setLevel(logging.CRITICAL) # effectively turn it off elif options.verbose: if options.debug: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.INFO) else: handler.setLevel(logging.ERROR) flowLogger.addHandler(handler) def _setupDiskOutputLogging(outputLogger, logfile, options): """ Sets up on-disk command output logging. @param outputLogger: Python command output logger object. @param logfile: Path to logfile on disk. @param options: Command-line options. """ formatter = logging.Formatter(fmt=DISK_OUTPUT_FORMAT, datefmt=DATE_FORMAT) handler = logging.FileHandler(logfile, mode="a") handler.setFormatter(formatter) if options.debug or options.output: handler.setLevel(logging.DEBUG) else: handler.setLevel(logging.CRITICAL) # effectively turn it off outputLogger.addHandler(handler) ############################### # setupPathResolver() function ############################### def setupPathResolver(config): """ Set up the path resolver singleton based on configuration. Cedar Backup's path resolver is implemented in terms of a singleton, the L{PathResolverSingleton} class. This function takes options configuration, converts it into the dictionary form needed by the singleton, and then initializes the singleton. After that, any function that needs to resolve the path of a command can use the singleton. 
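# The flow-logger setup above hangs two handlers with different thresholds off a
# single DEBUG-level logger: the screen handler defaults to ERROR while the disk
# handler accepts INFO. A runnable sketch of the same pattern, using in-memory
# streams instead of a real logfile:

```python
import io
import logging

def setup_flow_logger(name, screen_stream, disk_stream, quiet=False, verbose=False, debug=False):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)             # the handlers filter, not the logger
    screen = logging.StreamHandler(screen_stream)
    if quiet:
        screen.setLevel(logging.CRITICAL)      # effectively turn it off
    elif verbose:
        screen.setLevel(logging.DEBUG if debug else logging.INFO)
    else:
        screen.setLevel(logging.ERROR)
    disk = logging.StreamHandler(disk_stream)  # stands in for a FileHandler
    disk.setLevel(logging.DEBUG if debug else logging.INFO)
    logger.addHandler(screen)
    logger.addHandler(disk)
    return logger
```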
@param config: Configuration @type config: L{Config} object """ mapping = {} if config.options.overrides is not None: for override in config.options.overrides: mapping[override.command] = override.absolutePath singleton = PathResolverSingleton() singleton.fill(mapping) ######################################################################## # Options class definition ######################################################################## class Options(object): ###################### # Class documentation ###################### """ Class representing command-line options for the cback script. The C{Options} class is a Python object representation of the command-line options of the cback script. The object representation is two-way: a command line string or a list of command line arguments can be used to create an C{Options} object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An C{Options} object can even be created from scratch programmatically (if you have a need for that). There are two main levels of validation in the C{Options} class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to fields if you are programmatically filling an object. The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.
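# The resolver is described as a singleton filled from configuration overrides.
# A minimal sketch of that pattern (not the actual PathResolverSingleton
# implementation):

```python
class PathResolver(object):
    """Minimal singleton mapping command names to absolute paths."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = object.__new__(cls)
            cls._instance._mapping = {}
        return cls._instance

    def fill(self, mapping):
        """Replace the mapping, as from config.options.overrides."""
        self._mapping = dict(mapping)

    def lookup(self, command, default=None):
        """Resolve a command to its overridden path, or the default."""
        return self._mapping.get(command, default)
```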
All of these post-completion validations are encapsulated in the L{Options.validate} method. This method can be called at any time by a client, and will always be called immediately after creating an C{Options} object from a command line and before exporting an C{Options} object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__ """ ############## # Constructor ############## def __init__(self, argumentList=None, argumentString=None, validate=True): """ Initializes an options object. If you initialize the object without passing either C{argumentList} or C{argumentString}, the object will be empty and will be invalid until it is filled in properly. No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. The argument list is assumed to be a list of arguments, not including the name of the command, something like C{sys.argv[1:]}. If you pass C{sys.argv} instead, things are not going to work. The argument string will be parsed into an argument list by the L{util.splitCommandLine} function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to C{sys.argv[1:]}, just like C{argumentList}. Unless the C{validate} argument is C{False}, the L{Options.validate} method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in command line, so an exception might still be raised. @note: The command line format is specified by the L{_usage} function.
Call L{_usage} to see a usage statement for the cback script. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid command line arguments. @param argumentList: Command line for a program. @type argumentList: List of arguments, i.e. C{sys.argv[1:]} @param argumentString: Command line for a program. @type argumentString: String, i.e. "cback --verbose stage store" @param validate: Validate the command line after parsing it. @type validate: Boolean true/false. @raise getopt.GetoptError: If the command-line arguments could not be parsed. @raise ValueError: If the command-line arguments are invalid. """ self._help = False self._version = False self._verbose = False self._quiet = False self._config = None self._full = False self._managed = False self._managedOnly = False self._logfile = None self._owner = None self._mode = None self._output = False self._debug = False self._stacktrace = False self._diagnostics = False self._actions = None self.actions = [] # initialize to an empty list; remainder are OK if argumentList is not None and argumentString is not None: raise ValueError("Use either argumentList or argumentString, but not both.") if argumentString is not None: argumentList = splitCommandLine(argumentString) if argumentList is not None: self._parseArgumentList(argumentList) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return self.buildArgumentString(validate=False) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to.
@return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.help != other.help: if self.help < other.help: return -1 else: return 1 if self.version != other.version: if self.version < other.version: return -1 else: return 1 if self.verbose != other.verbose: if self.verbose < other.verbose: return -1 else: return 1 if self.quiet != other.quiet: if self.quiet < other.quiet: return -1 else: return 1 if self.config != other.config: if self.config < other.config: return -1 else: return 1 if self.full != other.full: if self.full < other.full: return -1 else: return 1 if self.managed != other.managed: if self.managed < other.managed: return -1 else: return 1 if self.managedOnly != other.managedOnly: if self.managedOnly < other.managedOnly: return -1 else: return 1 if self.logfile != other.logfile: if self.logfile < other.logfile: return -1 else: return 1 if self.owner != other.owner: if self.owner < other.owner: return -1 else: return 1 if self.mode != other.mode: if self.mode < other.mode: return -1 else: return 1 if self.output != other.output: if self.output < other.output: return -1 else: return 1 if self.debug != other.debug: if self.debug < other.debug: return -1 else: return 1 if self.stacktrace != other.stacktrace: if self.stacktrace < other.stacktrace: return -1 else: return 1 if self.diagnostics != other.diagnostics: if self.diagnostics < other.diagnostics: return -1 else: return 1 if self.actions != other.actions: if self.actions < other.actions: return -1 else: return 1 return 0 ############# # Properties ############# def _setHelp(self, value): """ Property target used to set the help flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._help = True else: self._help = False def _getHelp(self): """ Property target used to get the help flag. """ return self._help def _setVersion(self, value): """ Property target used to set the version flag. 
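# The long chain of field checks above is the classic Python 2 __cmp__ idiom:
# compare fields in a fixed order and return on the first difference. The same
# logic, condensed into a reusable helper (dicts standing in for the options
# object):

```python
def cmp_values(a, b):
    """Return -1/0/1, the Python 2 cmp() contract."""
    return (a > b) - (a < b)

def cmp_field_by_field(this, other, fields):
    """Compare two option dicts field by field, like Options.__cmp__."""
    if other is None:
        return 1
    for field in fields:
        result = cmp_values(this[field], other[field])
        if result != 0:
            return result
    return 0
```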
No validations, but we normalize the value to C{True} or C{False}. """ if value: self._version = True else: self._version = False def _getVersion(self): """ Property target used to get the version flag. """ return self._version def _setVerbose(self, value): """ Property target used to set the verbose flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._verbose = True else: self._verbose = False def _getVerbose(self): """ Property target used to get the verbose flag. """ return self._verbose def _setQuiet(self, value): """ Property target used to set the quiet flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._quiet = True else: self._quiet = False def _getQuiet(self): """ Property target used to get the quiet flag. """ return self._quiet def _setConfig(self, value): """ Property target used to set the config parameter. """ if value is not None: if len(value) < 1: raise ValueError("The config parameter must be a non-empty string.") self._config = value def _getConfig(self): """ Property target used to get the config parameter. """ return self._config def _setFull(self, value): """ Property target used to set the full flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._full = True else: self._full = False def _getFull(self): """ Property target used to get the full flag. """ return self._full def _setManaged(self, value): """ Property target used to set the managed flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._managed = True else: self._managed = False def _getManaged(self): """ Property target used to get the managed flag. """ return self._managed def _setManagedOnly(self, value): """ Property target used to set the managedOnly flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._managedOnly = True else: self._managedOnly = False def _getManagedOnly(self): """ Property target used to get the managedOnly flag. """ return self._managedOnly def _setLogfile(self, value): """ Property target used to set the logfile parameter. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The logfile parameter must be a non-empty string.") self._logfile = encodePath(value) def _getLogfile(self): """ Property target used to get the logfile parameter. """ return self._logfile def _setOwner(self, value): """ Property target used to set the owner parameter. If not C{None}, the owner must be a C{(user,group)} tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple. @raise ValueError: If the value is not valid. """ if value is None: self._owner = None else: if isinstance(value, str): raise ValueError("Must specify user and group tuple for owner parameter.") if len(value) != 2: raise ValueError("Must specify user and group tuple for owner parameter.") if len(value[0]) < 1 or len(value[1]) < 1: raise ValueError("User and group tuple values must be non-empty strings.") self._owner = (value[0], value[1]) def _getOwner(self): """ Property target used to get the owner parameter. The parameter is a tuple of C{(user, group)}. """ return self._owner def _setMode(self, value): """ Property target used to set the mode parameter. """ if value is None: self._mode = None else: try: if isinstance(value, str): value = int(value, 8) else: value = int(value) except (TypeError, ValueError): raise ValueError("Mode must be an octal integer >= 0, e.g. 644.") if value < 0: raise ValueError("Mode must be an octal integer >= 0, e.g. 644.") self._mode = value def _getMode(self): """ Property target used to get the mode parameter. """ return self._mode def _setOutput(self, value): """ Property target used to set the output flag.
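# The mode setter accepts either an octal string ("644") or an integer and
# normalizes to an int. A standalone sketch of that parsing; catching ValueError
# alongside TypeError (so a malformed string also produces the friendly message)
# is an assumption about the intent:

```python
def parse_mode(value):
    """Normalize a permissions mode given as an octal string or an integer."""
    if value is None:
        return None
    try:
        mode = int(value, 8) if isinstance(value, str) else int(value)
    except (TypeError, ValueError):
        raise ValueError("Mode must be an octal integer >= 0, e.g. 644.")
    if mode < 0:
        raise ValueError("Mode must be an octal integer >= 0, e.g. 644.")
    return mode
```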
No validations, but we normalize the value to C{True} or C{False}. """ if value: self._output = True else: self._output = False def _getOutput(self): """ Property target used to get the output flag. """ return self._output def _setDebug(self, value): """ Property target used to set the debug flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._debug = True else: self._debug = False def _getDebug(self): """ Property target used to get the debug flag. """ return self._debug def _setStacktrace(self, value): """ Property target used to set the stacktrace flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._stacktrace = True else: self._stacktrace = False def _getStacktrace(self): """ Property target used to get the stacktrace flag. """ return self._stacktrace def _setDiagnostics(self, value): """ Property target used to set the diagnostics flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._diagnostics = True else: self._diagnostics = False def _getDiagnostics(self): """ Property target used to get the diagnostics flag. """ return self._diagnostics def _setActions(self, value): """ Property target used to set the actions list. We don't restrict the contents of actions. They're validated somewhere else. @raise ValueError: If the value is not valid. """ if value is None: self._actions = None else: try: saved = self._actions self._actions = [] self._actions.extend(value) except Exception, e: self._actions = saved raise e def _getActions(self): """ Property target used to get the actions list. 
""" return self._actions help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") config = property(_getConfig, _setConfig, None, "Command-line configuration file (C{-c,--config}) parameter.") full = property(_getFull, _setFull, None, "Command-line full-backup (C{-f,--full}) flag.") managed = property(_getManaged, _setManaged, None, "Command-line managed (C{-M,--managed}) flag.") managedOnly = property(_getManagedOnly, _setManagedOnly, None, "Command-line managed-only (C{-N,--managed-only}) flag.") logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") actions = property(_getActions, _setActions, None, "Command-line actions list.") ################## # Utility methods ################## def validate(self): """ Validates command-line options represented by the object. Unless C{--help} or C{--version} are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality. 
@note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback script. @raise ValueError: If one of the validations fails. """ if not self.help and not self.version and not self.diagnostics: if self.actions is None or len(self.actions) == 0: raise ValueError("At least one action must be specified.") if self.managed and self.managedOnly: raise ValueError("The --managed and --managed-only options may not be combined.") def buildArgumentList(self, validate=True): """ Extracts options into a list of command line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the C{argumentList} parameter. Unlike L{buildArgumentString}, string arguments are not quoted here, because there is no need for it. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: List representation of command-line arguments. @raise ValueError: If options within the object are invalid. 
""" if validate: self.validate() argumentList = [] if self._help: argumentList.append("--help") if self.version: argumentList.append("--version") if self.verbose: argumentList.append("--verbose") if self.quiet: argumentList.append("--quiet") if self.config is not None: argumentList.append("--config") argumentList.append(self.config) if self.full: argumentList.append("--full") if self.managed: argumentList.append("--managed") if self.managedOnly: argumentList.append("--managed-only") if self.logfile is not None: argumentList.append("--logfile") argumentList.append(self.logfile) if self.owner is not None: argumentList.append("--owner") argumentList.append("%s:%s" % (self.owner[0], self.owner[1])) if self.mode is not None: argumentList.append("--mode") argumentList.append("%o" % self.mode) if self.output: argumentList.append("--output") if self.debug: argumentList.append("--debug") if self.stacktrace: argumentList.append("--stack") if self.diagnostics: argumentList.append("--diagnostics") if self.actions is not None: for action in self.actions: argumentList.append(action) return argumentList def buildArgumentString(self, validate=True): """ Extracts options into a string of command-line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes (C{"}). The resulting string will be suitable for passing back to the constructor in the C{argumentString} parameter. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted. 
@note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: String representation of command-line arguments. @raise ValueError: If options within the object are invalid. """ if validate: self.validate() argumentString = "" if self._help: argumentString += "--help " if self.version: argumentString += "--version " if self.verbose: argumentString += "--verbose " if self.quiet: argumentString += "--quiet " if self.config is not None: argumentString += "--config \"%s\" " % self.config if self.full: argumentString += "--full " if self.managed: argumentString += "--managed " if self.managedOnly: argumentString += "--managed-only " if self.logfile is not None: argumentString += "--logfile \"%s\" " % self.logfile if self.owner is not None: argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1]) if self.mode is not None: argumentString += "--mode %o " % self.mode if self.output: argumentString += "--output " if self.debug: argumentString += "--debug " if self.stacktrace: argumentString += "--stack " if self.diagnostics: argumentString += "--diagnostics " if self.actions is not None: for action in self.actions: argumentString += "\"%s\" " % action return argumentString def _parseArgumentList(self, argumentList): """ Internal method to parse a list of command-line arguments. Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the L{validate} method). For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. C{-l} and a C{--logfile}) then the long switch is used. 
If the same option is duplicated with the same switch (long or short),
then the last entry on the command line is used.

@param argumentList: List of arguments to a command.
@type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]}

@raise ValueError: If the argument list cannot be successfully parsed.
"""
switches = {}
opts, self.actions = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES)
for o, a in opts:  # push the switches into a hash
   switches[o] = a
if "-h" in switches or "--help" in switches:
   self.help = True
if "-V" in switches or "--version" in switches:
   self.version = True
if "-b" in switches or "--verbose" in switches:
   self.verbose = True
if "-q" in switches or "--quiet" in switches:
   self.quiet = True
if "-c" in switches:
   self.config = switches["-c"]
if "--config" in switches:
   self.config = switches["--config"]
if "-f" in switches or "--full" in switches:
   self.full = True
if "-M" in switches or "--managed" in switches:
   self.managed = True
if "-N" in switches or "--managed-only" in switches:
   self.managedOnly = True
if "-l" in switches:
   self.logfile = switches["-l"]
if "--logfile" in switches:
   self.logfile = switches["--logfile"]
if "-o" in switches:
   self.owner = switches["-o"].split(":", 1)
if "--owner" in switches:
   self.owner = switches["--owner"].split(":", 1)
if "-m" in switches:
   self.mode = switches["-m"]
if "--mode" in switches:
   self.mode = switches["--mode"]
if "-O" in switches or "--output" in switches:
   self.output = True
if "-d" in switches or "--debug" in switches:
   self.debug = True
if "-s" in switches or "--stack" in switches:
   self.stacktrace = True
if "-D" in switches or "--diagnostics" in switches:
   self.diagnostics = True

#########################################################################
# Main routine
########################################################################

if __name__ == "__main__":
   result = cli()
   sys.exit(result)

CedarBackup2-2.26.5/CedarBackup2/actions/store.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Implements the standard 'store' action.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Implements the standard 'store' action.
@sort: executeStore, writeImage, writeStoreIndicator, consistencyCheck
@author: Kenneth J.
Pronovici @author: Dmitry Rutsky """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import datetime import tempfile # Cedar Backup modules from CedarBackup2.filesystem import compareContents from CedarBackup2.util import isStartOfWeek from CedarBackup2.util import mount, unmount, displayBytes from CedarBackup2.actions.util import createWriter, checkMediaState, buildMediaLabel, writeIndicatorFile from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR, STORE_INDICATOR ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.store") ######################################################################## # Public functions ######################################################################## ########################## # executeStore() function ########################## def executeStore(configPath, options, config): """ Executes the store backup action. @note: The rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories. @note: When the store action is complete, we will write a store indicator to the daily staging directory we used, so it's obvious that the store action has completed. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If there are problems reading or writing files. 
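Condensed, the flow of this action can be pictured as follows (a sketch with injected stand-ins for the real writer, media-check and indicator machinery; all names here are illustrative):

```python
# Hypothetical condensed store flow: validate config, pick the staging
# directory, write the image, then mark the directory as stored.
def execute_store(config, full, find_daily_dir, write_image, write_indicator):
    if config.get("options") is None or config.get("store") is None:
        raise ValueError("Store configuration is not properly filled in.")
    staging_dirs = find_daily_dir(full)
    write_image(rebuild_media=full, staging_dirs=staging_dirs)
    write_indicator(staging_dirs)
    return staging_dirs

calls = []
dirs = execute_store(
    {"options": {}, "store": {}}, full=False,
    find_daily_dir=lambda full: {"/opt/stage/2005/02/10": "2005/02/10"},
    write_image=lambda rebuild_media, staging_dirs: calls.append("image"),
    write_indicator=lambda staging_dirs: calls.append("indicator"),
)
print(calls)  # the image is written before the store indicator
```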
""" logger.debug("Executing the 'store' action.") if sys.platform == "darwin": logger.warn("Warning: the store action is not fully supported on Mac OS X.") logger.warn("See the Cedar Backup software manual for further information.") if config.options is None or config.store is None: raise ValueError("Store configuration is not properly filled in.") if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized rebuildMedia = options.full logger.debug("Rebuild media flag [%s]", rebuildMedia) todayIsStart = isStartOfWeek(config.options.startingDay) stagingDirs = _findCorrectDailyDir(options, config) writeImageBlankSafe(config, rebuildMedia, todayIsStart, config.store.blankBehavior, stagingDirs) if config.store.checkData: if sys.platform == "darwin": logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.") logger.warn("See the Cedar Backup software manual for further information.") else: logger.debug("Running consistency check of media.") consistencyCheck(config, stagingDirs) writeStoreIndicator(config, stagingDirs) logger.info("Executed the 'store' action successfully.") ######################## # writeImage() function ######################## def writeImage(config, newDisc, stagingDirs): """ Builds and writes an ISO image containing the indicated stage directories. The generated image will contain each of the staging directories listed in C{stagingDirs}. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the disc at C{/2005/02/10}. @note: This function is implemented in terms of L{writeImageBlankSafe}. The C{newDisc} flag is passed in for both C{rebuildMedia} and C{todayIsStart}. @param config: Config object. @param newDisc: Indicates whether the disc should be re-initialized @param stagingDirs: Dictionary mapping directory path to date suffix. 
@raise ValueError: Under many generic error conditions
@raise IOError: If there is a problem writing the image to disc.
"""
writeImageBlankSafe(config, newDisc, newDisc, None, stagingDirs)


#################################
# writeImageBlankSafe() function
#################################

def writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs):
   """
   Builds and writes an ISO image containing the indicated stage directories.

   The generated image will contain each of the staging directories listed in
   C{stagingDirs}.  The directories will be placed into the image at the root
   by date, so staging directory C{/opt/stage/2005/02/10} will be placed into
   the disc at C{/2005/02/10}.  The media will always be written with a media
   label specific to Cedar Backup.

   This function is similar to L{writeImage}, but tries to implement a smarter
   blanking strategy.  First, the media is always blanked if the
   C{rebuildMedia} flag is true.  Then, if C{rebuildMedia} is false, blanking
   behavior and C{todayIsStart} come into effect::

      If no blanking behavior is specified, and it is the start of the
      week, the disc will be blanked.

      If blanking behavior is specified, and either the blank mode is
      "daily", or the blank mode is "weekly" and it is the start of the
      week, then the disc will be blanked if it looks like the weekly
      backup will not fit onto the media.

      Otherwise, the disc will not be blanked.

   How do we decide whether the weekly backup will fit onto the media?  That
   is what the blanking factor is used for.  The disc is blanked when::

      (bytes available / (1 + bytes required)) <= blankFactor

   The blanking factor will vary from setup to setup, and will probably
   require some experimentation to get it right.

   @param config: Config object.
@param rebuildMedia: Indicates whether media should be rebuilt @param todayIsStart: Indicates whether today is the starting day of the week @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: Under many generic error conditions @raise IOError: If there is a problem writing the image to disc. """ mediaLabel = buildMediaLabel() writer = createWriter(config) writer.initializeImage(True, config.options.workingDir, mediaLabel) # default value for newDisc for stageDir in stagingDirs.keys(): logger.debug("Adding stage directory [%s].", stageDir) dateSuffix = stagingDirs[stageDir] writer.addImageEntry(stageDir, dateSuffix) newDisc = _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior) writer.setImageNewDisc(newDisc) writer.writeImage() def _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior): """ Gets a value for the newDisc flag based on blanking factor rules. The blanking factor rules are described above by L{writeImageBlankSafe}. @param writer: Previously configured image writer containing image entries @param rebuildMedia: Indicates whether media should be rebuilt @param todayIsStart: Indicates whether today is the starting day of the week @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior @return: newDisc flag to be set on writer. 
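Numerically, the decision reduces to this (a sketch; in the real code the values come from the writer's capacity query and estimated image size):

```python
def needs_blank(available_bytes, required_bytes, blank_factor):
    # Blank the disc when available/(1 + required) falls at or below the
    # configured blanking factor -- i.e. when space is getting tight.
    ratio = available_bytes / (1.0 + required_bytes)
    return ratio <= blank_factor

# A nearly-full disc triggers blanking; a mostly-empty one does not.
print(needs_blank(available_bytes=100e6, required_bytes=600e6, blank_factor=1.0))  # True
print(needs_blank(available_bytes=4e9, required_bytes=600e6, blank_factor=1.0))    # False
```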
""" newDisc = False if rebuildMedia: newDisc = True logger.debug("Setting new disc flag based on rebuildMedia flag.") else: if blankBehavior is None: logger.debug("Default media blanking behavior is in effect.") if todayIsStart: newDisc = True logger.debug("Setting new disc flag based on todayIsStart.") else: # note: validation says we can assume that behavior is fully filled in if it exists at all logger.debug("Optimized media blanking behavior is in effect based on configuration.") if blankBehavior.blankMode == "daily" or (blankBehavior.blankMode == "weekly" and todayIsStart): logger.debug("New disc flag will be set based on blank factor calculation.") blankFactor = float(blankBehavior.blankFactor) logger.debug("Configured blanking factor: %.2f", blankFactor) available = writer.retrieveCapacity().bytesAvailable logger.debug("Bytes available: %s", displayBytes(available)) required = writer.getEstimatedImageSize() logger.debug("Bytes required: %s", displayBytes(required)) ratio = available / (1.0 + required) logger.debug("Calculated ratio: %.2f", ratio) newDisc = (ratio <= blankFactor) logger.debug("%.2f <= %.2f ? %s", ratio, blankFactor, newDisc) else: logger.debug("No blank factor calculation is required based on configuration.") logger.debug("New disc flag [%s].", newDisc) return newDisc ################################# # writeStoreIndicator() function ################################# def writeStoreIndicator(config, stagingDirs): """ Writes a store indicator file into staging directories. The store indicator is written into each of the staging directories when either a store or rebuild action has written the staging directory to disc. @param config: Config object. @param stagingDirs: Dictionary mapping directory path to date suffix. 
""" for stagingDir in stagingDirs.keys(): writeIndicatorFile(stagingDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ############################## # consistencyCheck() function ############################## def consistencyCheck(config, stagingDirs): """ Runs a consistency check against media in the backup device. It seems that sometimes, it's possible to create a corrupted multisession disc (i.e. one that cannot be read) although no errors were encountered while writing the disc. This consistency check makes sure that the data read from disc matches the data that was used to create the disc. The function mounts the device at a temporary mount point in the working directory, and then compares the indicated staging directories in the staging directory and on the media. The comparison is done via functionality in C{filesystem.py}. If no exceptions are thrown, there were no problems with the consistency check. A positive confirmation of "no problems" is also written to the log with C{info} priority. @warning: The implementation of this function is very UNIX-specific. @param config: Config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: If the two directories are not equivalent. @raise IOError: If there is a problem working with the media. """ logger.debug("Running consistency check.") mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) try: mount(config.store.devicePath, mountPoint, "iso9660") for stagingDir in stagingDirs.keys(): discDir = os.path.join(mountPoint, stagingDirs[stagingDir]) logger.debug("Checking [%s] vs. [%s].", stagingDir, discDir) compareContents(stagingDir, discDir, verbose=True) logger.info("Consistency check completed for [%s]. 
No problems found.", stagingDir) finally: unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done ######################################################################## # Private utility functions ######################################################################## ######################### # _findCorrectDailyDir() ######################### def _findCorrectDailyDir(options, config): """ Finds the correct daily staging directory to be written to disk. In Cedar Backup v1.0, we assumed that the correct staging directory matched the current date. However, that has problems. In particular, it breaks down if collect is on one side of midnite and stage is on the other, or if certain processes span midnite. For v2.0, I'm trying to be smarter. I'll first check the current day. If that directory is found, it's good enough. If it's not found, I'll look for a valid directory from the day before or day after I{which has not yet been staged, according to the stage indicator file}. The first one I find, I'll use. If I use a directory other than for the current day I{and} C{config.store.warnMidnite} is set, a warning will be put in the log. There is one exception to this rule. If the C{options.full} flag is set, then the special "span midnite" logic will be disabled and any existing store indicator will be ignored. I did this because I think that most users who run C{cback --full store} twice in a row expect the command to generate two identical discs. With the other rule in place, running that command twice in a row could result in an error ("no unstored directory exists") or could even cause a completely unexpected directory to be written to disc (if some previous day's contents had not yet been written). @note: This code is probably longer and more verbose than it needs to be, but at least it's straightforward. @param options: Options object. @param config: Config object. @return: Correct staging dir, as a dict mapping directory to date suffix. 
@raise IOError: If the staging directory cannot be found. """ oneDay = datetime.timedelta(days=1) today = datetime.date.today() yesterday = today - oneDay tomorrow = today + oneDay todayDate = today.strftime(DIR_TIME_FORMAT) yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT) tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT) todayPath = os.path.join(config.stage.targetDir, todayDate) yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate) tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate) todayStageInd = os.path.join(todayPath, STAGE_INDICATOR) yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR) tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR) todayStoreInd = os.path.join(todayPath, STORE_INDICATOR) yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR) tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR) if options.full: if os.path.isdir(todayPath) and os.path.exists(todayStageInd): logger.info("Store process will use current day's stage directory [%s]", todayPath) return { todayPath:todayDate } raise IOError("Unable to find staging directory to store (only tried today due to full option).") else: if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd): logger.info("Store process will use current day's stage directory [%s]", todayPath) return { todayPath:todayDate } elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd): logger.info("Store process will use previous day's stage directory [%s]", yesterdayPath) if config.store.warnMidnite: logger.warn("Warning: store process crossed midnite boundary to find data.") return { yesterdayPath:yesterdayDate } elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd): logger.info("Store process will use next day's stage directory [%s]", tomorrowPath) if config.store.warnMidnite: 
logger.warn("Warning: store process crossed midnite boundary to find data.") return { tomorrowPath:tomorrowDate } raise IOError("Unable to find unused staging directory to store (tried today, yesterday, tomorrow).") CedarBackup2-2.26.5/CedarBackup2/actions/collect.py0000664000175000017500000005336412560016766023467 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2011 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements the standard 'collect' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'collect' action. @sort: executeCollect @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import pickle # Cedar Backup modules from CedarBackup2.filesystem import BackupFileList, FilesystemList from CedarBackup2.util import isStartOfWeek, changeOwnership, displayBytes, buildNormalizedPath from CedarBackup2.actions.constants import DIGEST_EXTENSION, COLLECT_INDICATOR from CedarBackup2.actions.util import writeIndicatorFile ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.collect") ######################################################################## # Public functions ######################################################################## ############################ # executeCollect() function ############################ def executeCollect(configPath, options, config): """ Executes the collect backup action. @note: When the collect action is complete, we will write a collect indicator to the collect directory, so it's obvious that the collect action has completed. The stage process uses this indicator to decide whether a peer is ready to be staged. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise TarError: If there is a problem creating a tar file """ logger.debug("Executing the 'collect' action.") if config.options is None or config.collect is None: raise ValueError("Collect configuration is not properly filled in.") if ((config.collect.collectFiles is None or len(config.collect.collectFiles) < 1) and (config.collect.collectDirs is None or len(config.collect.collectDirs) < 1)): raise ValueError("There must be at least one collect file or collect directory.") fullBackup = options.full logger.debug("Full backup flag is [%s]", fullBackup) todayIsStart = isStartOfWeek(config.options.startingDay) resetDigest = fullBackup or todayIsStart logger.debug("Reset digest flag is [%s]", resetDigest) if config.collect.collectFiles is not None: for collectFile in config.collect.collectFiles: logger.debug("Working with collect file [%s]", collectFile.absolutePath) collectMode = _getCollectMode(config, collectFile) archiveMode = _getArchiveMode(config, collectFile) digestPath = _getDigestPath(config, collectFile.absolutePath) tarfilePath = _getTarfilePath(config, collectFile.absolutePath, archiveMode) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("File meets criteria to be backed up today.") _collectFile(config, collectFile.absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) else: logger.debug("File will not be backed up, per collect mode.") logger.info("Completed collecting file [%s]", collectFile.absolutePath) if config.collect.collectDirs is not None: for collectDir in config.collect.collectDirs: logger.debug("Working with collect directory [%s]", collectDir.absolutePath) collectMode = _getCollectMode(config, collectDir) archiveMode = _getArchiveMode(config, collectDir) ignoreFile = _getIgnoreFile(config, collectDir) linkDepth = _getLinkDepth(collectDir) dereference = _getDereference(collectDir) recursionLevel = 
_getRecursionLevel(collectDir) (excludePaths, excludePatterns) = _getExclusions(config, collectDir) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Directory meets criteria to be backed up today.") _collectDirectory(config, collectDir.absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel) else: logger.debug("Directory will not be backed up, per collect mode.") logger.info("Completed collecting directory [%s]", collectDir.absolutePath) writeIndicatorFile(config.collect.targetDir, COLLECT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the 'collect' action successfully.") ######################################################################## # Private utility functions ######################################################################## ########################## # _collectFile() function ########################## def _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath): """ Collects a configured collect file. The indicated collect file is collected into the indicated tarfile. For files that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten). The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect file itself. @param config: Config object. @param absolutePath: Absolute path of file to collect. @param tarfilePath: Path to tarfile that should be created. @param collectMode: Collect mode to use. @param archiveMode: Archive mode to use. @param resetDigest: Reset digest flag. @param digestPath: Path to digest file on disk, if needed. 
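The incremental behavior can be pictured like this (a sketch; the real implementation lives in C{BackupFileList.removeUnchanged}, and the choice of SHA-1 over in-memory contents here is an assumption for illustration):

```python
import hashlib

def remove_unchanged(contents, old_digest):
    # Drop entries whose digest matches the previous run; always capture
    # a fresh digest so it can be rewritten for the next run.
    new_digest = {p: hashlib.sha1(data).hexdigest() for p, data in contents.items()}
    changed = {p: data for p, data in contents.items() if old_digest.get(p) != new_digest[p]}
    return changed, new_digest

contents = {"/etc/a": b"one", "/etc/b": b"two"}
_, digest = remove_unchanged(contents, {})     # first run: everything is "changed"
contents["/etc/b"] = b"two, edited"
changed, _ = remove_unchanged(contents, digest)
print(sorted(changed))  # only the edited file remains in the backup list
```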
""" backupList = BackupFileList() backupList.addFile(absolutePath) _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) ############################### # _collectDirectory() function ############################### def _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel): """ Collects a configured collect directory. The indicated collect directory is collected into the indicated tarfile. For directories that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten). The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect directory itself. @param config: Config object. @param absolutePath: Absolute path of directory to collect. @param collectMode: Collect mode to use. @param archiveMode: Archive mode to use. @param ignoreFile: Ignore file to use. @param linkDepth: Link depth value to use. @param dereference: Dereference flag to use. @param resetDigest: Reset digest flag. @param excludePaths: List of absolute paths to exclude. @param excludePatterns: List of patterns to exclude. 
@param recursionLevel: Recursion level (zero for no recursion) """ if recursionLevel == 0: # Collect the actual directory because we're at recursion level 0 logger.info("Collecting directory [%s]", absolutePath) tarfilePath = _getTarfilePath(config, absolutePath, archiveMode) digestPath = _getDigestPath(config, absolutePath) backupList = BackupFileList() backupList.ignoreFile = ignoreFile backupList.excludePaths = excludePaths backupList.excludePatterns = excludePatterns backupList.addDirContents(absolutePath, linkDepth=linkDepth, dereference=dereference) _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath) else: # Find all of the immediate subdirectories subdirs = FilesystemList() subdirs.excludeFiles = True subdirs.excludeLinks = True subdirs.excludePaths = excludePaths subdirs.excludePatterns = excludePatterns subdirs.addDirContents(path=absolutePath, recursive=False, addSelf=False) # Back up the subdirectories separately for subdir in subdirs: _collectDirectory(config, subdir, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel-1) excludePaths.append(subdir) # this directory is already backed up, so exclude it # Back up everything that hasn't previously been backed up _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, 0) ############################ # _executeBackup() function ############################ def _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath): """ Execute the backup process for the indicated backup list. This function exists mainly to consolidate functionality between the L{_collectFile} and L{_collectDirectory} functions. 
Those functions build the backup list; this function causes the backup to
   execute properly and also manages usage of the digest file on disk as
   explained in their comments.

   For collect files, the digest file will always just contain the single file
   that is being backed up.  This might be a little wasteful in terms of the
   number of files that we keep around, but it's consistent and easy to
   understand.

   @param config: Config object.
   @param backupList: List to execute backup for
   @param absolutePath: Absolute path of directory or file to collect.
   @param tarfilePath: Path to tarfile that should be created.
   @param collectMode: Collect mode to use.
   @param archiveMode: Archive mode to use.
   @param resetDigest: Reset digest flag.
   @param digestPath: Path to digest file on disk, if needed.
   """
   if collectMode != 'incr':
      logger.debug("Collect mode is [%s]; no digest will be used.", collectMode)
      if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
         logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize()))
      else:
         logger.info("Backing up %d files in [%s] (%s).", len(backupList), absolutePath, displayBytes(backupList.totalSize()))
      if len(backupList) > 0:
         backupList.generateTarfile(tarfilePath, archiveMode, True)
         changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
   else:
      if resetDigest:
         logger.debug("Based on resetDigest flag, digest will be cleared.")
         oldDigest = {}
      else:
         logger.debug("Based on resetDigest flag, digest will be loaded from disk.")
         oldDigest = _loadDigest(digestPath)
      (removed, newDigest) = backupList.removeUnchanged(oldDigest, captureDigest=True)
      logger.debug("Removed %d unchanged files based on digest values.", removed)
      if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
         logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize()))
      else:
         logger.info("Backing up %d files in [%s] (%s).",
len(backupList), absolutePath, displayBytes(backupList.totalSize())) if len(backupList) > 0: backupList.generateTarfile(tarfilePath, archiveMode, True) changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup) _writeDigest(config, newDigest, digestPath) ######################### # _loadDigest() function ######################### def _loadDigest(digestPath): """ Loads the indicated digest path from disk into a dictionary. If we can't load the digest successfully (either because it doesn't exist or for some other reason), then an empty dictionary will be returned - but the condition will be logged. @param digestPath: Path to the digest file on disk. @return: Dictionary representing contents of digest path. """ if not os.path.isfile(digestPath): digest = {} logger.debug("Digest [%s] does not exist on disk.", digestPath) else: try: digest = pickle.load(open(digestPath, "r")) logger.debug("Loaded digest [%s] from disk: %d entries.", digestPath, len(digest)) except: digest = {} logger.error("Failed loading digest [%s] from disk.", digestPath) return digest ########################## # _writeDigest() function ########################## def _writeDigest(config, digest, digestPath): """ Writes the digest dictionary to the indicated digest path on disk. If we can't write the digest successfully for any reason, we'll log the condition but won't throw an exception. @param config: Config object. @param digest: Digest dictionary to write to disk. @param digestPath: Path to the digest file on disk. 
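The incremental path above persists a pickled dictionary mapping file paths to digest values, then drops unchanged files from the next backup. A minimal self-contained sketch of the same idea follows; the helper names are hypothetical, and SHA-1 via C{hashlib} stands in for whatever digest C{BackupFileList} computes internally. Note the binary file modes, which keep the pickle round-trip portable.

```python
import hashlib
import os
import pickle

def compute_digest(paths):
    """Map each file path to the SHA-1 hex digest of its contents."""
    digest = {}
    for path in paths:
        with open(path, "rb") as f:
            digest[path] = hashlib.sha1(f.read()).hexdigest()
    return digest

def changed_files(paths, digest_path):
    """Return files new or changed since the stored digest, then rewrite it."""
    try:
        with open(digest_path, "rb") as f:   # binary mode keeps pickles portable
            old = pickle.load(f)
    except (IOError, OSError, pickle.PickleError):
        old = {}                             # missing/unreadable digest: treat everything as changed
    new = compute_digest(paths)
    with open(digest_path, "wb") as f:       # rewrite the digest for the next run
        pickle.dump(new, f)
    return [p for p in paths if old.get(p) != new[p]]
```

On the first run everything is "changed" (the digest file does not exist yet), which matches C{_loadDigest} returning an empty dictionary when the file is missing or unreadable.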
""" try: pickle.dump(digest, open(digestPath, "w")) changeOwnership(digestPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new digest [%s] to disk: %d entries.", digestPath, len(digest)) except: logger.error("Failed to write digest [%s] to disk.", digestPath) ######################################################################## # Private attribute "getter" functions ######################################################################## ############################ # getCollectMode() function ############################ def _getCollectMode(config, item): """ Gets the collect mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section. @param config: Config object. @param item: C{CollectFile} or C{CollectDir} object @return: Collect mode to use. """ if item.collectMode is None: collectMode = config.collect.collectMode else: collectMode = item.collectMode logger.debug("Collect mode is [%s]", collectMode) return collectMode ############################# # _getArchiveMode() function ############################# def _getArchiveMode(config, item): """ Gets the archive mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section. @param config: Config object. @param item: C{CollectFile} or C{CollectDir} object @return: Archive mode to use. """ if item.archiveMode is None: archiveMode = config.collect.archiveMode else: archiveMode = item.archiveMode logger.debug("Archive mode is [%s]", archiveMode) return archiveMode ############################ # _getIgnoreFile() function ############################ def _getIgnoreFile(config, item): """ Gets the ignore file that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section. @param config: Config object. 
@param item: C{CollectFile} or C{CollectDir} object @return: Ignore file to use. """ if item.ignoreFile is None: ignoreFile = config.collect.ignoreFile else: ignoreFile = item.ignoreFile logger.debug("Ignore file is [%s]", ignoreFile) return ignoreFile ############################ # _getLinkDepth() function ############################ def _getLinkDepth(item): """ Gets the link depth that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero). @param item: C{CollectDir} object @return: Link depth to use. """ if item.linkDepth is None: linkDepth = 0 else: linkDepth = item.linkDepth logger.debug("Link depth is [%d]", linkDepth) return linkDepth ############################ # _getDereference() function ############################ def _getDereference(item): """ Gets the dereference flag that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of False. @param item: C{CollectDir} object @return: Dereference flag to use. """ if item.dereference is None: dereference = False else: dereference = item.dereference logger.debug("Dereference flag is [%s]", dereference) return dereference ################################ # _getRecursionLevel() function ################################ def _getRecursionLevel(item): """ Gets the recursion level that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero). @param item: C{CollectDir} object @return: Recursion level to use. """ if item.recursionLevel is None: recursionLevel = 0 else: recursionLevel = item.recursionLevel logger.debug("Recursion level is [%d]", recursionLevel) return recursionLevel ############################ # _getDigestPath() function ############################ def _getDigestPath(config, absolutePath): """ Gets the digest path associated with a collect directory or file. @param config: Config object. 
@param absolutePath: Absolute path to generate digest for @return: Absolute path to the digest associated with the collect directory or file. """ normalized = buildNormalizedPath(absolutePath) filename = "%s.%s" % (normalized, DIGEST_EXTENSION) digestPath = os.path.join(config.options.workingDir, filename) logger.debug("Digest path is [%s]", digestPath) return digestPath ############################# # _getTarfilePath() function ############################# def _getTarfilePath(config, absolutePath, archiveMode): """ Gets the tarfile path (including correct extension) associated with a collect directory. @param config: Config object. @param absolutePath: Absolute path to generate tarfile for @param archiveMode: Archive mode to use for this tarfile. @return: Absolute path to the tarfile associated with the collect directory. """ if archiveMode == 'tar': extension = "tar" elif archiveMode == 'targz': extension = "tar.gz" elif archiveMode == 'tarbz2': extension = "tar.bz2" normalized = buildNormalizedPath(absolutePath) filename = "%s.%s" % (normalized, extension) tarfilePath = os.path.join(config.collect.targetDir, filename) logger.debug("Tarfile path is [%s]", tarfilePath) return tarfilePath ############################ # _getExclusions() function ############################ def _getExclusions(config, collectDir): """ Gets exclusions (file and patterns) associated with a collect directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the collect configuration absolute exclude paths and the collect directory's absolute and relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the list of patterns from the collect configuration and from the collect directory itself. @param config: Config object. @param collectDir: Collect directory object. 
@return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if config.collect.absoluteExcludePaths is not None: paths.extend(config.collect.absoluteExcludePaths) if collectDir.absoluteExcludePaths is not None: paths.extend(collectDir.absoluteExcludePaths) if collectDir.relativeExcludePaths is not None: for relativePath in collectDir.relativeExcludePaths: paths.append(os.path.join(collectDir.absolutePath, relativePath)) patterns = [] if config.collect.excludePatterns is not None: patterns.extend(config.collect.excludePatterns) if collectDir.excludePatterns is not None: patterns.extend(collectDir.excludePatterns) logger.debug("Exclude paths: %s", paths) logger.debug("Exclude patterns: %s", patterns) return(paths, patterns) CedarBackup2-2.26.5/CedarBackup2/actions/util.py0000664000175000017500000003170612642020117022775 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements action-related utilities # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements action-related utilities @sort: findDailyDirs @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import time import tempfile import logging # Cedar Backup modules from CedarBackup2.filesystem import FilesystemList from CedarBackup2.util import changeOwnership from CedarBackup2.util import deviceMounted from CedarBackup2.writers.util import readMediaLabel from CedarBackup2.writers.cdwriter import CdWriter from CedarBackup2.writers.dvdwriter import DvdWriter from CedarBackup2.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDR_80, MEDIA_CDRW_74, MEDIA_CDRW_80 from CedarBackup2.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW from CedarBackup2.config import DEFAULT_MEDIA_TYPE, DEFAULT_DEVICE_TYPE, REWRITABLE_MEDIA_TYPES from CedarBackup2.actions.constants import INDICATOR_PATTERN ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.util") MEDIA_LABEL_PREFIX = "CEDAR BACKUP" ######################################################################## # Public utility functions ######################################################################## ########################### # findDailyDirs() function ########################### def findDailyDirs(stagingDir, indicatorFile): """ Returns a list of all daily staging directories that do not 
contain the indicated indicator file.

   @param stagingDir: Configured staging directory (config.targetDir)
   @param indicatorFile: Name of the indicator file to check for in each daily directory
   @return: List of absolute paths to daily staging directories.
   """
   results = FilesystemList()
   yearDirs = FilesystemList()
   yearDirs.excludeFiles = True
   yearDirs.excludeLinks = True
   yearDirs.addDirContents(path=stagingDir, recursive=False, addSelf=False)
   for yearDir in yearDirs:
      monthDirs = FilesystemList()
      monthDirs.excludeFiles = True
      monthDirs.excludeLinks = True
      monthDirs.addDirContents(path=yearDir, recursive=False, addSelf=False)
      for monthDir in monthDirs:
         dailyDirs = FilesystemList()
         dailyDirs.excludeFiles = True
         dailyDirs.excludeLinks = True
         dailyDirs.addDirContents(path=monthDir, recursive=False, addSelf=False)
         for dailyDir in dailyDirs:
            if os.path.exists(os.path.join(dailyDir, indicatorFile)):
               logger.debug("Skipping directory [%s]; contains %s.", dailyDir, indicatorFile)
            else:
               logger.debug("Adding [%s] to list of daily directories.", dailyDir)
               results.append(dailyDir)  # just put it in the list, no fancy operations
   return results


###########################
# createWriter() function
###########################

def createWriter(config):
   """
   Creates a writer object based on current configuration.

   This function creates and returns a writer based on configuration.  This is
   done to abstract action functionality from knowing what kind of writer is
   in use.  Since all writers implement the same interface, there's no need
   for actions to care which one they're working with.

   Currently, the C{cdwriter} and C{dvdwriter} device types are allowed.  An
   exception will be raised if any other device type is used.

   This function also checks to make sure that the device isn't mounted before
   creating a writer object for it.  Experience shows that sometimes if the
   device is mounted, we have problems with the backup.  We may as well do the
   check here first, before instantiating the writer.

   @param config: Config object.
@return: Writer that can be used to write a directory to some media.

   @raise ValueError: If there is a problem getting the writer.
   @raise IOError: If there is a problem creating the writer object.
   """
   devicePath = config.store.devicePath
   deviceScsiId = config.store.deviceScsiId
   driveSpeed = config.store.driveSpeed
   noEject = config.store.noEject
   refreshMediaDelay = config.store.refreshMediaDelay
   ejectDelay = config.store.ejectDelay
   deviceType = _getDeviceType(config)
   mediaType = _getMediaType(config)
   if deviceMounted(devicePath):
      raise IOError("Device [%s] is currently mounted." % (devicePath))
   if deviceType == "cdwriter":
      return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay)
   elif deviceType == "dvdwriter":
      return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay)
   else:
      raise ValueError("Device type [%s] is invalid." % deviceType)


################################
# writeIndicatorFile() function
################################

def writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup):
   """
   Writes an indicator file into a target directory.

   @param targetDir: Target directory in which to write indicator
   @param indicatorFile: Name of the indicator file
   @param backupUser: User that indicator file should be owned by
   @param backupGroup: Group that indicator file should be owned by

   @raise IOError: If there is a problem writing the indicator file
   """
   filename = os.path.join(targetDir, indicatorFile)
   logger.debug("Writing indicator file [%s].", filename)
   try:
      open(filename, "w").write("")
      changeOwnership(filename, backupUser, backupGroup)
   except Exception, e:
      logger.error("Error writing [%s]: %s", filename, e)
      raise


############################
# getBackupFiles() function
############################

def getBackupFiles(targetDir):
   """
   Gets a list of backup files in a target directory.

   Files that match INDICATOR_PATTERN (i.e.
C{"cback.store"}, C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored. @param targetDir: Directory to look in @return: List of backup files in the directory @raise ValueError: If the target directory does not exist """ if not os.path.isdir(targetDir): raise ValueError("Target directory [%s] is not a directory or does not exist." % targetDir) fileList = FilesystemList() fileList.excludeDirs = True fileList.excludeLinks = True fileList.excludeBasenamePatterns = INDICATOR_PATTERN fileList.addDirContents(targetDir) return fileList #################### # checkMediaState() #################### def checkMediaState(storeConfig): """ Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup. We can tell whether the media has been initialized by looking at its media label. If the media label starts with MEDIA_LABEL_PREFIX, then it has been initialized. The check varies depending on whether the media is rewritable or not. For non-rewritable media, we also accept a C{None} media label, since this kind of media cannot safely be initialized. @param storeConfig: Store configuration @raise ValueError: If media is not initialized. 
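The label check described above is easy to restate as a pure function, which makes the rewritable/non-rewritable asymmetry testable without a physical device. The function name C{media_initialized} is hypothetical; the prefix and the branching follow C{checkMediaState}.

```python
MEDIA_LABEL_PREFIX = "CEDAR BACKUP"

def media_initialized(media_label, rewritable):
    """Apply the checkMediaState rule to a label read from media.

    Returns True when the media counts as initialized; raises ValueError
    when it is recognizably uninitialized.  Non-rewritable media with no
    label at all is accepted, since such media cannot safely be initialized.
    """
    if media_label is None:
        if rewritable:
            raise ValueError("Media has not been initialized: no media label available")
        return True  # non-rewritable and unlabeled: assume OK
    if not media_label.startswith(MEDIA_LABEL_PREFIX):
        raise ValueError("Media has not been initialized: unrecognized media label [%s]" % media_label)
    return True
```

A labeled disc is only accepted when its label starts with the Cedar Backup prefix, regardless of media type; only the "no label" case depends on whether the media is rewritable.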
""" mediaLabel = readMediaLabel(storeConfig.devicePath) if storeConfig.mediaType in REWRITABLE_MEDIA_TYPES: if mediaLabel is None: raise ValueError("Media has not been initialized: no media label available") elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) else: if mediaLabel is None: logger.info("Media has no media label; assuming OK since media is not rewritable.") elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) ######################### # initializeMediaState() ######################### def initializeMediaState(config): """ Initializes state of the media in the backup device so Cedar Backup can recognize it. This is done by writing an mostly-empty image (it contains a "Cedar Backup" directory) to the media with a known media label. @note: Only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. @param config: Cedar Backup configuration @raise ValueError: If media could not be initialized. @raise ValueError: If the configured media type is not rewritable """ if not config.store.mediaType in REWRITABLE_MEDIA_TYPES: raise ValueError("Only rewritable media types can be initialized.") mediaLabel = buildMediaLabel() writer = createWriter(config) writer.refreshMedia() writer.initializeImage(True, config.options.workingDir, mediaLabel) # always create a new disc tempdir = tempfile.mkdtemp(dir=config.options.workingDir) try: writer.addImageEntry(tempdir, "CedarBackup") writer.writeImage() finally: if os.path.exists(tempdir): try: os.rmdir(tempdir) except: pass #################### # buildMediaLabel() #################### def buildMediaLabel(): """ Builds a media label to be used on Cedar Backup media. 
@return: Media label as a string. """ currentDate = time.strftime("%d-%b-%Y").upper() return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate) ######################################################################## # Private attribute "getter" functions ######################################################################## ############################ # _getDeviceType() function ############################ def _getDeviceType(config): """ Gets the device type that should be used for storing. Use the configured device type if not C{None}, otherwise use L{config.DEFAULT_DEVICE_TYPE}. @param config: Config object. @return: Device type to be used. """ if config.store.deviceType is None: deviceType = DEFAULT_DEVICE_TYPE else: deviceType = config.store.deviceType logger.debug("Device type is [%s]", deviceType) return deviceType ########################### # _getMediaType() function ########################### def _getMediaType(config): """ Gets the media type that should be used for storing. Use the configured media type if not C{None}, otherwise use C{DEFAULT_MEDIA_TYPE}. Once we figure out what configuration value to use, we return a media type value that is valid in one of the supported writers:: MEDIA_CDR_74 MEDIA_CDRW_74 MEDIA_CDR_80 MEDIA_CDRW_80 MEDIA_DVDPLUSR MEDIA_DVDPLUSRW @param config: Config object. @return: Media type to be used as a writer media type value. @raise ValueError: If the media type is not valid. 
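The C{elif} chain in C{_getMediaType} is a straight table lookup, so a dictionary captures it compactly. In this sketch the writer constants are represented by placeholder strings (the real module imports C{MEDIA_CDR_74} and friends from the writer modules), and the default shown is an assumption for illustration only.

```python
# Placeholder strings stand in for the MEDIA_* constants imported from
# the cdwriter/dvdwriter modules in the real code.
_MEDIA_TYPES = {
    "cdr-74": "MEDIA_CDR_74",
    "cdrw-74": "MEDIA_CDRW_74",
    "cdr-80": "MEDIA_CDR_80",
    "cdrw-80": "MEDIA_CDRW_80",
    "dvd+r": "MEDIA_DVDPLUSR",
    "dvd+rw": "MEDIA_DVDPLUSRW",
}

DEFAULT_MEDIA_TYPE = "cdrw-74"  # assumed default, for this sketch only

def resolve_media_type(configured):
    """Map a configured media type (or None for the default) to a writer constant."""
    media_type = DEFAULT_MEDIA_TYPE if configured is None else configured
    try:
        return _MEDIA_TYPES[media_type]
    except KeyError:
        raise ValueError("Media type [%s] is not valid." % media_type)
```

The dict form also makes the set of valid configuration values self-documenting: anything not in the table raises the same C{ValueError} the original chain produces.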
""" if config.store.mediaType is None: mediaType = DEFAULT_MEDIA_TYPE else: mediaType = config.store.mediaType if mediaType == "cdr-74": logger.debug("Media type is MEDIA_CDR_74.") return MEDIA_CDR_74 elif mediaType == "cdrw-74": logger.debug("Media type is MEDIA_CDRW_74.") return MEDIA_CDRW_74 elif mediaType == "cdr-80": logger.debug("Media type is MEDIA_CDR_80.") return MEDIA_CDR_80 elif mediaType == "cdrw-80": logger.debug("Media type is MEDIA_CDRW_80.") return MEDIA_CDRW_80 elif mediaType == "dvd+r": logger.debug("Media type is MEDIA_DVDPLUSR.") return MEDIA_DVDPLUSR elif mediaType == "dvd+rw": logger.debug("Media type is MEDIA_DVDPLUSRW.") return MEDIA_DVDPLUSRW else: raise ValueError("Media type [%s] is not valid." % mediaType) CedarBackup2-2.26.5/CedarBackup2/actions/initialize.py0000664000175000017500000000620412560016766024172 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements the standard 'initialize' action. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'initialize' action. @sort: executeInitialize @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup2.actions.util import initializeMediaState ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.initialize") ######################################################################## # Public functions ######################################################################## ############################### # executeInitialize() function ############################### def executeInitialize(configPath, options, config): """ Executes the initialize action. The initialize action initializes the media currently in the writer device so that Cedar Backup can recognize it later. This is an optional step; it's only required if checkMedia is set on the store configuration. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
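The action entry points in this package open with a guard that rejects configuration missing the sections they need; C{executeInitialize} requires both the options and store sections. A minimal standalone sketch of that guard, with a hypothetical stand-in for the real C{Config} object:

```python
class Config(object):
    """Minimal stand-in for the real Config object used by the actions."""
    def __init__(self, options=None, store=None):
        self.options = options
        self.store = store

def check_store_config(config):
    """Guard used by store-related actions: both sections must be present."""
    if config.options is None or config.store is None:
        raise ValueError("Store configuration is not properly filled in.")
```

Raising early keeps the failure close to its cause: the action never reaches the writer code with a half-filled configuration.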
""" logger.debug("Executing the 'initialize' action.") if config.options is None or config.store is None: raise ValueError("Store configuration is not properly filled in.") initializeMediaState(config) logger.info("Executed the 'initialize' action successfully.") CedarBackup2-2.26.5/CedarBackup2/actions/validate.py0000664000175000017500000002713612560016766023631 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements the standard 'validate' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'validate' action. @sort: executeValidate @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging # Cedar Backup modules from CedarBackup2.util import getUidGid, getFunctionReference from CedarBackup2.actions.util import createWriter ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.validate") ######################################################################## # Public functions ######################################################################## ############################# # executeValidate() function ############################# def executeValidate(configPath, options, config): """ Executes the validate action. This action validates each of the individual sections in the config file. This is a "runtime" validation. The config file itself is already valid in a structural sense, so what we check here that is that we can actually use the configuration without any problems. There's a separate validation function for each of the configuration sections. Each validation function returns a true/false indication for whether configuration was valid, and then logs any configuration problems it finds. This way, one pass over configuration indicates most or all of the obvious problems, rather than finding just one problem at a time. Any reported problems will be logged at the ERROR level normally, or at the INFO level if the quiet flag is enabled. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: If some configuration value is invalid. 
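C{executeValidate} accumulates each section's result with C{valid &= _validateX(...)} precisely so that one pass reports every problem. A tiny sketch (hypothetical helper name) showing why C{&=} is used rather than C{and}, which would short-circuit after the first failure:

```python
def validate_all(checks):
    """Run every check, even after a failure, and return overall validity.

    Mirrors executeValidate: `valid &= check()` still calls the remaining
    checks once one has failed (unlike `valid = valid and check()`, which
    would skip them), so a single pass logs every problem found.
    """
    valid = True
    for check in checks:
        valid &= check()
    return valid
```

Because Python's C{&} on two C{bool} values returns a C{bool}, the accumulator stays a clean True/False while every check function executes exactly once.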
""" logger.debug("Executing the 'validate' action.") if options.quiet: logfunc = logger.info # info so it goes to the log else: logfunc = logger.error # error so it goes to the screen valid = True valid &= _validateReference(config, logfunc) valid &= _validateOptions(config, logfunc) valid &= _validateCollect(config, logfunc) valid &= _validateStage(config, logfunc) valid &= _validateStore(config, logfunc) valid &= _validatePurge(config, logfunc) valid &= _validateExtensions(config, logfunc) if valid: logfunc("Configuration is valid.") else: logfunc("Configuration is not valid.") ######################################################################## # Private utility functions ######################################################################## ####################### # _checkDir() function ####################### def _checkDir(path, writable, logfunc, prefix): """ Checks that the indicated directory is OK. The path must exist, must be a directory, must be readable and executable, and must optionally be writable. @param path: Path to check. @param writable: Check that path is writable. @param logfunc: Function to use for logging errors. @param prefix: Prefix to use on logged errors. @return: True if the directory is OK, False otherwise. """ if not os.path.exists(path): logfunc("%s [%s] does not exist." % (prefix, path)) return False if not os.path.isdir(path): logfunc("%s [%s] is not a directory." % (prefix, path)) return False if not os.access(path, os.R_OK): logfunc("%s [%s] is not readable." % (prefix, path)) return False if not os.access(path, os.X_OK): logfunc("%s [%s] is not executable." % (prefix, path)) return False if writable and not os.access(path, os.W_OK): logfunc("%s [%s] is not writable." % (prefix, path)) return False return True ################################ # _validateReference() function ################################ def _validateReference(config, logfunc): """ Execute runtime validations on reference configuration. 
We only validate that reference configuration exists at all. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. """ valid = True if config.reference is None: logfunc("Required reference configuration does not exist.") valid = False return valid ############################## # _validateOptions() function ############################## def _validateOptions(config, logfunc): """ Execute runtime validations on options configuration. The following validations are enforced: - The options section must exist - The working directory must exist and must be writable - The backup user and backup group must exist @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. """ valid = True if config.options is None: logfunc("Required options configuration does not exist.") valid = False else: valid &= _checkDir(config.options.workingDir, True, logfunc, "Working directory") try: getUidGid(config.options.backupUser, config.options.backupGroup) except ValueError: logfunc("Backup user:group [%s:%s] invalid." % (config.options.backupUser, config.options.backupGroup)) valid = False return valid ############################## # _validateCollect() function ############################## def _validateCollect(config, logfunc): """ Execute runtime validations on collect configuration. The following validations are enforced: - The target directory must exist and must be writable - Each of the individual collect directories must exist and must be readable @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, false otherwise. 
""" valid = True if config.collect is not None: valid &= _checkDir(config.collect.targetDir, True, logfunc, "Collect target directory") if config.collect.collectDirs is not None: for collectDir in config.collect.collectDirs: valid &= _checkDir(collectDir.absolutePath, False, logfunc, "Collect directory") return valid ############################ # _validateStage() function ############################ def _validateStage(config, logfunc): """ Execute runtime validations on stage configuration. The following validations are enforced: - The target directory must exist and must be writable - Each local peer's collect directory must exist and must be readable @note: We currently do not validate anything having to do with remote peers, since we don't have a straightforward way of doing it. It would require adding an rsh command rather than just an rcp command to configuration, and that just doesn't seem worth it right now. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.stage is not None: valid &= _checkDir(config.stage.targetDir, True, logfunc, "Stage target dir ") if config.stage.localPeers is not None: for peer in config.stage.localPeers: valid &= _checkDir(peer.collectDir, False, logfunc, "Local peer collect dir ") return valid ############################ # _validateStore() function ############################ def _validateStore(config, logfunc): """ Execute runtime validations on store configuration. The following validations are enforced: - The source directory must exist and must be readable - The backup device (path and SCSI device) must be valid @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. 
""" valid = True if config.store is not None: valid &= _checkDir(config.store.sourceDir, False, logfunc, "Store source directory") try: createWriter(config) except ValueError: logfunc("Backup device [%s] [%s] is not valid." % (config.store.devicePath, config.store.deviceScsiId)) valid = False return valid ############################ # _validatePurge() function ############################ def _validatePurge(config, logfunc): """ Execute runtime validations on purge configuration. The following validations are enforced: - Each purge directory must exist and must be writable @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.purge is not None: if config.purge.purgeDirs is not None: for purgeDir in config.purge.purgeDirs: valid &= _checkDir(purgeDir.absolutePath, True, logfunc, "Purge directory") return valid ################################# # _validateExtensions() function ################################# def _validateExtensions(config, logfunc): """ Execute runtime validations on extensions configuration. The following validations are enforced: - Each indicated extension function must exist. @param config: Program configuration. @param logfunc: Function to use for logging errors @return: True if configuration is valid, False otherwise. """ valid = True if config.extensions is not None: if config.extensions.actions is not None: for action in config.extensions.actions: try: getFunctionReference(action.module, action.function) except ImportError: logfunc("Unable to find function [%s.%s]." % (action.module, action.function)) valid = False except ValueError: logfunc("Function [%s.%s] is not callable." 
% (action.module, action.function)) valid = False return valid CedarBackup2-2.26.5/CedarBackup2/actions/__init__.py0000664000175000017500000000326112560016766023570 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Cedar Backup actions. This package contains code related to the official Cedar Backup actions (collect, stage, store, purge, rebuild, and validate). The action modules consist mostly of "glue" code that uses other lower-level functionality to actually implement a backup. There is one module for each high-level backup action, plus a module that provides shared constants. All of the public action functions implement the Cedar Backup Extension Architecture Interface, i.e. the same interface that extensions implement. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2.actions import *' will just import the modules listed # in the __all__ variable. 
__all__ = [ 'constants', 'collect', 'initialize', 'stage', 'store', 'purge', 'util', 'rebuild', 'validate', ] CedarBackup2-2.26.5/CedarBackup2/actions/constants.py0000664000175000017500000000256112560016766024047 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides common constants used by standard actions. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides common constants used by standard actions. @sort: DIR_TIME_FORMAT, DIGEST_EXTENSION, INDICATOR_PATTERN, COLLECT_INDICATOR, STAGE_INDICATOR, STORE_INDICATOR @author: Kenneth J. Pronovici """ ######################################################################## # Module-wide constants and variables ######################################################################## DIR_TIME_FORMAT = "%Y/%m/%d" DIGEST_EXTENSION = "sha" INDICATOR_PATTERN = [ r"cback\..*", ] COLLECT_INDICATOR = "cback.collect" STAGE_INDICATOR = "cback.stage" STORE_INDICATOR = "cback.store" CedarBackup2-2.26.5/CedarBackup2/actions/stage.py0000664000175000017500000003031212560016766023131 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. 
Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements the standard 'stage' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'stage' action. @sort: executeStage @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import time import logging # Cedar Backup modules from CedarBackup2.peer import RemotePeer, LocalPeer from CedarBackup2.util import getUidGid, changeOwnership, isStartOfWeek, isRunningAsRoot from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup2.actions.util import writeIndicatorFile ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.stage") ######################################################################## # Public functions ######################################################################## ########################## # executeStage() function ########################## def executeStage(configPath, options, config): """ Executes the stage backup action. @note: The daily directory is derived once and then we stick with it, just in case a backup happens to span midnight. @note: As portions of the stage action are completed, we will write various indicator files so that it's obvious what actions have been completed. Each peer gets a stage indicator in its collect directory, and then the master gets a stage indicator in its daily staging directory. The store process uses the master's stage indicator to decide whether a directory is ready to be stored. Currently, nothing uses the indicator at each peer, and it exists for reference only. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are problems reading or writing files. """ logger.debug("Executing the 'stage' action.") if config.options is None or config.stage is None: raise ValueError("Stage configuration is not properly filled in.") dailyDir = _getDailyDir(config) localPeers = _getLocalPeers(config) remotePeers = _getRemotePeers(config) allPeers = localPeers + remotePeers stagingDirs = _createStagingDirs(config, dailyDir, allPeers) for peer in allPeers: logger.info("Staging peer [%s].", peer.name) ignoreFailures = _getIgnoreFailuresFlag(options, config, peer) if not peer.checkCollectIndicator(): if not ignoreFailures: logger.error("Peer [%s] was not ready to be staged.", peer.name) else: logger.info("Peer [%s] was not ready to be staged.", peer.name) continue logger.debug("Found collect indicator.") targetDir = stagingDirs[peer.name] if isRunningAsRoot(): # Since we're running as root, we can change ownership ownership = getUidGid(config.options.backupUser, config.options.backupGroup) logger.debug("Using target dir [%s], ownership [%d:%d].", targetDir, ownership[0], ownership[1]) else: # Non-root cannot change ownership, so don't set it ownership = None logger.debug("Using target dir [%s], ownership [None].", targetDir) try: count = peer.stagePeer(targetDir=targetDir, ownership=ownership) # note: utilize effective user's default umask logger.info("Staged %d files for peer [%s].", count, peer.name) peer.writeStageIndicator() except (ValueError, IOError, OSError), e: logger.error("Error staging [%s]: %s", peer.name, e) writeIndicatorFile(dailyDir, STAGE_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the 'stage' action successfully.") ######################################################################## # Private utility functions ######################################################################## ################################ # _createStagingDirs() function 
################################ def _createStagingDirs(config, dailyDir, peers): """ Creates staging directories as required. The main staging directory is the passed in daily directory, something like C{staging/2002/05/23}. Then, individual peers get their own directories, i.e. C{staging/2002/05/23/host}. @param config: Config object. @param dailyDir: Daily staging directory. @param peers: List of all configured peers. @return: Dictionary mapping peer name to staging directory. """ mapping = {} if os.path.isdir(dailyDir): logger.warn("Staging directory [%s] already existed.", dailyDir) else: try: logger.debug("Creating staging directory [%s].", dailyDir) os.makedirs(dailyDir) for path in [ dailyDir, os.path.join(dailyDir, ".."), os.path.join(dailyDir, "..", ".."), ]: changeOwnership(path, config.options.backupUser, config.options.backupGroup) except Exception, e: raise Exception("Unable to create staging directory: %s" % e) for peer in peers: peerDir = os.path.join(dailyDir, peer.name) mapping[peer.name] = peerDir if os.path.isdir(peerDir): logger.warn("Peer staging directory [%s] already existed.", peerDir) else: try: logger.debug("Creating peer staging directory [%s].", peerDir) os.makedirs(peerDir) changeOwnership(peerDir, config.options.backupUser, config.options.backupGroup) except Exception, e: raise Exception("Unable to create staging directory: %s" % e) return mapping ######################################################################## # Private attribute "getter" functions ######################################################################## #################################### # _getIgnoreFailuresFlag() function #################################### def _getIgnoreFailuresFlag(options, config, peer): """ Gets the ignore failures flag based on options, configuration, and peer. 
@param options: Options object @param config: Configuration object @param peer: Peer to check @return: Whether to ignore stage failures for this peer """ logger.debug("Ignore failure mode for this peer: %s", peer.ignoreFailureMode) if peer.ignoreFailureMode is None or peer.ignoreFailureMode == "none": return False elif peer.ignoreFailureMode == "all": return True else: if options.full or isStartOfWeek(config.options.startingDay): return peer.ignoreFailureMode == "weekly" else: return peer.ignoreFailureMode == "daily" ########################## # _getDailyDir() function ########################## def _getDailyDir(config): """ Gets the daily staging directory. This is just a directory in the form C{staging/YYYY/MM/DD}, i.e. C{staging/2000/10/07}, except it will be an absolute path based on C{config.stage.targetDir}. @param config: Config object @return: Path of daily staging directory. """ dailyDir = os.path.join(config.stage.targetDir, time.strftime(DIR_TIME_FORMAT)) logger.debug("Daily staging directory is [%s].", dailyDir) return dailyDir ############################ # _getLocalPeers() function ############################ def _getLocalPeers(config): """ Return a list of L{LocalPeer} objects based on configuration. @param config: Config object. @return: List of L{LocalPeer} objects. 
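The ignore-failures decision in `_getIgnoreFailuresFlag()` reduces to a small truth table over the configured mode and whether this run is the weekly backup. A stand-alone sketch of the same logic (the function and parameter names are illustrative; the real code derives the weekly flag from `options.full` and `isStartOfWeek()`):

```python
def ignore_failures(mode, is_weekly_run):
    """Decide whether a stage failure for a peer should be ignored.

    Mirrors _getIgnoreFailuresFlag(): "none" (or no mode) never ignores,
    "all" always ignores, "weekly" ignores only on the weekly backup,
    and "daily" ignores only on daily backups."""
    if mode is None or mode == "none":
        return False
    if mode == "all":
        return True
    if is_weekly_run:
        return mode == "weekly"
    return mode == "daily"
```
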
""" localPeers = [] configPeers = None if config.stage.hasPeers(): logger.debug("Using list of local peers from stage configuration.") configPeers = config.stage.localPeers elif config.peers is not None and config.peers.hasPeers(): logger.debug("Using list of local peers from peers configuration.") configPeers = config.peers.localPeers if configPeers is not None: for peer in configPeers: localPeer = LocalPeer(peer.name, peer.collectDir, peer.ignoreFailureMode) localPeers.append(localPeer) logger.debug("Found local peer: [%s]", localPeer.name) return localPeers ############################# # _getRemotePeers() function ############################# def _getRemotePeers(config): """ Return a list of L{RemotePeer} objects based on configuration. @param config: Config object. @return: List of L{RemotePeer} objects. """ remotePeers = [] configPeers = None if config.stage.hasPeers(): logger.debug("Using list of remote peers from stage configuration.") configPeers = config.stage.remotePeers elif config.peers is not None and config.peers.hasPeers(): logger.debug("Using list of remote peers from peers configuration.") configPeers = config.peers.remotePeers if configPeers is not None: for peer in configPeers: remoteUser = _getRemoteUser(config, peer) localUser = _getLocalUser(config) rcpCommand = _getRcpCommand(config, peer) remotePeer = RemotePeer(peer.name, peer.collectDir, config.options.workingDir, remoteUser, rcpCommand, localUser, ignoreFailureMode=peer.ignoreFailureMode) remotePeers.append(remotePeer) logger.debug("Found remote peer: [%s]", remotePeer.name) return remotePeers ############################ # _getRemoteUser() function ############################ def _getRemoteUser(config, remotePeer): """ Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section. @param config: Config object. @param remotePeer: Configuration-style remote peer object. @return: Name of remote user associated with remote peer. 
""" if remotePeer.remoteUser is None: return config.options.backupUser return remotePeer.remoteUser ########################### # _getLocalUser() function ########################### def _getLocalUser(config): """ Gets the remote user associated with a remote peer. @param config: Config object. @return: Name of local user that should be used """ if not isRunningAsRoot(): return None return config.options.backupUser ############################ # _getRcpCommand() function ############################ def _getRcpCommand(config, remotePeer): """ Gets the RCP command associated with a remote peer. Use peer's if possible, otherwise take from options section. @param config: Config object. @param remotePeer: Configuration-style remote peer object. @return: RCP command associated with remote peer. """ if remotePeer.rcpCommand is None: return config.options.rcpCommand return remotePeer.rcpCommand CedarBackup2-2.26.5/CedarBackup2/actions/purge.py0000664000175000017500000000701312560016766023152 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements the standard 'purge' action. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'purge' action. @sort: executePurge @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup2.filesystem import PurgeItemList ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.purge") ######################################################################## # Public functions ######################################################################## ########################## # executePurge() function ########################## def executePurge(configPath, options, config): """ Executes the purge backup action. For each configured directory, we create a purge item list, remove from the list anything that's younger than the configured retain days value, and then purge from the filesystem what's left. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions """ logger.debug("Executing the 'purge' action.") if config.options is None or config.purge is None: raise ValueError("Purge configuration is not properly filled in.") if config.purge.purgeDirs is not None: for purgeDir in config.purge.purgeDirs: purgeList = PurgeItemList() purgeList.addDirContents(purgeDir.absolutePath) # add everything within directory purgeList.removeYoungFiles(purgeDir.retainDays) # remove young files *from the list* so they won't be purged purgeList.purgeItems() # remove remaining items from the filesystem logger.info("Executed the 'purge' action successfully.") CedarBackup2-2.26.5/CedarBackup2/actions/rebuild.py0000664000175000017500000001427112560016766023462 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements the standard 'rebuild' action. 
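The purge flow above — build a list of a directory's contents, drop anything younger than the configured retain-days value from the list, then delete what remains — can be approximated without the `PurgeItemList` class. A simplified, non-recursive sketch (`purge_old_files` is an illustrative name, not Cedar Backup API; the real action recurses and also removes empty directories):

```python
import os
import time

def purge_old_files(directory, retain_days):
    """Remove regular files older than retain_days from directory and
    return the paths that were removed.  Age is judged by modification
    time against a cutoff of retain_days whole days before now."""
    cutoff = time.time() - retain_days * 24 * 60 * 60
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(path)
    return removed
```
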
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements the standard 'rebuild' action. @sort: executeRebuild @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import datetime # Cedar Backup modules from CedarBackup2.util import deriveDayOfWeek from CedarBackup2.actions.util import checkMediaState from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup2.actions.store import writeImage, writeStoreIndicator, consistencyCheck ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.actions.rebuild") ######################################################################## # Public functions ######################################################################## ############################ # executeRebuild() function ############################ def executeRebuild(configPath, options, config): """ Executes the rebuild backup action. This function exists mainly to recreate a disc that has been "trashed" due to media or hardware problems. Note that the "stage complete" indicator isn't checked for this action. Note that the rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. 
@type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If there are problems reading or writing files. """ logger.debug("Executing the 'rebuild' action.") if sys.platform == "darwin": logger.warn("Warning: the rebuild action is not fully supported on Mac OS X.") logger.warn("See the Cedar Backup software manual for further information.") if config.options is None or config.store is None: raise ValueError("Rebuild configuration is not properly filled in.") if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized stagingDirs = _findRebuildDirs(config) writeImage(config, True, stagingDirs) if config.store.checkData: if sys.platform == "darwin": logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.") logger.warn("See the Cedar Backup software manual for further information.") else: logger.debug("Running consistency check of media.") consistencyCheck(config, stagingDirs) writeStoreIndicator(config, stagingDirs) logger.info("Executed the 'rebuild' action successfully.") ######################################################################## # Private utility functions ######################################################################## ############################## # _findRebuildDirs() function ############################## def _findRebuildDirs(config): """ Finds the set of directories to be included in a disc rebuild. The rebuild action is supposed to recreate the "last week's" disc. This won't always be possible if some of the staging directories are missing. However, the general procedure is to look back into the past no further than the previous "starting day of week", and then work forward from there trying to find all of the staging directories between then and now that still exist and have a stage indicator. @param config: Config object. 
@return: Correct staging dir, as a dict mapping directory to date suffix. @raise IOError: If we do not find at least one staging directory. """ stagingDirs = {} start = deriveDayOfWeek(config.options.startingDay) today = datetime.date.today() if today.weekday() >= start: days = today.weekday() - start + 1 else: days = 7 - (start - today.weekday()) + 1 for i in range (0, days): currentDay = today - datetime.timedelta(days=i) dateSuffix = currentDay.strftime(DIR_TIME_FORMAT) stageDir = os.path.join(config.store.sourceDir, dateSuffix) indicator = os.path.join(stageDir, STAGE_INDICATOR) if os.path.isdir(stageDir) and os.path.exists(indicator): logger.info("Rebuild process will include stage directory [%s]", stageDir) stagingDirs[stageDir] = dateSuffix if len(stagingDirs) == 0: raise IOError("Unable to find any staging directories for rebuild process.") return stagingDirs CedarBackup2-2.26.5/CedarBackup2/writers/0002775000175000017500000000000012642035650021510 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/CedarBackup2/writers/util.py0000664000175000017500000006644612560016766023063 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides utilities related to image writers. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides utilities related to image writers. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_SECTORS, encodePath ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.writers.util") MKISOFS_COMMAND = [ "mkisofs", ] VOLNAME_COMMAND = [ "volname", ] ######################################################################## # Functions used to portably validate certain kinds of values ######################################################################## ############################ # validateDevice() function ############################ def validateDevice(device, unittest=False): """ Validates a configured device. The device must be an absolute path, must exist, and must be writable. The unittest flag turns off validation of the device on disk. @param device: Filesystem device path. @param unittest: Indicates whether we're unit testing. 
@return: Device as a string, for instance C{"/dev/cdrw"} @raise ValueError: If the device value is invalid. @raise ValueError: If some path cannot be encoded properly. """ if device is None: raise ValueError("Device must be filled in.") device = encodePath(device) if not os.path.isabs(device): raise ValueError("Backup device must be an absolute path.") if not unittest and not os.path.exists(device): raise ValueError("Backup device must exist on disk.") if not unittest and not os.access(device, os.W_OK): raise ValueError("Backup device is not writable by the current user.") return device ############################ # validateScsiId() function ############################ def validateScsiId(scsiId): """ Validates a SCSI id string. SCSI id must be a string in the form C{[:]scsibus,target,lun}. For Mac OS X (Darwin), we also accept the form C{IO.*Services[/N]}. @note: For consistency, if C{None} is passed in, C{None} will be returned. @param scsiId: SCSI id for the device. @return: SCSI id as a string, for instance C{"ATA:1,0,0"} @raise ValueError: If the SCSI id string is invalid. """ if scsiId is not None: pattern = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$") if not pattern.search(scsiId): pattern = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$") if not pattern.search(scsiId): raise ValueError("SCSI id is not in a valid form.") return scsiId ################################ # validateDriveSpeed() function ################################ def validateDriveSpeed(driveSpeed): """ Validates a drive speed value. Drive speed must be an integer which is >= 1. @note: For consistency, if C{None} is passed in, C{None} will be returned. @param driveSpeed: Speed at which the drive writes. @return: Drive speed as an integer @raise ValueError: If the drive speed value is invalid. 
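The two regular expressions used by `validateScsiId()` can be exercised on their own. This sketch reuses the same patterns verbatim (the helper name `is_valid_scsi_id` is illustrative; the real function returns the id or raises ValueError rather than returning a boolean):

```python
import re

def is_valid_scsi_id(scsi_id):
    """Check a SCSI id against the patterns from validateScsiId():
    either "[method:]scsibus,target,lun" or the Mac OS X (Darwin)
    "IO...Services[/N]" form.  None is accepted, since the original
    passes None through unchanged."""
    if scsi_id is None:
        return True
    triple = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$")
    darwin = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$")
    return bool(triple.search(scsi_id) or darwin.search(scsi_id))
```
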
""" if driveSpeed is None: return None try: intSpeed = int(driveSpeed) except TypeError: raise ValueError("Drive speed must be an integer >= 1.") if intSpeed < 1: raise ValueError("Drive speed must an integer >= 1.") return intSpeed ######################################################################## # General writer-related utility functions ######################################################################## ############################ # readMediaLabel() function ############################ def readMediaLabel(devicePath): """ Reads the media label (volume name) from the indicated device. The volume name is read using the C{volname} command. @param devicePath: Device path to read from @return: Media label as a string, or None if there is no name or it could not be read. """ args = [ devicePath, ] command = resolveCommand(VOLNAME_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: return None if output is None or len(output) < 1: return None return output[0].rstrip() ######################################################################## # IsoImage class definition ######################################################################## class IsoImage(object): ###################### # Class documentation ###################### """ Represents an ISO filesystem image. Summary ======= This object represents an ISO 9660 filesystem image. It is implemented in terms of the C{mkisofs} program, which has been ported to many operating systems and platforms. A "sensible subset" of the C{mkisofs} functionality is made available through the public interface, allowing callers to set a variety of basic options such as publisher id, application id, etc. as well as specify exactly which files and directories they want included in their image. 
By default, the image is created using the Rock Ridge protocol (using the C{-r} option to C{mkisofs}) because Rock Ridge discs are generally more useful on UN*X filesystems than standard ISO 9660 images. However, callers can fall back to the default C{mkisofs} functionality by setting the C{useRockRidge} instance variable to C{False}. Note, however, that this option is not well-tested. Where Files and Directories are Placed in the Image =================================================== Although this class is implemented in terms of the C{mkisofs} program, its standard "image contents" semantics are slightly different than the original C{mkisofs} semantics. The difference is that files and directories are added to the image with some additional information about their source directory kept intact. As an example, suppose you add the file C{/etc/profile} to your image and you do not configure a graft point. The file C{/profile} will be created in the image. The behavior for directories is similar. For instance, suppose that you add C{/etc/X11} to the image and do not configure a graft point. In this case, the directory C{/X11} will be created in the image, even if the original C{/etc/X11} directory is empty. I{This behavior differs from the standard C{mkisofs} behavior!} If a graft point is configured, it will be used to modify the point at which a file or directory is added into an image. Using the examples from above, let's assume you set a graft point of C{base} when adding C{/etc/profile} and C{/etc/X11} to your image. In this case, the file C{/base/profile} and the directory C{/base/X11} would be added to the image. I feel that this behavior is more consistent than the original C{mkisofs} behavior. However, to be fair, it is not quite as flexible, and some users might not like it. For this reason, the C{contentsOnly} parameter to the L{addEntry} method can be used to revert to the original behavior if desired. 
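The placement rules described above can be sketched as a small standalone helper. This is illustrative only (C{imageLocation} is a hypothetical name, and POSIX path separators are assumed); the real logic lives in L{addEntry}.

```python
import os.path

# Sketch of the documented placement rules for a directory entry: where a
# source path lands in the image, given an optional graft point.
def imageLocation(path, graftPoint=None, contentsOnly=False):
   """Returns the graft-point value stored for a directory added to the image."""
   if graftPoint is None:
      return None if contentsOnly else os.path.basename(path)
   if contentsOnly:
      return graftPoint
   return os.path.join(graftPoint, os.path.basename(path))
```

With a graft point of C{"base"}, C{/etc/X11} lands at C{/base/X11} in the image; with no graft point at all, it lands at C{/X11}.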
   @sort: __init__, addEntry, getEstimatedSize, _getEstimatedSize, writeImage,
          _buildDirEntries, _buildGeneralArgs, _buildSizeArgs, _buildWriteArgs,
          device, boundaries, graftPoint, useRockRidge, applicationId,
          biblioFile, publisherId, preparerId, volumeId
   """

   ##############
   # Constructor
   ##############

   def __init__(self, device=None, boundaries=None, graftPoint=None):
      """
      Initializes an empty ISO image object.

      Only the most commonly-used configuration items can be set using this
      constructor.  If you have a need to change the others, do so immediately
      after creating your object.

      The device and boundaries values are both required in order to write
      multisession discs.  If either is missing or C{None}, a multisession
      disc will not be written.  The boundaries tuple is in terms of ISO
      sectors, as built by an image writer class and returned in a
      L{writer.MediaCapacity} object.

      @param device: Name of the device that the image will be written to
      @type device: Either a filesystem path or a SCSI address

      @param boundaries: Session boundaries as required by C{mkisofs}
      @type boundaries: Tuple C{(last_sess_start,next_sess_start)} as returned
                        from C{cdrecord -msinfo}, or C{None}

      @param graftPoint: Default graft point for this image.
      @type graftPoint: String representing a graft point path (see L{addEntry}).
      """
      self._device = None
      self._boundaries = None
      self._graftPoint = None
      self._useRockRidge = True
      self._applicationId = None
      self._biblioFile = None
      self._publisherId = None
      self._preparerId = None
      self._volumeId = None
      self.entries = { }
      self.device = device
      self.boundaries = boundaries
      self.graftPoint = graftPoint
      self.useRockRidge = True
      self.applicationId = None
      self.biblioFile = None
      self.publisherId = None
      self.preparerId = None
      self.volumeId = None
      logger.debug("Created new ISO image object.")

   #############
   # Properties
   #############

   def _setDevice(self, value):
      """
      Property target used to set the device value.
If not C{None}, the value can be either an absolute path or a SCSI id. @raise ValueError: If the value is not valid """ try: if value is None: self._device = None else: if os.path.isabs(value): self._device = value else: self._device = validateScsiId(value) except ValueError: raise ValueError("Device must either be an absolute path or a valid SCSI id.") def _getDevice(self): """ Property target used to get the device value. """ return self._device def _setBoundaries(self, value): """ Property target used to set the boundaries tuple. If not C{None}, the value must be a tuple of two integers. @raise ValueError: If the tuple values are not integers. @raise IndexError: If the tuple does not contain enough elements. """ if value is None: self._boundaries = None else: self._boundaries = (int(value[0]), int(value[1])) def _getBoundaries(self): """ Property target used to get the boundaries value. """ return self._boundaries def _setGraftPoint(self, value): """ Property target used to set the graft point. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The graft point must be a non-empty string.") self._graftPoint = value def _getGraftPoint(self): """ Property target used to get the graft point. """ return self._graftPoint def _setUseRockRidge(self, value): """ Property target used to set the use RockRidge flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._useRockRidge = True else: self._useRockRidge = False def _getUseRockRidge(self): """ Property target used to get the use RockRidge flag. """ return self._useRockRidge def _setApplicationId(self, value): """ Property target used to set the application id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("The application id must be a non-empty string.") self._applicationId = value def _getApplicationId(self): """ Property target used to get the application id. """ return self._applicationId def _setBiblioFile(self, value): """ Property target used to set the biblio file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The biblio file must be a non-empty string.") self._biblioFile = value def _getBiblioFile(self): """ Property target used to get the biblio file. """ return self._biblioFile def _setPublisherId(self, value): """ Property target used to set the publisher id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The publisher id must be a non-empty string.") self._publisherId = value def _getPublisherId(self): """ Property target used to get the publisher id. """ return self._publisherId def _setPreparerId(self, value): """ Property target used to set the preparer id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The preparer id must be a non-empty string.") self._preparerId = value def _getPreparerId(self): """ Property target used to get the preparer id. """ return self._preparerId def _setVolumeId(self, value): """ Property target used to set the volume id. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The volume id must be a non-empty string.") self._volumeId = value def _getVolumeId(self): """ Property target used to get the volume id. 
""" return self._volumeId device = property(_getDevice, _setDevice, None, "Device that image will be written to (device path or SCSI id).") boundaries = property(_getBoundaries, _setBoundaries, None, "Session boundaries as required by C{mkisofs}.") graftPoint = property(_getGraftPoint, _setGraftPoint, None, "Default image-wide graft point (see L{addEntry} for details).") useRockRidge = property(_getUseRockRidge, _setUseRockRidge, None, "Indicates whether to use RockRidge (default is C{True}).") applicationId = property(_getApplicationId, _setApplicationId, None, "Optionally specifies the ISO header application id value.") biblioFile = property(_getBiblioFile, _setBiblioFile, None, "Optionally specifies the ISO bibliographic file name.") publisherId = property(_getPublisherId, _setPublisherId, None, "Optionally specifies the ISO header publisher id value.") preparerId = property(_getPreparerId, _setPreparerId, None, "Optionally specifies the ISO header preparer id value.") volumeId = property(_getVolumeId, _setVolumeId, None, "Optionally specifies the ISO header volume id value.") ######################### # General public methods ######################### def addEntry(self, path, graftPoint=None, override=False, contentsOnly=False): """ Adds an individual file or directory into the ISO image. The path must exist and must be a file or a directory. By default, the entry will be placed into the image at the root directory, but this behavior can be overridden using the C{graftPoint} parameter or instance variable. You can use the C{contentsOnly} behavior to revert to the "original" C{mkisofs} behavior for adding directories, which is to add only the items within the directory, and not the directory itself. @note: Things get I{odd} if you try to add a directory to an image that will be written to a multisession disc, and the same directory already exists in an earlier session on that disc. Not all of the data gets written. 
      You really wouldn't want to do this anyway, I guess.

      @note: An exception will be thrown if the path has already been added
      to the image, unless the C{override} parameter is set to C{True}.

      @note: The method's C{graftPoint} parameter overrides the object-wide
      instance variable.  If neither the method parameter nor the object-wide
      value is set, the path will be written at the image root.  The graft
      point behavior is determined by the value which is in effect I{at the
      time this method is called}, so you I{must} set the object-wide value
      before calling this method for the first time, or your image may not
      be consistent.

      @note: You I{cannot} use the local C{graftPoint} parameter to "turn
      off" an object-wide instance variable by setting it to C{None}.
      Python's default argument functionality buys us a lot, but it can't
      make this method psychic. :)

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @param override: Override an existing entry with the same path.
      @type override: Boolean true/false

      @param contentsOnly: Add directory contents only (standard C{mkisofs} behavior).
      @type contentsOnly: Boolean true/false

      @raise ValueError: If path is not a file or directory, or does not exist.
      @raise ValueError: If the path has already been added, and override is not set.
      @raise ValueError: If a path cannot be encoded properly.
""" path = encodePath(path) if not override: if path in self.entries.keys(): raise ValueError("Path has already been added to the image.") if os.path.islink(path): raise ValueError("Path must not be a link.") if os.path.isdir(path): if graftPoint is not None: if contentsOnly: self.entries[path] = graftPoint else: self.entries[path] = os.path.join(graftPoint, os.path.basename(path)) elif self.graftPoint is not None: if contentsOnly: self.entries[path] = self.graftPoint else: self.entries[path] = os.path.join(self.graftPoint, os.path.basename(path)) else: if contentsOnly: self.entries[path] = None else: self.entries[path] = os.path.basename(path) elif os.path.isfile(path): if graftPoint is not None: self.entries[path] = graftPoint elif self.graftPoint is not None: self.entries[path] = self.graftPoint else: self.entries[path] = None else: raise ValueError("Path must be a file or a directory.") def getEstimatedSize(self): """ Returns the estimated size (in bytes) of the ISO image. This is implemented via the C{-print-size} option to C{mkisofs}, so it might take a bit of time to execute. However, the result is as accurate as we can get, since it takes into account all of the ISO overhead, the true cost of directories in the structure, etc, etc. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If there are no filesystem entries in the image """ if len(self.entries.keys()) == 0: raise ValueError("Image does not contain any entries.") return self._getEstimatedSize(self.entries) def _getEstimatedSize(self, entries): """ Returns the estimated size (in bytes) for the passed-in entries dictionary. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. 
""" args = self._buildSizeArgs(entries) command = resolveCommand(MKISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error (%d) executing mkisofs command to estimate size." % result) if len(output) != 1: raise IOError("Unable to parse mkisofs output.") try: sectors = float(output[0]) size = convertSize(sectors, UNIT_SECTORS, UNIT_BYTES) return size except: raise IOError("Unable to parse mkisofs output.") def writeImage(self, imagePath): """ Writes this image to disk using the image path. @param imagePath: Path to write image out as @type imagePath: String representing a path on disk @raise IOError: If there is an error writing the image to disk. @raise ValueError: If there are no filesystem entries in the image @raise ValueError: If a path cannot be encoded properly. """ imagePath = encodePath(imagePath) if len(self.entries.keys()) == 0: raise ValueError("Image does not contain any entries.") args = self._buildWriteArgs(self.entries, imagePath) command = resolveCommand(MKISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=False) if result != 0: raise IOError("Error (%d) executing mkisofs command to build image." % result) ######################################### # Methods used to build mkisofs commands ######################################### @staticmethod def _buildDirEntries(entries): """ Uses an entries dictionary to build a list of directory locations for use by C{mkisofs}. We build a list of entries that can be passed to C{mkisofs}. Each entry is either raw (if no graft point was configured) or in graft-point form as described above (if a graft point was configured). The dictionary keys are the path names, and the values are the graft points, if any. @param entries: Dictionary of image entries (i.e. 
self.entries) @return: List of directory locations for use by C{mkisofs} """ dirEntries = [] for key in entries.keys(): if entries[key] is None: dirEntries.append(key) else: dirEntries.append("%s/=%s" % (entries[key].strip("/"), key)) return dirEntries def _buildGeneralArgs(self): """ Builds a list of general arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] if self.applicationId is not None: args.append("-A") args.append(self.applicationId) if self.biblioFile is not None: args.append("-biblio") args.append(self.biblioFile) if self.publisherId is not None: args.append("-publisher") args.append(self.publisherId) if self.preparerId is not None: args.append("-p") args.append(self.preparerId) if self.volumeId is not None: args.append("-V") args.append(self.volumeId) return args def _buildSizeArgs(self, entries): """ Builds a list of arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. The command will be built to just return size output (a simple count of sectors via the C{-print-size} option), rather than an image file on disk. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @param entries: Dictionary of image entries (i.e. self.entries) @return: List suitable for passing to L{util.executeCommand} as C{args}. 
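The per-entry graft-point form built by C{_buildDirEntries()} can be illustrated with a standalone sketch (C{dirEntry} is a hypothetical name used only for this example):

```python
# Sketch of the formatting _buildDirEntries() performs for each entry: the
# raw path when no graft point is set, otherwise mkisofs "graft/=/source" form.
def dirEntry(path, graftPoint):
   if graftPoint is None:
      return path
   return "%s/=%s" % (graftPoint.strip("/"), path)
```

Stripping the slashes from the graft point keeps the C{mkisofs} argument in a consistent C{graft/=/source} shape regardless of how the caller wrote the graft point.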
""" args = self._buildGeneralArgs() args.append("-print-size") args.append("-graft-points") if self.useRockRidge: args.append("-r") if self.device is not None and self.boundaries is not None: args.append("-C") args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) args.append("-M") args.append(self.device) args.extend(self._buildDirEntries(entries)) return args def _buildWriteArgs(self, entries, imagePath): """ Builds a list of arguments to be passed to a C{mkisofs} command. The various instance variables (C{applicationId}, etc.) are filled into the list of arguments if they are set. The command will be built to write an image to disk. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested. @param entries: Dictionary of image entries (i.e. self.entries) @param imagePath: Path to write image out as @type imagePath: String representing a path on disk @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = self._buildGeneralArgs() args.append("-graft-points") if self.useRockRidge: args.append("-r") args.append("-o") args.append(imagePath) if self.device is not None and self.boundaries is not None: args.append("-C") args.append("%d,%d" % (self.boundaries[0], self.boundaries[1])) args.append("-M") args.append(self.device) args.extend(self._buildDirEntries(entries)) return args CedarBackup2-2.26.5/CedarBackup2/writers/cdwriter.py0000664000175000017500000015160212642023620023702 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides functionality related to CD writer devices. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides functionality related to CD writer devices. @sort: MediaDefinition, MediaCapacity, CdWriter, MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 @var MEDIA_CDRW_74: Constant representing 74-minute CD-RW media. @var MEDIA_CDR_74: Constant representing 74-minute CD-R media. @var MEDIA_CDRW_80: Constant representing 80-minute CD-RW media. @var MEDIA_CDR_80: Constant representing 80-minute CD-R media. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging import tempfile import time # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import convertSize, displayBytes, encodePath from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES from CedarBackup2.writers.util import validateDevice, validateScsiId, validateDriveSpeed from CedarBackup2.writers.util import IsoImage ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.writers.cdwriter") MEDIA_CDRW_74 = 1 MEDIA_CDR_74 = 2 MEDIA_CDRW_80 = 3 MEDIA_CDR_80 = 4 CDRECORD_COMMAND = [ "cdrecord", ] EJECT_COMMAND = [ "eject", ] MKISOFS_COMMAND = [ "mkisofs", ] ######################################################################## # MediaDefinition class definition ######################################################################## class MediaDefinition(object): """ Class encapsulating information about CD media definitions. The following media types are accepted: - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) Note that all of the capacities associated with a media definition are in terms of ISO sectors (C{util.ISO_SECTOR_SIZE)}. @sort: __init__, mediaType, rewritable, initialLeadIn, leadIn, capacity """ def __init__(self, mediaType): """ Creates a media definition for the indicated media type. @param mediaType: Type of the media, as discussed above. 
      @raise ValueError: If the media type is unknown or unsupported.
      """
      self._mediaType = None
      self._rewritable = False
      self._initialLeadIn = 0.0
      self._leadIn = 0.0
      self._capacity = 0.0
      self._setValues(mediaType)

   def _setValues(self, mediaType):
      """
      Sets values based on media type.
      @param mediaType: Type of the media, as discussed above.
      @raise ValueError: If the media type is unknown or unsupported.
      """
      if mediaType not in [MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80]:
         raise ValueError("Invalid media type %d." % mediaType)
      self._mediaType = mediaType
      self._initialLeadIn = 11400.0  # per cdrecord's documentation
      self._leadIn = 6900.0  # per cdrecord's documentation
      if self._mediaType == MEDIA_CDR_74:
         self._rewritable = False
         self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS)
      elif self._mediaType == MEDIA_CDRW_74:
         self._rewritable = True
         self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS)
      elif self._mediaType == MEDIA_CDR_80:
         self._rewritable = False
         self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS)
      elif self._mediaType == MEDIA_CDRW_80:
         self._rewritable = True
         self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS)

   def _getMediaType(self):
      """
      Property target used to get the media type value.
      """
      return self._mediaType

   def _getRewritable(self):
      """
      Property target used to get the rewritable flag value.
      """
      return self._rewritable

   def _getInitialLeadIn(self):
      """
      Property target used to get the initial lead-in value.
      """
      return self._initialLeadIn

   def _getLeadIn(self):
      """
      Property target used to get the lead-in value.
      """
      return self._leadIn

   def _getCapacity(self):
      """
      Property target used to get the capacity value.
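The capacity figures used above reduce to simple arithmetic, assuming the standard 2048-byte ISO-9660 data sector (the C{util.ISO_SECTOR_SIZE} constant). This hedged sketch mirrors what C{convertSize(x, UNIT_MBYTES, UNIT_SECTORS)} is assumed to compute:

```python
ISO_SECTOR_SIZE = 2048.0  # assumed bytes per ISO-9660 data sector

def mbytesToSectors(mbytes):
   """Converts a capacity in megabytes to ISO sectors."""
   return (mbytes * 1024.0 * 1024.0) / ISO_SECTOR_SIZE
```

So 650 MB media corresponds to 332800 sectors and 700 MB media to 358400 sectors, before any required lead-in is accounted for.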
""" return self._capacity mediaType = property(_getMediaType, None, None, doc="Configured media type.") rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") initialLeadIn = property(_getInitialLeadIn, None, None, doc="Initial lead-in required for first image written to media.") leadIn = property(_getLeadIn, None, None, doc="Lead-in required on successive images written to media.") capacity = property(_getCapacity, None, None, doc="Total capacity of the media before any required lead-in.") ######################################################################## # MediaCapacity class definition ######################################################################## class MediaCapacity(object): """ Class encapsulating information about CD media capacity. Space used includes the required media lead-in (unless the disk is unused). Space available attempts to provide a picture of how many bytes are available for data storage, including any required lead-in. The boundaries value is either C{None} (if multisession discs are not supported or if the disc has no boundaries) or in exactly the form provided by C{cdrecord -msinfo}. It can be passed as-is to the C{IsoImage} class. @sort: __init__, bytesUsed, bytesAvailable, boundaries, totalCapacity, utilized """ def __init__(self, bytesUsed, bytesAvailable, boundaries): """ Initializes a capacity object. @raise IndexError: If the boundaries tuple does not have enough elements. @raise ValueError: If the boundaries values are not integers. @raise ValueError: If the bytes used and available values are not floats. """ self._bytesUsed = float(bytesUsed) self._bytesAvailable = float(bytesAvailable) if boundaries is None: self._boundaries = None else: self._boundaries = (int(boundaries[0]), int(boundaries[1])) def __str__(self): """ Informal string representation for class instance. 
""" return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized) def _getBytesUsed(self): """ Property target to get the bytes-used value. """ return self._bytesUsed def _getBytesAvailable(self): """ Property target to get the bytes-available value. """ return self._bytesAvailable def _getBoundaries(self): """ Property target to get the boundaries tuple. """ return self._boundaries def _getTotalCapacity(self): """ Property target to get the total capacity (used + available). """ return self.bytesUsed + self.bytesAvailable def _getUtilized(self): """ Property target to get the percent of capacity which is utilized. """ if self.bytesAvailable <= 0.0: return 100.0 elif self.bytesUsed <= 0.0: return 0.0 return (self.bytesUsed / self.totalCapacity) * 100.0 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") boundaries = property(_getBoundaries, None, None, doc="Session disc boundaries, in terms of ISO sectors.") totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") utilized = property(_getUtilized, None, None, "Percentage of the total capacity which is utilized.") ######################################################################## # _ImageProperties class definition ######################################################################## class _ImageProperties(object): """ Simple value object to hold image properties for C{DvdWriter}. 
""" def __init__(self): self.newDisc = False self.tmpdir = None self.mediaLabel = None self.entries = None # dict mapping path to graft point ######################################################################## # CdWriter class definition ######################################################################## class CdWriter(object): ###################### # Class documentation ###################### """ Class representing a device that knows how to write CD media. Summary ======= This is a class representing a device that knows how to write CD media. It provides common operations for the device, such as ejecting the media, writing an ISO image to the media, or checking for the current media capacity. It also provides a place to store device attributes, such as whether the device supports writing multisession discs, etc. This class is implemented in terms of the C{eject} and C{cdrecord} programs, both of which should be available on most UN*X platforms. Image Writer Interface ====================== The following methods make up the "image writer" interface shared with other kinds of writers (such as DVD writers):: __init__ initializeImage() addImageEntry() writeImage() setImageNewDisc() retrieveCapacity() getEstimatedImageSize() Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer. The media attribute is also assumed to be available. Media Types =========== This class knows how to write to two different kinds of media, represented by the following constants: - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) Most hardware can read and write both 74-minute and 80-minute CD-R and CD-RW media. Some older drives may only be able to write CD-R media. 
The difference between the two is that CD-RW media can be rewritten (erased), while CD-R media cannot be. I do not support any other configurations for a couple of reasons. The first is that I've never tested any other kind of media. The second is that anything other than 74 or 80 minute is apparently non-standard. Device Attributes vs. Media Attributes ====================================== A given writer instance has two different kinds of attributes associated with it, which I call device attributes and media attributes. Device attributes are things which can be determined without looking at the media, such as whether the drive supports writing multisession disks or has a tray. Media attributes are attributes which vary depending on the state of the media, such as the remaining capacity on a disc. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls. Talking to Hardware =================== This class needs to talk to CD writer hardware in two different ways: through cdrecord to actually write to the media, and through the filesystem to do things like open and close the tray. Historically, CdWriter has interacted with cdrecord using the scsiId attribute, and with most other utilities using the device attribute. This changed somewhat in Cedar Backup 2.9.0. When Cedar Backup was first written, the only way to interact with cdrecord was by using a SCSI device id. IDE devices were mapped to pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" arrived, and it became common to see C{ATA:1,0,0} or C{ATAPI:0,0,0} as a way to address IDE hardware. By late 2006, C{ATA} and C{ATAPI} had apparently been deprecated in favor of just addressing the IDE device directly by name, i.e. C{/dev/cdrw}. Because of this latest development, it no longer makes sense to require a CdWriter to be created with a SCSI id -- there might not be one. 
So, the passed-in SCSI id is now optional. Also, there is now a hardwareId attribute. This attribute is filled in with either the SCSI id (if provided) or the device (otherwise). The hardware id is the value that will be passed to cdrecord in the C{dev=} argument. Testing ======= It's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, much of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all. @sort: __init__, isRewritable, _retrieveProperties, retrieveCapacity, _getBoundaries, _calculateCapacity, openTray, closeTray, refreshMedia, writeImage, _blankMedia, _parsePropertiesOutput, _parseBoundariesOutput, _buildOpenTrayArgs, _buildCloseTrayArgs, _buildPropertiesArgs, _buildBoundariesArgs, _buildBlankArgs, _buildWriteArgs, device, scsiId, hardwareId, driveSpeed, media, deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject, initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize """ ############## # Constructor ############## def __init__(self, device, scsiId=None, driveSpeed=None, mediaType=MEDIA_CDRW_74, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False): """ Initializes a CD writer object. 
The current user must have write access to the device at the time the object is instantiated, or an exception will be thrown. However, no media-related validation is done, and in fact there is no need for any media to be in the drive until one of the other media attribute-related methods is called. The various instance variables such as C{deviceType}, C{deviceVendor}, etc. might be C{None}, if we're unable to parse this specific information from the C{cdrecord} output. This information is just for reference. The SCSI id is optional, but the device path is required. If the SCSI id is passed in, then the hardware id attribute will be taken from the SCSI id. Otherwise, the hardware id will be taken from the device. If cdrecord improperly detects whether your writer device has a tray and can be safely opened and closed, then pass in C{noEject=True}. This will override the properties and the device will never be ejected. @note: The C{unittest} parameter should never be set to C{True} outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose. @param device: Filesystem device associated with this writer. @type device: Absolute path to a filesystem device, i.e. C{/dev/cdrw} @param scsiId: SCSI id for the device (optional). @type scsiId: If provided, SCSI id in the form C{[:]scsibus,target,lun} @param driveSpeed: Speed at which the drive writes. @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. @param mediaType: Type of the media that is assumed to be in the drive. @type mediaType: One of the valid media types as discussed above. @param noEject: Overrides properties to indicate that the device does not support eject. 
@type noEject: Boolean true/false @param refreshMediaDelay: Refresh media delay to use, if any @type refreshMediaDelay: Number of seconds, an integer >= 0 @param ejectDelay: Eject delay to use, if any @type ejectDelay: Number of seconds, an integer >= 0 @param unittest: Turns off certain validations, for use in unit testing. @type unittest: Boolean true/false @raise ValueError: If the device is not valid for some reason. @raise ValueError: If the SCSI id is not in a valid form. @raise ValueError: If the drive speed is not an integer >= 1. @raise IOError: If device properties could not be read for some reason. """ self._image = None # optionally filled in by initializeImage() self._device = validateDevice(device, unittest) self._scsiId = validateScsiId(scsiId) self._driveSpeed = validateDriveSpeed(driveSpeed) self._media = MediaDefinition(mediaType) self._noEject = noEject self._refreshMediaDelay = refreshMediaDelay self._ejectDelay = ejectDelay if not unittest: (self._deviceType, self._deviceVendor, self._deviceId, self._deviceBufferSize, self._deviceSupportsMulti, self._deviceHasTray, self._deviceCanEject) = self._retrieveProperties() ############# # Properties ############# def _getDevice(self): """ Property target used to get the device value. """ return self._device def _getScsiId(self): """ Property target used to get the SCSI id value. """ return self._scsiId def _getHardwareId(self): """ Property target used to get the hardware id value. """ if self._scsiId is None: return self._device return self._scsiId def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _getMedia(self): """ Property target used to get the media description. """ return self._media def _getDeviceType(self): """ Property target used to get the device type. """ return self._deviceType def _getDeviceVendor(self): """ Property target used to get the device vendor. 
""" return self._deviceVendor def _getDeviceId(self): """ Property target used to get the device id. """ return self._deviceId def _getDeviceBufferSize(self): """ Property target used to get the device buffer size. """ return self._deviceBufferSize def _getDeviceSupportsMulti(self): """ Property target used to get the device-support-multi flag. """ return self._deviceSupportsMulti def _getDeviceHasTray(self): """ Property target used to get the device-has-tray flag. """ return self._deviceHasTray def _getDeviceCanEject(self): """ Property target used to get the device-can-eject flag. """ return self._deviceCanEject def _getRefreshMediaDelay(self): """ Property target used to get the configured refresh media delay, in seconds. """ return self._refreshMediaDelay def _getEjectDelay(self): """ Property target used to get the configured eject delay, in seconds. """ return self._ejectDelay device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[:]scsibus,target,lun}.") hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.") driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") deviceType = property(_getDeviceType, None, None, doc="Type of the device, as returned from C{cdrecord -prcap}.") deviceVendor = property(_getDeviceVendor, None, None, doc="Vendor of the device, as returned from C{cdrecord -prcap}.") deviceId = property(_getDeviceId, None, None, doc="Device identification, as returned from C{cdrecord -prcap}.") deviceBufferSize = property(_getDeviceBufferSize, None, None, doc="Size of the device's write buffer, in bytes.") deviceSupportsMulti = property(_getDeviceSupportsMulti, None, None, doc="Indicates whether device supports multisession discs.") 
deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") ################################################# # Methods related to device and media attributes ################################################# def isRewritable(self): """Indicates whether the media is rewritable per configuration.""" return self._media.rewritable def _retrieveProperties(self): """ Retrieves properties for a device from C{cdrecord}. The results are returned as a tuple of the object device attributes as returned from L{_parsePropertiesOutput}: C{(deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)}. @return: Results tuple as described above. @raise IOError: If there is a problem talking to the device. """ args = CdWriter._buildPropertiesArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error (%d) executing cdrecord command to get properties." % result) return CdWriter._parsePropertiesOutput(output) def retrieveCapacity(self, entireDisc=False, useMulti=True): """ Retrieves capacity for the current media in terms of a C{MediaCapacity} object. If C{entireDisc} is passed in as C{True} the capacity will be for the entire disc, as if it were to be rewritten from scratch. If the drive does not support writing multisession discs or if C{useMulti} is passed in as C{False}, the capacity will also be as if the disc were to be rewritten from scratch, but the indicated boundaries value will be C{None}. 
The same will happen if the disc cannot be read for some reason. Otherwise, the capacity (including the boundaries) will represent whatever space remains on the disc to be filled by future sessions. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @param useMulti: Indicates whether a multisession disc should be assumed, if possible. @type useMulti: Boolean true/false @return: C{MediaCapacity} object describing the capacity of the media. @raise IOError: If the media could not be read for some reason. """ boundaries = self._getBoundaries(entireDisc, useMulti) return CdWriter._calculateCapacity(self._media, boundaries) def _getBoundaries(self, entireDisc=False, useMulti=True): """ Gets the ISO boundaries for the media. If C{entireDisc} is passed in as C{True} the boundaries will be C{None}, as if the disc were to be rewritten from scratch. If the drive does not support writing multisession discs, the returned value will be C{None}. The same will happen if the disc can't be read for some reason. Otherwise, the returned value will represent the boundaries of the disc's current contents. The results are returned as a tuple of (lower, upper) as needed by the C{IsoImage} class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @param useMulti: Indicates whether a multisession disc should be assumed, if possible. @type useMulti: Boolean true/false @return: Boundaries tuple or C{None}, as described above. @raise IOError: If the media could not be read for some reason. 
""" if not self._deviceSupportsMulti: logger.debug("Device does not support multisession discs; returning boundaries None.") return None elif not useMulti: logger.debug("Use multisession flag is False; returning boundaries None.") return None elif entireDisc: logger.debug("Entire disc flag is True; returning boundaries None.") return None else: args = CdWriter._buildBoundariesArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: logger.debug("Error (%d) executing cdrecord command to get capacity.", result) logger.warn("Unable to read disc (might not be initialized); returning boundaries of None.") return None boundaries = CdWriter._parseBoundariesOutput(output) if boundaries is None: logger.debug("Returning disc boundaries: None") else: logger.debug("Returning disc boundaries: (%d, %d)", boundaries[0], boundaries[1]) return boundaries @staticmethod def _calculateCapacity(media, boundaries): """ Calculates capacity for the media in terms of boundaries. If C{boundaries} is C{None} or the lower bound is 0 (zero), then the capacity will be for the entire disc minus the initial lead in. Otherwise, capacity will be as if the caller wanted to add an additional session to the end of the existing data on the disc. @param media: MediaDescription object describing the media capacity. @param boundaries: Session boundaries as returned from L{_getBoundaries}. @return: C{MediaCapacity} object describing the capacity of the media. 
""" if boundaries is None or boundaries[1] == 0: logger.debug("Capacity calculations are based on a complete disc rewrite.") sectorsAvailable = media.capacity - media.initialLeadIn if sectorsAvailable < 0: sectorsAvailable = 0.0 bytesUsed = 0.0 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) else: logger.debug("Capacity calculations are based on a new ISO session.") sectorsAvailable = media.capacity - boundaries[1] - media.leadIn if sectorsAvailable < 0: sectorsAvailable = 0.0 bytesUsed = convertSize(boundaries[1], UNIT_SECTORS, UNIT_BYTES) bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) logger.debug("Used [%s], available [%s].", displayBytes(bytesUsed), displayBytes(bytesAvailable)) return MediaCapacity(bytesUsed, bytesAvailable, boundaries) ####################################################### # Methods used for working with the internal ISO image ####################################################### def initializeImage(self, newDisc, tmpdir, mediaLabel=None): """ Initializes the writer's associated ISO image. This method initializes the C{image} instance variable so that the caller can use the C{addImageEntry} method. Once entries have been added, the C{writeImage} method can be called with no arguments. @param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false. @param tmpdir: Temporary directory to use if needed @type tmpdir: String representing a directory path on disk @param mediaLabel: Media label to be applied to the image, if any @type mediaLabel: String, no more than 25 characters long """ self._image = _ImageProperties() self._image.newDisc = newDisc self._image.tmpdir = encodePath(tmpdir) self._image.mediaLabel = mediaLabel self._image.entries = {} # mapping from path to graft point (if any) def addImageEntry(self, path, graftPoint): """ Adds a filepath entry to the writer's associated ISO image. 
The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass C{None}. @note: Before calling this method, you must call L{initializeImage}. @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") if not os.path.exists(path): raise ValueError("Path [%s] does not exist." % path) self._image.entries[path] = graftPoint def setImageNewDisc(self, newDisc): """ Resets (overrides) the newDisc flag on the internal image. @param newDisc: New disc flag to set @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") self._image.newDisc = newDisc def getEstimatedImageSize(self): """ Gets the estimated size of the image associated with the writer. @return: Estimated size of the image, in bytes. @raise IOError: If there is a problem calling C{mkisofs}. @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") image = IsoImage() for path in self._image.entries.keys(): image.addEntry(path, self._image.entries[path], override=False, contentsOnly=True) return image.getEstimatedSize() ###################################### # Methods which expose device actions ###################################### def openTray(self): """ Opens the device's tray and leaves it open. This only works if the device has a tray and supports ejecting its media. 
We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. If the writer was constructed with C{noEject=True}, then this is a no-op. Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag. Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy. @raise IOError: If there is an error talking to the device. """ if not self._noEject: if self._deviceHasTray and self._deviceCanEject: args = CdWriter._buildOpenTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") self.unlockTray() result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) logger.debug("Kludge was apparently successful.") if self.ejectDelay is not None: logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay) time.sleep(self.ejectDelay) def unlockTray(self): """ Unlocks the device's tray. @raise IOError: If there is an error talking to the device. 
""" args = CdWriter._buildUnlockTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to unlock tray." % result) def closeTray(self): """ Closes the device's tray. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. If the writer was constructed with C{noEject=True}, then this is a no-op. @raise IOError: If there is an error talking to the device. """ if not self._noEject: if self._deviceHasTray and self._deviceCanEject: args = CdWriter._buildCloseTrayArgs(self._device) command = resolveCommand(EJECT_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to close tray." % result) def refreshMedia(self): """ Opens and then immediately closes the device's tray, to refresh the device's idea of the media. Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.) This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though. @raise IOError: If there is an error talking to the device. """ self.openTray() self.closeTray() self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! 
if self.refreshMediaDelay is not None: logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay) time.sleep(self.refreshMediaDelay) logger.debug("Media refresh complete; hopefully media state is stable now.") def writeImage(self, imagePath=None, newDisc=False, writeMulti=True): """ Writes an ISO image to the media in the device. If C{newDisc} is passed in as C{True}, we assume that the entire disc will be overwritten, and the media will be blanked before writing it if possible (i.e. if the media is rewritable). If C{writeMulti} is passed in as C{True}, then a multisession disc will be written if possible (i.e. if the drive supports writing multisession discs). If C{imagePath} is passed in as C{None}, then the existing image configured with C{initializeImage} will be used. Under these circumstances, the passed-in C{newDisc} flag will be ignored. By default, we assume that the disc can be written multisession and that we should append to the current contents of the disc. In any case, the ISO image must be generated appropriately (i.e. must take into account any existing session boundaries, etc.) @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image @type imagePath: String representing a path on disk @param newDisc: Indicates whether the entire disc will be overwritten. @type newDisc: Boolean true/false. @param writeMulti: Indicates whether a multisession disc should be written, if possible. @type writeMulti: Boolean true/false @raise ValueError: If the image path is not absolute. @raise ValueError: If some path cannot be encoded properly. @raise IOError: If the media could not be written to for some reason. 
@raise ValueError: If no image is passed in and initializeImage() was not previously called """ if imagePath is None: if self._image is None: raise ValueError("Must call initializeImage() before using this method with no image path.") try: imagePath = self._createImage() self._writeImage(imagePath, writeMulti, self._image.newDisc) finally: if imagePath is not None and os.path.exists(imagePath): try: os.unlink(imagePath) except: pass else: imagePath = encodePath(imagePath) if not os.path.isabs(imagePath): raise ValueError("Image path must be absolute.") self._writeImage(imagePath, writeMulti, newDisc) def _createImage(self): """ Creates an ISO image based on configuration in self._image. @return: Path to the newly-created ISO image on disk. @raise IOError: If there is an error writing the image to disk. @raise ValueError: If there are no filesystem entries in the image @raise ValueError: If a path cannot be encoded properly. """ path = None capacity = self.retrieveCapacity(entireDisc=self._image.newDisc) image = IsoImage(self.device, capacity.boundaries) image.volumeId = self._image.mediaLabel # may be None, which is also valid for key in self._image.entries.keys(): image.addEntry(key, self._image.entries[key], override=False, contentsOnly=True) size = image.getEstimatedSize() logger.info("Image size will be %s.", displayBytes(size)) available = capacity.bytesAvailable logger.debug("Media capacity: %s", displayBytes(available)) if size > available: logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) raise IOError("Media does not contain enough capacity to store image.") try: (handle, path) = tempfile.mkstemp(dir=self._image.tmpdir) try: os.close(handle) except: pass image.writeImage(path) logger.debug("Completed creating image [%s].", path) return path except Exception, e: if path is not None and os.path.exists(path): try: os.unlink(path) except: pass raise e def _writeImage(self, imagePath, writeMulti, 
newDisc): """ Write an ISO image to disc using cdrecord. The disc is blanked first if C{newDisc} is C{True}. @param imagePath: Path to an ISO image on disk @param writeMulti: Indicates whether a multisession disc should be written, if possible. @param newDisc: Indicates whether the entire disc will overwritten. """ if newDisc: self._blankMedia() args = CdWriter._buildWriteArgs(self.hardwareId, imagePath, self._driveSpeed, writeMulti and self._deviceSupportsMulti) command = resolveCommand(CDRECORD_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing command to write disc." % result) self.refreshMedia() def _blankMedia(self): """ Blanks the media in the device, if the media is rewritable. @raise IOError: If the media could not be written to for some reason. """ if self.isRewritable(): args = CdWriter._buildBlankArgs(self.hardwareId) command = resolveCommand(CDRECORD_COMMAND) result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing command to blank disc." % result) self.refreshMedia() ####################################### # Methods used to parse command output ####################################### @staticmethod def _parsePropertiesOutput(output): """ Parses the output from a C{cdrecord} properties command. The C{output} parameter should be a list of strings as returned from C{executeCommand} for a C{cdrecord} command with arguments as from C{_buildPropertiesArgs}. The list of strings will be parsed to yield information about the properties of the device. The output is expected to be a huge long list of strings. Unfortunately, the strings aren't in a completely regular format. However, the format of individual lines seems to be regular enough that we can look for specific values. Two kinds of parsing take place: one kind of parsing picks out out specific values like the device id, device vendor, etc. 
The other kind of parsing just sets a boolean flag C{True} if a matching line is found. All of the parsing is done with regular expressions. Right now, pretty much nothing in the output is required and we should parse an empty document successfully (albeit resulting in a device that can't eject, doesn't have a tray and doesn't support multisession discs). I had briefly considered erroring out if certain lines weren't found or couldn't be parsed, but that seems like a bad idea given that most of the information is just for reference. The results are returned as a tuple of the object device attributes: C{(deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)}. @param output: Output from a C{cdrecord -prcap} command. @return: Results tuple as described above. @raise IOError: If there is a problem parsing the output. """ deviceType = None deviceVendor = None deviceId = None deviceBufferSize = None deviceSupportsMulti = False deviceHasTray = False deviceCanEject = False typePattern = re.compile(r"(^Device type\s*:\s*)(.*)(\s*)(.*$)") vendorPattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)") idPattern = re.compile(r"(^Identifikation\s*:\s*'\s*)(.*?)(\s*')(.*$)") bufferPattern = re.compile(r"(^\s*Buffer size in KB:\s*)(.*?)(\s*$)") multiPattern = re.compile(r"^\s*Does read multi-session.*$") trayPattern = re.compile(r"^\s*Loading mechanism type: tray.*$") ejectPattern = re.compile(r"^\s*Does support ejection.*$") for line in output: if typePattern.search(line): deviceType = typePattern.search(line).group(2) logger.info("Device type is [%s].", deviceType) elif vendorPattern.search(line): deviceVendor = vendorPattern.search(line).group(2) logger.info("Device vendor is [%s].", deviceVendor) elif idPattern.search(line): deviceId = idPattern.search(line).group(2) logger.info("Device id is [%s].", deviceId) elif bufferPattern.search(line): try: sectors = int(bufferPattern.search(line).group(2)) deviceBufferSize = 
convertSize(sectors, UNIT_KBYTES, UNIT_BYTES) logger.info("Device buffer size is [%d] bytes.", deviceBufferSize) except TypeError: pass elif multiPattern.search(line): deviceSupportsMulti = True logger.info("Device does support multisession discs.") elif trayPattern.search(line): deviceHasTray = True logger.info("Device has a tray.") elif ejectPattern.search(line): deviceCanEject = True logger.info("Device can eject its media.") return (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) @staticmethod def _parseBoundariesOutput(output): """ Parses the output from a C{cdrecord} capacity command. The C{output} parameter should be a list of strings as returned from C{executeCommand} for a C{cdrecord} command with arguments as from C{_buildBoundariesArgs}. The list of strings will be parsed to yield information about the capacity of the media in the device. Basically, we expect the list of strings to include just one line, a pair of values. There isn't supposed to be whitespace, but we allow it anyway in the regular expression. Any lines below the one line we parse are completely ignored. It would be a good idea to ignore C{stderr} when executing the C{cdrecord} command that generates output for this method, because sometimes C{cdrecord} spits out kernel warnings about the actual output. The results are returned as a tuple of (lower, upper) as needed by the C{IsoImage} class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however. @note: If the boundaries output can't be parsed, we return C{None}. @param output: Output from a C{cdrecord -msinfo} command. @return: Boundaries tuple as described above. @raise IOError: If there is a problem parsing the output. 
""" if len(output) < 1: logger.warn("Unable to read disc (might not be initialized); returning full capacity.") return None boundaryPattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)") parsed = boundaryPattern.search(output[0]) if not parsed: raise IOError("Unable to parse output of boundaries command.") try: boundaries = ( int(parsed.group(2)), int(parsed.group(4)) ) except TypeError: raise IOError("Unable to parse output of boundaries command.") return boundaries ################################# # Methods used to build commands ################################# @staticmethod def _buildOpenTrayArgs(device): """ Builds a list of arguments to be passed to a C{eject} command. The arguments will cause the C{eject} command to open the tray and eject the media. No validation is done by this method as to whether this action actually makes sense. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append(device) return args @staticmethod def _buildUnlockTrayArgs(device): """ Builds a list of arguments to be passed to a C{eject} command. The arguments will cause the C{eject} command to unlock the tray. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-i") args.append("off") args.append(device) return args @staticmethod def _buildCloseTrayArgs(device): """ Builds a list of arguments to be passed to a C{eject} command. The arguments will cause the C{eject} command to close the tray and reload the media. No validation is done by this method as to whether this action actually makes sense. @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. @return: List suitable for passing to L{util.executeCommand} as C{args}. 
""" args = [] args.append("-t") args.append(device) return args @staticmethod def _buildPropertiesArgs(hardwareId): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to ask the device for a list of its capacities via the C{-prcap} switch. @param hardwareId: Hardware id for the device (either SCSI id or device path) @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-prcap") args.append("dev=%s" % hardwareId) return args @staticmethod def _buildBoundariesArgs(hardwareId): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to ask the device for the current multisession boundaries of the media using the C{-msinfo} switch. @param hardwareId: Hardware id for the device (either SCSI id or device path) @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-msinfo") args.append("dev=%s" % hardwareId) return args @staticmethod def _buildBlankArgs(hardwareId, driveSpeed=None): """ Builds a list of arguments to be passed to a C{cdrecord} command. The arguments will cause the C{cdrecord} command to blank the media in the device identified by C{hardwareId}. No validation is done by this method as to whether the action makes sense (i.e. to whether the media even can be blanked). @param hardwareId: Hardware id for the device (either SCSI id or device path) @param driveSpeed: Speed at which the drive writes. @return: List suitable for passing to L{util.executeCommand} as C{args}. """ args = [] args.append("-v") args.append("blank=fast") if driveSpeed is not None: args.append("speed=%d" % driveSpeed) args.append("dev=%s" % hardwareId) return args @staticmethod def _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True): """ Builds a list of arguments to be passed to a C{cdrecord} command. 
      The arguments will cause the C{cdrecord} command to write the indicated
      ISO image (C{imagePath}) to the media in the device identified by
      C{hardwareId}.  The C{writeMulti} argument controls whether to write a
      multisession disc.  No validation is done by this method as to whether
      the action makes sense (i.e. to whether the device even can write
      multisession discs, for instance).

      @param hardwareId: Hardware id for the device (either SCSI id or device path)
      @param imagePath: Path to an ISO image on disk.
      @param driveSpeed: Speed at which the drive writes.
      @param writeMulti: Indicates whether to write a multisession disc.

      @return: List suitable for passing to L{util.executeCommand} as C{args}.
      """
      args = []
      args.append("-v")
      if driveSpeed is not None:
         args.append("speed=%d" % driveSpeed)
      args.append("dev=%s" % hardwareId)
      if writeMulti:
         args.append("-multi")
      args.append("-data")
      args.append(imagePath)
      return args

CedarBackup2-2.26.5/CedarBackup2/writers/__init__.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Official Cedar Backup Extensions
# Purpose  : Provides package initialization
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Cedar Backup writers.

This package consolidates all of the modules that implement "image writer"
functionality, including utilities and specific writer implementations.

@author: Kenneth J. Pronovici
"""

########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup2.writers import *' will just import the modules
# listed in the __all__ variable.

__all__ = [ 'util', 'cdwriter', 'dvdwriter', ]

CedarBackup2-2.26.5/CedarBackup2/writers/dvdwriter.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007-2008,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides functionality related to DVD writer devices.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides functionality related to DVD writer devices.
@sort: MediaDefinition, DvdWriter, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW

@var MEDIA_DVDPLUSR: Constant representing DVD+R media.
@var MEDIA_DVDPLUSRW: Constant representing DVD+RW media.

@author: Kenneth J. Pronovici
@author: Dmitry Rutsky
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import logging
import tempfile
import time

# Cedar Backup modules
from CedarBackup2.writers.util import IsoImage
from CedarBackup2.util import resolveCommand, executeCommand
from CedarBackup2.util import convertSize, displayBytes, encodePath
from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_GBYTES
from CedarBackup2.writers.util import validateDevice, validateDriveSpeed

########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.writers.dvdwriter")

MEDIA_DVDPLUSR  = 1
MEDIA_DVDPLUSRW = 2

GROWISOFS_COMMAND = [ "growisofs", ]
EJECT_COMMAND = [ "eject", ]

########################################################################
# MediaDefinition class definition
########################################################################

class MediaDefinition(object):

   """
   Class encapsulating information about DVD media definitions.

   The following media types are accepted:

      - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity)
      - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity)

   Note that the capacity attribute returns capacity in terms of ISO sectors
   (C{util.ISO_SECTOR_SIZE}).  This is for compatibility with the CD writer
   functionality.  The capacities are 4.4 GB because Cedar Backup deals in
   "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
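   The sector math behind these capacities can be sketched numerically.  This
   is an illustrative standalone snippet, not the project's code (the real
   implementation uses C{convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)} from
   C{CedarBackup2.util}):

   ```python
   # Illustrative sketch: how 4.4 "true" GB maps to 2048-byte ISO sectors.
   # gb_to_sectors is a hypothetical helper, not part of CedarBackup2.
   ISO_SECTOR_SIZE = 2048  # bytes per ISO-9660 sector

   def gb_to_sectors(gb):
       """Convert "true" gigabytes (1024^3 bytes) to 2048-byte sectors."""
       return (gb * 1024.0 * 1024.0 * 1024.0) / ISO_SECTOR_SIZE

   print(gb_to_sectors(4.4))  # roughly 2306867.2 sectors, i.e. about 4.7 "marketing" GB
   ```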
@sort: __init__, mediaType, rewritable, capacity """ def __init__(self, mediaType): """ Creates a media definition for the indicated media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ self._mediaType = None self._rewritable = False self._capacity = 0.0 self._setValues(mediaType) def _setValues(self, mediaType): """ Sets values based on media type. @param mediaType: Type of the media, as discussed above. @raise ValueError: If the media type is unknown or unsupported. """ if mediaType not in [MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW, ]: raise ValueError("Invalid media type %d." % mediaType) self._mediaType = mediaType if self._mediaType == MEDIA_DVDPLUSR: self._rewritable = False self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB elif self._mediaType == MEDIA_DVDPLUSRW: self._rewritable = True self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS) # 4.4 "true" GB = 4.7 "marketing" GB def _getMediaType(self): """ Property target used to get the media type value. """ return self._mediaType def _getRewritable(self): """ Property target used to get the rewritable flag value. """ return self._rewritable def _getCapacity(self): """ Property target used to get the capacity value. """ return self._capacity mediaType = property(_getMediaType, None, None, doc="Configured media type.") rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") capacity = property(_getCapacity, None, None, doc="Total capacity of media in 2048-byte sectors.") ######################################################################## # MediaCapacity class definition ######################################################################## class MediaCapacity(object): """ Class encapsulating information about DVD media capacity. 
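   As a quick illustration of the utilization math used by this class, here
   is a minimal standalone sketch (a hypothetical function mirroring the
   C{utilized} property's logic, not the class itself):

   ```python
   # Standalone sketch of the capacity math: total = used + available,
   # utilized = used/total as a percentage.  Mirrors MediaCapacity's logic;
   # utilized_percent is a hypothetical helper, not part of CedarBackup2.
   def utilized_percent(bytes_used, bytes_available):
       if bytes_available <= 0.0:
           return 100.0  # no space left: report the disc as fully utilized
       if bytes_used <= 0.0:
           return 0.0    # nothing written yet
       return (bytes_used / (bytes_used + bytes_available)) * 100.0

   print(utilized_percent(1.0e9, 3.0e9))  # 25.0
   ```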
   Space used and space available do not include any information about media
   lead-in or other overhead.

   @sort: __init__, bytesUsed, bytesAvailable, totalCapacity, utilized
   """

   def __init__(self, bytesUsed, bytesAvailable):
      """
      Initializes a capacity object.
      @raise ValueError: If the bytes used and available values are not floats.
      """
      self._bytesUsed = float(bytesUsed)
      self._bytesAvailable = float(bytesAvailable)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed),
                                             displayBytes(self.totalCapacity),
                                             self.utilized)

   def _getBytesUsed(self):
      """
      Property target used to get the bytes-used value.
      """
      return self._bytesUsed

   def _getBytesAvailable(self):
      """
      Property target used to get the bytes-available value.
      """
      return self._bytesAvailable

   def _getTotalCapacity(self):
      """
      Property target to get the total capacity (used + available).
      """
      return self.bytesUsed + self.bytesAvailable

   def _getUtilized(self):
      """
      Property target to get the percent of capacity which is utilized.
      """
      if self.bytesAvailable <= 0.0:
         return 100.0
      elif self.bytesUsed <= 0.0:
         return 0.0
      return (self.bytesUsed / self.totalCapacity) * 100.0

   bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.")
   bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.")
   totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.")
   utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.")


########################################################################
# _ImageProperties class definition
########################################################################

class _ImageProperties(object):

   """
   Simple value object to hold image properties for C{DvdWriter}.
""" def __init__(self): self.newDisc = False self.tmpdir = None self.mediaLabel = None self.entries = None # dict mapping path to graft point ######################################################################## # DvdWriter class definition ######################################################################## class DvdWriter(object): ###################### # Class documentation ###################### """ Class representing a device that knows how to write some kinds of DVD media. Summary ======= This is a class representing a device that knows how to write some kinds of DVD media. It provides common operations for the device, such as ejecting the media and writing data to the media. This class is implemented in terms of the C{eject} and C{growisofs} utilities, all of which should be available on most UN*X platforms. Image Writer Interface ====================== The following methods make up the "image writer" interface shared with other kinds of writers:: __init__ initializeImage() addImageEntry() writeImage() setImageNewDisc() retrieveCapacity() getEstimatedImageSize() Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer. The media attribute is also assumed to be available. Unlike the C{CdWriter}, the C{DvdWriter} can only operate in terms of filesystem devices, not SCSI devices. So, although the constructor interface accepts a SCSI device parameter for the sake of compatibility, it's not used. Media Types =========== This class knows how to write to DVD+R and DVD+RW media, represented by the following constants: - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) The difference is that DVD+RW media can be rewritten, while DVD+R media cannot be (although at present, C{DvdWriter} does not really differentiate between rewritable and non-rewritable media). 
   The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes
   of 1024*1024*1024 bytes per gigabyte.

   The underlying C{growisofs} utility does support other kinds of media
   (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently
   than standard DVD+R and DVD+RW media.  I don't support these other kinds
   of media because I haven't had any opportunity to work with them.  The
   same goes for dual-layer media of any type.

   Device Attributes vs. Media Attributes
   ======================================

   As with the cdwriter functionality, a given dvdwriter instance has two
   different kinds of attributes associated with it.  I call these device
   attributes and media attributes.  Device attributes are things which can
   be determined without looking at the media.  Media attributes are
   attributes which vary depending on the state of the media.  In general,
   device attributes are available via instance variables and are constant
   over the life of an object, while media attributes can be retrieved
   through method calls.

   Compared to cdwriters, dvdwriters have very few attributes.  This is due
   to differences between the way C{growisofs} works relative to C{cdrecord}.

   Media Capacity
   ==============

   One major difference between the C{cdrecord}/C{mkisofs} utilities used by
   the cdwriter class and the C{growisofs} utility used here is that the
   process of estimating remaining capacity and image size is more
   straightforward with C{cdrecord}/C{mkisofs} than with C{growisofs}.

   In this class, remaining capacity is calculated by doing a dry run of
   C{growisofs} and grabbing some information from the output of that
   command.  Image size is estimated by asking the C{IsoImage} class for an
   estimate and then adding on a "fudge factor" determined through
   experimentation.

   Testing
   =======

   It's rather difficult to test this code in an automated fashion, even if
   you have access to a physical DVD writer drive.
It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to. Because of this, some of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the "difficult" functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all. @sort: __init__, isRewritable, retrieveCapacity, openTray, closeTray, refreshMedia, initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize, _writeImage, _getEstimatedImageSize, _searchForOverburn, _buildWriteArgs, device, scsiId, hardwareId, driveSpeed, media, deviceHasTray, deviceCanEject """ ############## # Constructor ############## def __init__(self, device, scsiId=None, driveSpeed=None, mediaType=MEDIA_DVDPLUSRW, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False): """ Initializes a DVD writer object. Since C{growisofs} can only address devices using the device path (i.e. C{/dev/dvd}), the hardware id will always be set based on the device. If passed in, it will be saved for reference purposes only. We have no way to query the device to ask whether it has a tray or can be safely opened and closed. So, the C{noEject} flag is used to set these values. If C{noEject=False}, then we assume a tray exists and open/close is safe. If C{noEject=True}, then we assume that there is no tray and open/close is not safe. @note: The C{unittest} parameter should never be set to C{True} outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose. 
      @param device: Filesystem device associated with this writer.
      @type device: Absolute path to a filesystem device, i.e. C{/dev/dvd}

      @param scsiId: SCSI id for the device (optional, for reference only).
      @type scsiId: If provided, SCSI id in the form C{[:]scsibus,target,lun}

      @param driveSpeed: Speed at which the drive writes.
      @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default.

      @param mediaType: Type of the media that is assumed to be in the drive.
      @type mediaType: One of the valid media types as discussed above.

      @param noEject: Tells Cedar Backup that the device cannot safely be ejected
      @type noEject: Boolean true/false

      @param refreshMediaDelay: Refresh media delay to use, if any
      @type refreshMediaDelay: Number of seconds, an integer >= 0

      @param ejectDelay: Eject delay to use, if any
      @type ejectDelay: Number of seconds, an integer >= 0

      @param unittest: Turns off certain validations, for use in unit testing.
      @type unittest: Boolean true/false

      @raise ValueError: If the device is not valid for some reason.
      @raise ValueError: If the SCSI id is not in a valid form.
      @raise ValueError: If the drive speed is not an integer >= 1.
      """
      if scsiId is not None:
         logger.warn("SCSI id [%s] will be ignored by DvdWriter.", scsiId)
      self._image = None  # optionally filled in by initializeImage()
      self._device = validateDevice(device, unittest)
      self._scsiId = scsiId  # not validated, because it's just for reference
      self._driveSpeed = validateDriveSpeed(driveSpeed)
      self._media = MediaDefinition(mediaType)
      self._refreshMediaDelay = refreshMediaDelay
      self._ejectDelay = ejectDelay
      if noEject:
         self._deviceHasTray = False
         self._deviceCanEject = False
      else:
         self._deviceHasTray = True   # just assume
         self._deviceCanEject = True  # just assume


   #############
   # Properties
   #############

   def _getDevice(self):
      """
      Property target used to get the device value.
      """
      return self._device

   def _getScsiId(self):
      """
      Property target used to get the SCSI id value.
""" return self._scsiId def _getHardwareId(self): """ Property target used to get the hardware id value. """ return self._device def _getDriveSpeed(self): """ Property target used to get the drive speed. """ return self._driveSpeed def _getMedia(self): """ Property target used to get the media description. """ return self._media def _getDeviceHasTray(self): """ Property target used to get the device-has-tray flag. """ return self._deviceHasTray def _getDeviceCanEject(self): """ Property target used to get the device-can-eject flag. """ return self._deviceCanEject def _getRefreshMediaDelay(self): """ Property target used to get the configured refresh media delay, in seconds. """ return self._refreshMediaDelay def _getEjectDelay(self): """ Property target used to get the configured eject delay, in seconds. """ return self._ejectDelay device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).") hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).") driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") ################################################# # Methods related to device and media attributes ################################################# def isRewritable(self): """Indicates whether the media is rewritable per configuration.""" 
return self._media.rewritable def retrieveCapacity(self, entireDisc=False): """ Retrieves capacity for the current media in terms of a C{MediaCapacity} object. If C{entireDisc} is passed in as C{True}, the capacity will be for the entire disc, as if it were to be rewritten from scratch. The same will happen if the disc can't be read for some reason. Otherwise, the capacity will be calculated by subtracting the sectors currently used on the disc, as reported by C{growisofs} itself. @param entireDisc: Indicates whether to return capacity for entire disc. @type entireDisc: Boolean true/false @return: C{MediaCapacity} object describing the capacity of the media. @raise ValueError: If there is a problem parsing the C{growisofs} output @raise IOError: If the media could not be read for some reason. """ sectorsUsed = 0.0 if not entireDisc: sectorsUsed = self._retrieveSectorsUsed() sectorsAvailable = self._media.capacity - sectorsUsed # both are in sectors bytesUsed = convertSize(sectorsUsed, UNIT_SECTORS, UNIT_BYTES) bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) return MediaCapacity(bytesUsed, bytesAvailable) ####################################################### # Methods used for working with the internal ISO image ####################################################### def initializeImage(self, newDisc, tmpdir, mediaLabel=None): """ Initializes the writer's associated ISO image. This method initializes the C{image} instance variable so that the caller can use the C{addImageEntry} method. Once entries have been added, the C{writeImage} method can be called with no arguments. 
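      The lifecycle above can be illustrated with a minimal stand-in.
      C{FakeWriter} below is hypothetical and exists only to show the call
      sequence; the real C{DvdWriter} validates paths and talks to hardware:

      ```python
      # Minimal stand-in illustrating the image-writer call sequence described
      # above: initializeImage(), then addImageEntry() per path, then writeImage().
      # FakeWriter is hypothetical, not part of CedarBackup2.
      class FakeWriter(object):
          def __init__(self):
              self._image = None
          def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
              self._image = {"newDisc": newDisc, "tmpdir": tmpdir,
                             "mediaLabel": mediaLabel, "entries": {}}
          def addImageEntry(self, path, graftPoint):
              if self._image is None:
                  raise ValueError("Must call initializeImage() before using this method.")
              self._image["entries"][path] = graftPoint  # path contents grafted at graftPoint
          def writeImage(self):
              if self._image is None:
                  raise ValueError("Must call initializeImage() before using this method.")
              return sorted(self._image["entries"])

      writer = FakeWriter()
      writer.initializeImage(newDisc=True, tmpdir="/tmp")
      writer.addImageEntry("/tmp", "backup")  # contents land under backup/ in the image
      print(writer.writeImage())  # ['/tmp']
      ```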
@param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false @param tmpdir: Temporary directory to use if needed @type tmpdir: String representing a directory path on disk @param mediaLabel: Media label to be applied to the image, if any @type mediaLabel: String, no more than 25 characters long """ self._image = _ImageProperties() self._image.newDisc = newDisc self._image.tmpdir = encodePath(tmpdir) self._image.mediaLabel = mediaLabel self._image.entries = {} # mapping from path to graft point (if any) def addImageEntry(self, path, graftPoint): """ Adds a filepath entry to the writer's associated ISO image. The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass C{None}. @note: Before calling this method, you must call L{initializeImage}. @param path: File or directory to be added to the image @type path: String representing a path on disk @param graftPoint: Graft point to be used when adding this entry @type graftPoint: String representing a graft point path, as described above @raise ValueError: If initializeImage() was not previously called @raise ValueError: If the path is not a valid file or directory """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") if not os.path.exists(path): raise ValueError("Path [%s] does not exist." % path) self._image.entries[path] = graftPoint def setImageNewDisc(self, newDisc): """ Resets (overrides) the newDisc flag on the internal image. @param newDisc: New disc flag to set @raise ValueError: If initializeImage() was not previously called """ if self._image is None: raise ValueError("Must call initializeImage() before using this method.") self._image.newDisc = newDisc def getEstimatedImageSize(self): """ Gets the estimated size of the image associated with the writer. This is an estimate and is conservative. 
      The actual image could be as much as 450 blocks (sectors) smaller under
      some circumstances.

      @return: Estimated size of the image, in bytes.

      @raise IOError: If there is a problem calling C{mkisofs}.
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      return DvdWriter._getEstimatedImageSize(self._image.entries)


   ######################################
   # Methods which expose device actions
   ######################################

   def openTray(self):
      """
      Opens the device's tray and leaves it open.

      This only works if the device has a tray and supports ejecting its
      media.  We have no way to know if the tray is currently open or closed,
      so we just send the appropriate command and hope for the best.  If the
      device does not have a tray or does not support ejecting its media,
      then we do nothing.

      Starting with Debian wheezy on my backup hardware, I started seeing
      consistent problems with the eject command.  I couldn't tell whether
      these problems were due to the device management system or to the new
      kernel (3.2.0).  Initially, I saw simple eject failures, possibly
      because I was opening and closing the tray too quickly.  I worked
      around that behavior with the new ejectDelay flag.

      Later, I sometimes ran into issues after writing an image to a disc:
      eject would give errors like "unable to eject, last error: Inappropriate
      ioctl for device".  Various sources online (like Ubuntu bug #875543)
      suggested that the drive was being locked somehow, and that the
      workaround was to run 'eject -i off' to unlock it.  Sure enough, that
      fixed the problem for me, so now it's a normal error-handling strategy.

      @raise IOError: If there is an error talking to the device.
""" if self._deviceHasTray and self._deviceCanEject: command = resolveCommand(EJECT_COMMAND) args = [ self.device, ] result = executeCommand(command, args)[0] if result != 0: logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") self.unlockTray() result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result) logger.debug("Kludge was apparently successful.") if self.ejectDelay is not None: logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay) time.sleep(self.ejectDelay) def unlockTray(self): """ Unlocks the device's tray via 'eject -i off'. @raise IOError: If there is an error talking to the device. """ command = resolveCommand(EJECT_COMMAND) args = [ "-i", "off", self.device, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to unlock tray." % result) def closeTray(self): """ Closes the device's tray. This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. @raise IOError: If there is an error talking to the device. """ if self._deviceHasTray and self._deviceCanEject: command = resolveCommand(EJECT_COMMAND) args = [ "-t", self.device, ] result = executeCommand(command, args)[0] if result != 0: raise IOError("Error (%d) executing eject command to close tray." % result) def refreshMedia(self): """ Opens and then immediately closes the device's tray, to refresh the device's idea of the media. Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. 
(There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.) This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though. @raise IOError: If there is an error talking to the device. """ self.openTray() self.closeTray() self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! if self.refreshMediaDelay is not None: logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay) time.sleep(self.refreshMediaDelay) logger.debug("Media refresh complete; hopefully media state is stable now.") def writeImage(self, imagePath=None, newDisc=False, writeMulti=True): """ Writes an ISO image to the media in the device. If C{newDisc} is passed in as C{True}, we assume that the entire disc will be re-created from scratch. Note that unlike C{CdWriter}, C{DvdWriter} does not blank rewritable media before reusing it; however, C{growisofs} is called such that the media will be re-initialized as needed. If C{imagePath} is passed in as C{None}, then the existing image configured with C{initializeImage()} will be used. Under these circumstances, the passed-in C{newDisc} flag will be ignored and the value passed in to C{initializeImage()} will apply instead. The C{writeMulti} argument is ignored. It exists for compatibility with the Cedar Backup image writer interface. @note: The image size indicated in the log ("Image size will be...") is an estimate. The estimate is conservative and is probably larger than the actual space that C{dvdwriter} will use. 
@param imagePath: Path to an ISO image on disk, or C{None} to use writer's image @type imagePath: String representing a path on disk @param newDisc: Indicates whether the disc should be re-initialized @type newDisc: Boolean true/false. @param writeMulti: Unused @type writeMulti: Boolean true/false @raise ValueError: If the image path is not absolute. @raise ValueError: If some path cannot be encoded properly. @raise IOError: If the media could not be written to for some reason. @raise ValueError: If no image is passed in and initializeImage() was not previously called """ if not writeMulti: logger.warn("writeMulti value of [%s] ignored.", writeMulti) if imagePath is None: if self._image is None: raise ValueError("Must call initializeImage() before using this method with no image path.") size = self.getEstimatedImageSize() logger.info("Image size will be %s (estimated).", displayBytes(size)) available = self.retrieveCapacity(entireDisc=self._image.newDisc).bytesAvailable if size > available: logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) raise IOError("Media does not contain enough capacity to store image.") self._writeImage(self._image.newDisc, None, self._image.entries, self._image.mediaLabel) else: if not os.path.isabs(imagePath): raise ValueError("Image path must be absolute.") imagePath = encodePath(imagePath) self._writeImage(newDisc, imagePath, None) ################################################################## # Utility methods for dealing with growisofs and dvd+rw-mediainfo ################################################################## def _writeImage(self, newDisc, imagePath, entries, mediaLabel=None): """ Writes an image to disc using either an entries list or an ISO image on disk. Callers are assumed to have done validation on paths, etc. before calling this method. 
@param newDisc: Indicates whether the disc should be re-initialized @param imagePath: Path to an ISO image on disk, or c{None} to use C{entries} @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} @raise IOError: If the media could not be written to for some reason. """ command = resolveCommand(GROWISOFS_COMMAND) args = DvdWriter._buildWriteArgs(newDisc, self.hardwareId, self._driveSpeed, imagePath, entries, mediaLabel, dryRun=False) (result, output) = executeCommand(command, args, returnOutput=True) if result != 0: DvdWriter._searchForOverburn(output) # throws own exception if overburn condition is found raise IOError("Error (%d) executing command to write disc." % result) self.refreshMedia() @staticmethod def _getEstimatedImageSize(entries): """ Gets the estimated size of a set of image entries. This is implemented in terms of the C{IsoImage} class. The returned value is calculated by adding a "fudge factor" to the value from C{IsoImage}. This fudge factor was determined by experimentation and is conservative -- the actual image could be as much as 450 blocks smaller under some circumstances. @param entries: Dictionary mapping path to graft point. @return: Total estimated size of image, in bytes. @raise ValueError: If there are no entries in the dictionary @raise ValueError: If any path in the dictionary does not exist @raise IOError: If there is a problem calling C{mkisofs}. """ fudgeFactor = convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES) # determined through experimentation if len(entries.keys()) == 0: raise ValueError("Must add at least one entry with addImageEntry().") image = IsoImage() for path in entries.keys(): image.addEntry(path, entries[path], override=False, contentsOnly=True) estimatedSize = image.getEstimatedSize() + fudgeFactor return estimatedSize def _retrieveSectorsUsed(self): """ Retrieves the number of sectors used on the current media. This is a little ugly. 
We need to call growisofs in "dry-run" mode and parse some information from its output. However, to do that, we need to create a dummy file that we can pass to the command -- and we have to make sure to remove it later. Once growisofs has been run, then we call C{_parseSectorsUsed} to parse the output and calculate the number of sectors used on the media. @return: Number of sectors used on the media """ tempdir = tempfile.mkdtemp() try: entries = { tempdir: None } args = DvdWriter._buildWriteArgs(False, self.hardwareId, self.driveSpeed, None, entries, None, dryRun=True) command = resolveCommand(GROWISOFS_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True) if result != 0: logger.debug("Error (%d) calling growisofs to read sectors used.", result) logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.") return 0.0 sectorsUsed = DvdWriter._parseSectorsUsed(output) logger.debug("Determined sectors used as %s", sectorsUsed) return sectorsUsed finally: if os.path.exists(tempdir): try: os.rmdir(tempdir) except: pass @staticmethod def _parseSectorsUsed(output): """ Parse sectors used information out of C{growisofs} output. The first line of a growisofs run looks something like this:: Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566' Dmitry has determined that the seek value in this line gives us information about how much data has previously been written to the media. That value multiplied by 16 yields the number of sectors used. If the seek line cannot be found in the output, then sectors used of zero is assumed. @return: Sectors used on the media, as a floating point number. @raise ValueError: If the output cannot be parsed properly. 
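The seek-based arithmetic that C{_parseSectorsUsed} performs can be sketched as a standalone parser. This is a simplified reimplementation for illustration (function name and regex are assumptions, not the shipped code), showing the "seek value times 16" rule and the zero fallback for uninitialized media:

```python
import re

def parse_sectors_used(output_lines):
    """Extract sectors used from growisofs dry-run output: seek value * 16.

    Simplified sketch of the parsing described above; returns 0.0 when no
    seek= value is found (e.g. media that has not been initialized).
    """
    pattern = re.compile(r"seek=(\d+)'?$")
    for line in output_lines:
        match = pattern.search(line)
        if match:
            return float(match.group(1)) * 16.0
    return 0.0

line = ("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r "
        "-graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'")
print(parse_sectors_used([line]))  # 87566 * 16 = 1401056.0
```

Note how the result (1401056) matches the second value passed to C{mkisofs -C} in the sample line, which is why the seek value is a reliable indicator of previously written data.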
""" if output is not None: pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)") for line in output: match = pattern.search(line) if match is not None: try: return float(match.group(4).strip()) * 16.0 except ValueError: raise ValueError("Unable to parse sectors used out of growisofs output.") logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.") return 0.0 @staticmethod def _searchForOverburn(output): """ Search for an "overburn" error message in C{growisofs} output. The C{growisofs} command returns a non-zero exit code and puts a message into the output -- even on a dry run -- if there is not enough space on the media. This is called an "overburn" condition. The error message looks like this:: :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written! This method looks for the overburn error message anywhere in the output. If a matching error message is found, an C{IOError} exception is raised containing relevant information about the problem. Otherwise, the method call returns normally. @param output: List of output lines to search, as from C{executeCommand} @raise IOError: If an overburn condition is found. 
""" if output is None: return pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)") for line in output: match = pattern.search(line) if match is not None: try: available = convertSize(float(match.group(4).strip()), UNIT_SECTORS, UNIT_BYTES) size = convertSize(float(match.group(6).strip()), UNIT_SECTORS, UNIT_BYTES) logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) except ValueError: logger.error("Image does not fit in available capacity (no useful capacity info available).") raise IOError("Media does not contain enough capacity to store image.") @staticmethod def _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False): """ Builds a list of arguments to be passed to a C{growisofs} command. The arguments will either cause C{growisofs} to write the indicated image file to disc, or will pass C{growisofs} a list of directories or files that should be written to disc. If a new image is created, it will always be created with Rock Ridge extensions (-r). A volume name will be applied (-V) if C{mediaLabel} is not C{None}. @param newDisc: Indicates whether the disc should be re-initialized @param hardwareId: Hardware id for the device @param driveSpeed: Speed at which the drive writes. @param imagePath: Path to an ISO image on disk, or c{None} to use C{entries} @param entries: Mapping from path to graft point, or C{None} to use C{imagePath} @param mediaLabel: Media label to set on the image, if any @param dryRun: Says whether to make this a dry run (for checking capacity) @note: If we write an existing image to disc, then the mediaLabel is ignored. The media label is an attribute of the image, and should be set on the image when it is created. @note: We always pass the undocumented option C{-use-the-force-like=tty} to growisofs. Without this option, growisofs will refuse to execute certain actions when running from cron. 
A good example is -Z, which happily overwrites an existing DVD from the command-line, but fails when run from cron. It took a while to figure that out, since it worked every time I tested it by hand. :( @return: List suitable for passing to L{util.executeCommand} as C{args}. @raise ValueError: If caller does not pass one or the other of imagePath or entries. """ args = [] if (imagePath is None and entries is None) or (imagePath is not None and entries is not None): raise ValueError("Must use either imagePath or entries.") args.append("-use-the-force-luke=tty") # tell growisofs to let us run from cron if dryRun: args.append("-dry-run") if driveSpeed is not None: args.append("-speed=%d" % driveSpeed) if newDisc: args.append("-Z") else: args.append("-M") if imagePath is not None: args.append("%s=%s" % (hardwareId, imagePath)) else: args.append(hardwareId) if mediaLabel is not None: args.append("-V") args.append(mediaLabel) args.append("-r") # Rock Ridge extensions with sane ownership and permissions args.append("-graft-points") keys = entries.keys() keys.sort() # just so we get consistent results for key in keys: # Same syntax as when calling mkisofs in IsoImage if entries[key] is None: args.append(key) else: args.append("%s/=%s" % (entries[key].strip("/"), key)) return args CedarBackup2-2.26.5/CedarBackup2/release.py0000664000175000017500000000224112642035036021776 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides location to maintain release information. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Provides location to maintain version information. @sort: AUTHOR, EMAIL, COPYRIGHT, VERSION, DATE, URL @var AUTHOR: Author of software. @var EMAIL: Email address of author. @var COPYRIGHT: Copyright date. @var VERSION: Software version. @var DATE: Software release date. @var URL: URL of Cedar Backup webpage. @author: Kenneth J. Pronovici """ AUTHOR = "Kenneth J. Pronovici" EMAIL = "pronovic@ieee.org" COPYRIGHT = "2004-2011,2013-2016" VERSION = "2.26.5" DATE = "02 Jan 2016" URL = "https://bitbucket.org/cedarsolutions/cedar-backup2" CedarBackup2-2.26.5/CedarBackup2/knapsack.py0000664000175000017500000003203512560016766022165 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides knapsack algorithms used for "fit" decisions # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######## # Notes ######## """ Provides the implementation for various knapsack algorithms. Knapsack algorithms are "fit" algorithms, used to take a set of "things" and decide on the optimal way to fit them into some container. The focus of this code is to fit files onto a disc, although the interface (in terms of item, item size and capacity size, with no units) is generic enough that it can be applied to items other than files. All of the algorithms implemented below assume that "optimal" means "use up as much of the disc's capacity as possible", but each produces slightly different results. For instance, the best fit and first fit algorithms tend to include fewer files than the worst fit and alternate fit algorithms, even if they use the disc space more efficiently. Usually, for a given set of circumstances, it will be obvious to a human which algorithm is the right one to use, based on trade-offs between number of files included and ideal space utilization. It's a little more difficult to do this programmatically. For Cedar Backup's purposes (i.e. trying to fit a small number of collect-directory tarfiles onto a disc), worst-fit is probably the best choice if the goal is to include as many of the collect directories as possible. @sort: firstFit, bestFit, worstFit, alternateFit @author: Kenneth J. Pronovici """ ####################################################################### # Public functions ####################################################################### ###################### # firstFit() function ###################### def firstFit(items, capacity): """ Implements the first-fit knapsack algorithm. The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. 
If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Search the list as it stands (arbitrary order) used = 0 remaining = capacity for key in items.keys(): if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return results return (included.keys(), used) ##################### # bestFit() function ##################### def bestFit(items, capacity): """ Implements the best-fit knapsack algorithm. 
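The first-fit loop above can be exercised with a small sketch in the same item/capacity shape. This is a simplified Python 3 reimplementation for illustration (the shipped code targets Python 2 and returns dict keys), not the function itself:

```python
def first_fit(items, capacity):
    """First-fit: walk items in arbitrary order, skipping anything that overflows.

    items: dict keyed on item name, of (item, size) tuples, as described above.
    Returns (chosen_keys, used_capacity).
    """
    included = []
    used = 0
    remaining = capacity
    for key, (_, size) in items.items():
        if remaining == 0:
            break
        if remaining - size >= 0:
            included.append(key)
            used += size
            remaining -= size
    return (included, used)

items = {"a": ("a", 600), "b": ("b", 500), "c": ("c", 100)}
chosen, used = first_fit(items, 700)
print(chosen, used)  # "b" is thrown away after "a" fills most of the capacity
```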
The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. 
@param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from largest to smallest itemlist = items.items() itemlist.sort(lambda x, y: cmp(y[1][1], x[1][1])) # sort descending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity for key in keys: if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return the results return (included.keys(), used) ###################### # worstFit() function ###################### def worstFit(items, capacity): """ Implements the worst-fit knapsack algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. 
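The trade-off between the two sorted strategies (best-fit includes fewer items, worst-fit includes more) can be seen in a hedged side-by-side sketch. This is a simplified Python 3 reimplementation for illustration, not the shipped functions:

```python
def sorted_fit(items, capacity, largest_first):
    """Greedy fit over items sorted by size: best-fit when largest_first=True,
    worst-fit when False. items maps key -> (item, size), as described above."""
    included, used, remaining = [], 0, capacity
    for key in sorted(items, key=lambda k: items[k][1], reverse=largest_first):
        size = items[key][1]
        if remaining == 0:
            break
        if remaining - size >= 0:
            included.append(key)
            used += size
            remaining -= size
    return (included, used)

items = {"a": ("a", 600), "b": ("b", 300), "c": ("c", 200), "d": ("d", 100)}
print(sorted_fit(items, 700, largest_first=True))   # best-fit: 2 items, full 700 used
print(sorted_fit(items, 700, largest_first=False))  # worst-fit: 3 items, only 600 used
```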
The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from smallest to largest itemlist = items.items() itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1])) # sort ascending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity for key in keys: if remaining == 0: break if remaining - items[key][1] >= 0: included[key] = None used += items[key][1] remaining -= items[key][1] # Return results return (included.keys(), used) ########################## # alternateFit() function ########################## def alternateFit(items, capacity): """ Implements the alternate-fit knapsack algorithm. This algorithm (which I'm calling "alternate-fit" as in "alternate from one to the other") tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. 
It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items. The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items. The dictionary is indexed by its key, and then includes its key. This seems kind of strange on first glance. It works this way to facilitate easy sorting of the list on key if needed. The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass C{items.copy()} if they do not want their version of the list modified. The function returns a list of chosen items and the unitless amount of capacity used by the items. @param items: Items to operate on @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer @param capacity: Capacity of container to fit to @type capacity: integer @returns: Tuple C{(items, used)} as described above """ # Use dict since insert into dict is faster than list append included = { } # Sort the list from smallest to largest itemlist = items.items() itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1])) # sort ascending keys = [] for item in itemlist: keys.append(item[0]) # Search the list used = 0 remaining = capacity front = keys[0:len(keys)/2] back = keys[len(keys)/2:len(keys)] back.reverse() i = 0 j = 0 while remaining > 0 and (i < len(front) or j < len(back)): if i < len(front): if remaining - items[front[i]][1] >= 0: included[front[i]] = None used += items[front[i]][1] remaining -= items[front[i]][1] i += 1 if j < len(back): if remaining - items[back[j]][1] >= 0: included[back[j]] = None used += items[back[j]][1] remaining -= items[back[j]][1] j += 1 # Return 
results return (included.keys(), used) CedarBackup2-2.26.5/CedarBackup2/util.py0000664000175000017500000022100212642021122021320 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # Portions copyright (c) 2001, 2002 Python Software Foundation. # All Rights Reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides general-purpose utilities. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides general-purpose utilities. 
@sort: AbsolutePathList, ObjectTypeList, RestrictedContentList, RegexMatchList, RegexList, _Vertex, DirectedGraph, PathResolverSingleton, sortDict, convertSize, getUidGid, changeOwnership, splitCommandLine, resolveCommand, executeCommand, calculateFileAge, encodePath, nullDevice, deriveDayOfWeek, isStartOfWeek, buildNormalizedPath, ISO_SECTOR_SIZE, BYTES_PER_SECTOR, BYTES_PER_KBYTE, BYTES_PER_MBYTE, BYTES_PER_GBYTE, KBYTES_PER_MBYTE, MBYTES_PER_GBYTE, SECONDS_PER_MINUTE, MINUTES_PER_HOUR, HOURS_PER_DAY, SECONDS_PER_DAY, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS @var ISO_SECTOR_SIZE: Size of an ISO image sector, in bytes. @var BYTES_PER_SECTOR: Number of bytes (B) per ISO sector. @var BYTES_PER_KBYTE: Number of bytes (B) per kilobyte (kB). @var BYTES_PER_MBYTE: Number of bytes (B) per megabyte (MB). @var BYTES_PER_GBYTE: Number of bytes (B) per gigabyte (GB). @var KBYTES_PER_MBYTE: Number of kilobytes (kB) per megabyte (MB). @var MBYTES_PER_GBYTE: Number of megabytes (MB) per gigabyte (GB). @var SECONDS_PER_MINUTE: Number of seconds per minute. @var MINUTES_PER_HOUR: Number of minutes per hour. @var HOURS_PER_DAY: Number of hours per day. @var SECONDS_PER_DAY: Number of seconds per day. @var UNIT_BYTES: Constant representing the byte (B) unit for conversion. @var UNIT_KBYTES: Constant representing the kilobyte (kB) unit for conversion. @var UNIT_MBYTES: Constant representing the megabyte (MB) unit for conversion. @var UNIT_GBYTES: Constant representing the gigabyte (GB) unit for conversion. @var UNIT_SECTORS: Constant representing the ISO sector unit for conversion. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import math import os import re import time import logging import string # pylint: disable=W0402 from subprocess import Popen, STDOUT, PIPE try: import pwd import grp _UID_GID_AVAILABLE = True except ImportError: _UID_GID_AVAILABLE = False from CedarBackup2.release import VERSION, DATE ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.util") outputLogger = logging.getLogger("CedarBackup2.output") ISO_SECTOR_SIZE = 2048.0 # in bytes BYTES_PER_SECTOR = ISO_SECTOR_SIZE BYTES_PER_KBYTE = 1024.0 KBYTES_PER_MBYTE = 1024.0 MBYTES_PER_GBYTE = 1024.0 BYTES_PER_MBYTE = BYTES_PER_KBYTE * KBYTES_PER_MBYTE BYTES_PER_GBYTE = BYTES_PER_MBYTE * MBYTES_PER_GBYTE SECONDS_PER_MINUTE = 60.0 MINUTES_PER_HOUR = 60.0 HOURS_PER_DAY = 24.0 SECONDS_PER_DAY = SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY UNIT_BYTES = 0 UNIT_KBYTES = 1 UNIT_MBYTES = 2 UNIT_GBYTES = 4 UNIT_SECTORS = 3 MTAB_FILE = "/etc/mtab" MOUNT_COMMAND = [ "mount", ] UMOUNT_COMMAND = [ "umount", ] DEFAULT_LANGUAGE = "C" LANG_VAR = "LANG" LOCALE_VARS = [ "LC_ADDRESS", "LC_ALL", "LC_COLLATE", "LC_CTYPE", "LC_IDENTIFICATION", "LC_MEASUREMENT", "LC_MESSAGES", "LC_MONETARY", "LC_NAME", "LC_NUMERIC", "LC_PAPER", "LC_TELEPHONE", "LC_TIME", ] ######################################################################## # UnorderedList class definition ######################################################################## class UnorderedList(list): """ Class representing an "unordered list". An "unordered list" is a list in which only the contents matter, not the order in which the contents appear in the list. 
For instance, we might be keeping track of set of paths in a list, because it's convenient to have them in that form. However, for comparison purposes, we would only care that the lists contain exactly the same contents, regardless of order. I have come up with two reasonable ways of doing this, plus a couple more that would work but would be a pain to implement. My first method is to copy and sort each list, comparing the sorted versions. This will only work if two lists with exactly the same members are guaranteed to sort in exactly the same order. The second way would be to create two Sets and then compare the sets. However, this would lose information about any duplicates in either list. I've decided to go with option #1 for now. I'll modify this code if I run into problems in the future. We override the original C{__eq__}, C{__ne__}, C{__ge__}, C{__gt__}, C{__le__} and C{__lt__} list methods to change the definition of the various comparison operators. In all cases, the comparison is changed to return the result of the original operation I{but instead comparing sorted lists}. This is going to be quite a bit slower than a normal list, so you probably only want to use it on small lists. """ def __eq__(self, other): """ Definition of C{==} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self == other}. """ if other is None: return False selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__eq__(otherSorted) def __ne__(self, other): """ Definition of C{!=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self != other}. """ if other is None: return True selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__ne__(otherSorted) def __ge__(self, other): """ Definition of S{>=} operator for this class. @param other: Other object to compare to. 
@return: True/false depending on whether C{self >= other}. """ if other is None: return True selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__ge__(otherSorted) def __gt__(self, other): """ Definition of C{>} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self > other}. """ if other is None: return True selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__gt__(otherSorted) def __le__(self, other): """ Definition of S{<=} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self <= other}. """ if other is None: return False selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__le__(otherSorted) def __lt__(self, other): """ Definition of C{<} operator for this class. @param other: Other object to compare to. @return: True/false depending on whether C{self < other}. """ if other is None: return False selfSorted = self[:] otherSorted = other[:] selfSorted.sort() otherSorted.sort() return selfSorted.__lt__(otherSorted) ######################################################################## # AbsolutePathList class definition ######################################################################## class AbsolutePathList(UnorderedList): """ Class representing a list of absolute paths. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is an absolute path. Each item added to the list is encoded using L{encodePath}. If we don't do this, we have problems trying certain operations between strings and unicode objects, particularly for "odd" filenames that can't be encoded in standard ASCII. """ def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is not an absolute path. 
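The order-insensitive comparison strategy described for C{UnorderedList} (compare sorted copies, so only contents matter) can be demonstrated with a minimal sketch. This is an illustrative Python 3 reimplementation of just the equality operators, not the shipped class:

```python
class UnorderedListSketch(list):
    """Minimal sketch of the sorted-copy comparison strategy described above."""

    def __eq__(self, other):
        if other is None:
            return False
        # Compare sorted copies, so order is ignored but duplicates still count.
        return sorted(self) == sorted(other)

    def __ne__(self, other):
        return not self.__eq__(other)

a = UnorderedListSketch([3, 1, 2])
b = UnorderedListSketch([1, 2, 3])
print(a == b)                            # True: only contents matter, not order
print(a == UnorderedListSketch([1, 2]))  # False: length/duplicates still matter
```

As the class docstring notes, this approach preserves duplicate information (unlike comparing two sets) at the cost of an O(n log n) sort on every comparison.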
""" if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) list.append(self, encodePath(item)) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is not an absolute path. """ if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) list.insert(self, index, encodePath(item)) def extend(self, seq): """ Overrides the standard C{insert} method. @raise ValueError: If any item is not an absolute path. """ for item in seq: if not os.path.isabs(item): raise ValueError("Not an absolute path: [%s]" % item) for item in seq: list.append(self, encodePath(item)) ######################################################################## # ObjectTypeList class definition ######################################################################## class ObjectTypeList(UnorderedList): """ Class representing a list containing only objects with a certain type. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list matches the type that is requested. The comparison uses the built-in C{isinstance}, which should allow subclasses of of the requested type to be added to the list as well. The C{objectName} value will be used in exceptions, i.e. C{"Item must be a CollectDir object."} if C{objectName} is C{"CollectDir"}. """ def __init__(self, objectType, objectName): """ Initializes a typed list for a particular type. @param objectType: Type that the list elements must match. @param objectName: Short string containing the "name" of the type. """ super(ObjectTypeList, self).__init__() self.objectType = objectType self.objectName = objectName def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item does not match requested type. """ if not isinstance(item, self.objectType): raise ValueError("Item must be a %s object." 
% self.objectName) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item does not match requested type. """ if not isinstance(item, self.objectType): raise ValueError("Item must be a %s object." % self.objectName) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{insert} method. @raise ValueError: If item does not match requested type. """ for item in seq: if not isinstance(item, self.objectType): raise ValueError("All items must be %s objects." % self.objectName) list.extend(self, seq) ######################################################################## # RestrictedContentList class definition ######################################################################## class RestrictedContentList(UnorderedList): """ Class representing a list containing only object with certain values. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is among the valid values. We use a standard comparison, so pretty much anything can be in the list of valid values. The C{valuesDescr} value will be used in exceptions, i.e. C{"Item must be one of values in VALID_ACTIONS"} if C{valuesDescr} is C{"VALID_ACTIONS"}. @note: This class doesn't make any attempt to trap for nonsensical arguments. All of the values in the values list should be of the same type (i.e. strings). Then, all list operations also need to be of that type (i.e. you should always insert or append just strings). If you mix types -- for instance lists and strings -- you will likely see AttributeError exceptions or other problems. """ def __init__(self, valuesList, valuesDescr, prefix=None): """ Initializes a list restricted to containing certain values. @param valuesList: List of valid values. @param valuesDescr: Short string describing list of values. 
      @param prefix: Prefix to use in error messages (None results in prefix "Item")
      """
      super(RestrictedContentList, self).__init__()
      self.prefix = "Item"
      if prefix is not None:
         self.prefix = prefix
      self.valuesList = valuesList
      self.valuesDescr = valuesDescr

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not in the values list.
      """
      if item not in self.valuesList:
         raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not in the values list.
      """
      if item not in self.valuesList:
         raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not in the values list.
      """
      for item in seq:
         if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.extend(self, seq)


########################################################################
# RegexMatchList class definition
########################################################################

class RegexMatchList(UnorderedList):

   """
   Class representing a list containing only strings that match a regular
   expression.

   If C{emptyAllowed} is passed in as C{False}, then empty strings are
   explicitly disallowed, even if they happen to match the regular
   expression.  (C{None} values are always disallowed, since string
   operations are not permitted on C{None}.)

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list matches the indicated regular expression.

   @note: If you try to put values that are not strings into the list, you
   will likely get either TypeError or AttributeError exceptions as a result.
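To illustrate the validating-list pattern shared by this class and its siblings, here is a minimal standalone sketch.  The C{ValidatingList} name and the example pattern are hypothetical, for illustration only; they are not part of Cedar Backup:

```python
import re

class ValidatingList(list):
   """Minimal sketch: a list subclass that only accepts matching strings."""
   def __init__(self, pattern):
      super(ValidatingList, self).__init__()
      self.pattern = re.compile(pattern)
   def append(self, item):
      # Reject None outright, then reject anything the regex doesn't match
      if item is None or not self.pattern.search(item):
         raise ValueError("Item is not valid: [%s]" % item)
      list.append(self, item)

times = ValidatingList(r"^\d{2}:\d{2}$")
times.append("09:30")           # accepted; "noon" would raise ValueError
```

The same shape (validate, then delegate to the built-in C{list} method) is what C{insert} and C{extend} follow in the real classes.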
""" def __init__(self, valuesRegex, emptyAllowed=True, prefix=None): """ Initializes a list restricted to containing certain values. @param valuesRegex: Regular expression that must be matched, as a string @param emptyAllowed: Indicates whether empty or None values are allowed. @param prefix: Prefix to use in error messages (None results in prefix "Item") """ super(RegexMatchList, self).__init__() self.prefix = "Item" if prefix is not None: self.prefix = prefix self.valuesRegex = valuesRegex self.emptyAllowed = emptyAllowed self.pattern = re.compile(self.valuesRegex) def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is None @raise ValueError: If item is empty and empty values are not allowed @raise ValueError: If item does not match the configured regular expression """ if item is None or (not self.emptyAllowed and item == ""): raise ValueError("%s cannot be empty." % self.prefix) if not self.pattern.search(item): raise ValueError("%s is not valid: [%s]" % (self.prefix, item)) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is None @raise ValueError: If item is empty and empty values are not allowed @raise ValueError: If item does not match the configured regular expression """ if item is None or (not self.emptyAllowed and item == ""): raise ValueError("%s cannot be empty." % self.prefix) if not self.pattern.search(item): raise ValueError("%s is not valid [%s]" % (self.prefix, item)) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{insert} method. @raise ValueError: If any item is None @raise ValueError: If any item is empty and empty values are not allowed @raise ValueError: If any item does not match the configured regular expression """ for item in seq: if item is None or (not self.emptyAllowed and item == ""): raise ValueError("%s cannot be empty." 
% self.prefix) if not self.pattern.search(item): raise ValueError("%s is not valid: [%s]" % (self.prefix, item)) list.extend(self, seq) ######################################################################## # RegexList class definition ######################################################################## class RegexList(UnorderedList): """ Class representing a list of valid regular expression strings. This is an unordered list. We override the C{append}, C{insert} and C{extend} methods to ensure that any item added to the list is a valid regular expression. """ def append(self, item): """ Overrides the standard C{append} method. @raise ValueError: If item is not an absolute path. """ try: re.compile(item) except re.error: raise ValueError("Not a valid regular expression: [%s]" % item) list.append(self, item) def insert(self, index, item): """ Overrides the standard C{insert} method. @raise ValueError: If item is not an absolute path. """ try: re.compile(item) except re.error: raise ValueError("Not a valid regular expression: [%s]" % item) list.insert(self, index, item) def extend(self, seq): """ Overrides the standard C{insert} method. @raise ValueError: If any item is not an absolute path. """ for item in seq: try: re.compile(item) except re.error: raise ValueError("Not a valid regular expression: [%s]" % item) for item in seq: list.append(self, item) ######################################################################## # Directed graph implementation ######################################################################## class _Vertex(object): """ Represents a vertex (or node) in a directed graph. """ def __init__(self, name): """ Constructor. @param name: Name of this graph vertex. @type name: String value. """ self.name = name self.endpoints = [] self.state = None class DirectedGraph(object): """ Represents a directed graph. A graph B{G=(V,E)} consists of a set of vertices B{V} together with a set B{E} of vertex pairs or edges. 
   In a directed graph, each edge also has an associated direction (from
   vertex B{v1} to vertex B{v2}).  A C{DirectedGraph} object provides a way
   to construct a directed graph and execute a depth-first search.

   This data structure was designed based on the graphing chapter in
   U{The Algorithm Design Manual}, by Steven S. Skiena.

   This class is intended to be used by Cedar Backup for dependency ordering.
   Because of this, it's not quite general-purpose.  Unlike a "general"
   graph, every vertex in this graph has at least one edge pointing to it,
   from a special "start" vertex.  This is so no vertices get "lost" either
   because they have no dependencies or because nothing depends on them.
   """

   _UNDISCOVERED = 0
   _DISCOVERED = 1
   _EXPLORED = 2

   def __init__(self, name):
      """
      Directed graph constructor.
      @param name: Name of this graph.
      @type name: String value.
      """
      if name is None or name == "":
         raise ValueError("Graph name must be non-empty.")
      self._name = name
      self._vertices = {}
      self._startVertex = _Vertex(None)  # start vertex is only vertex with no name

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "DirectedGraph(%s)" % self.name

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      # pylint: disable=W0212
      if other is None:
         return 1
      if self.name != other.name:
         if self.name < other.name:
            return -1
         else:
            return 1
      if self._vertices != other._vertices:
         if self._vertices < other._vertices:
            return -1
         else:
            return 1
      return 0

   def _getName(self):
      """
      Property target used to get the graph name.
      """
      return self._name

   name = property(_getName, None, None, "Name of the graph.")

   def createVertex(self, name):
      """
      Creates a named vertex.
      @param name: vertex name
      @raise ValueError: If the vertex name is C{None} or empty.
""" if name is None or name == "": raise ValueError("Vertex name must be non-empty.") vertex = _Vertex(name) self._startVertex.endpoints.append(vertex) # so every vertex is connected at least once self._vertices[name] = vertex def createEdge(self, start, finish): """ Adds an edge with an associated direction, from C{start} vertex to C{finish} vertex. @param start: Name of start vertex. @param finish: Name of finish vertex. @raise ValueError: If one of the named vertices is unknown. """ try: startVertex = self._vertices[start] finishVertex = self._vertices[finish] startVertex.endpoints.append(finishVertex) except KeyError, e: raise ValueError("Vertex [%s] could not be found." % e) def topologicalSort(self): """ Implements a topological sort of the graph. This method also enforces that the graph is a directed acyclic graph, which is a requirement of a topological sort. A directed acyclic graph (or "DAG") is a directed graph with no directed cycles. A topological sort of a DAG is an ordering on the vertices such that all edges go from left to right. Only an acyclic graph can have a topological sort, but any DAG has at least one topological sort. Since a topological sort only makes sense for an acyclic graph, this method throws an exception if a cycle is found. A depth-first search only makes sense if the graph is acyclic. If the graph contains any cycles, it is not possible to determine a consistent ordering for the vertices. @note: If a particular vertex has no edges, then its position in the final list depends on the order in which the vertices were created in the graph. If you're using this method to determine a dependency order, this makes sense: a vertex with no dependencies can go anywhere (and will). @return: Ordering on the vertices so that all edges go from left to right. @raise ValueError: If a cycle is found in the graph. 
""" ordering = [] for key in self._vertices: vertex = self._vertices[key] vertex.state = self._UNDISCOVERED for key in self._vertices: vertex = self._vertices[key] if vertex.state == self._UNDISCOVERED: self._topologicalSort(self._startVertex, ordering) return ordering def _topologicalSort(self, vertex, ordering): """ Recursive depth first search function implementing topological sort. @param vertex: Vertex to search @param ordering: List of vertices in proper order """ vertex.state = self._DISCOVERED for endpoint in vertex.endpoints: if endpoint.state == self._UNDISCOVERED: self._topologicalSort(endpoint, ordering) elif endpoint.state != self._EXPLORED: raise ValueError("Cycle found in graph (found '%s' while searching '%s')." % (vertex.name, endpoint.name)) if vertex.name is not None: ordering.insert(0, vertex.name) vertex.state = self._EXPLORED ######################################################################## # PathResolverSingleton class definition ######################################################################## class PathResolverSingleton(object): """ Singleton used for resolving executable paths. Various functions throughout Cedar Backup (including extensions) need a way to resolve the path of executables that they use. For instance, the image functionality needs to find the C{mkisofs} executable, and the Subversion extension needs to find the C{svnlook} executable. Cedar Backup's original behavior was to assume that the simple name (C{"svnlook"} or whatever) was available on the caller's C{$PATH}, and to fail otherwise. However, this turns out to be less than ideal, since for instance the root user might not always have executables like C{svnlook} in its path. One solution is to specify a path (either via an absolute path or some sort of path insertion or path appending mechanism) that would apply to the C{executeCommand()} function. This is not difficult to implement, but it seem like kind of a "big hammer" solution. 
Besides that, it might also represent a security flaw (for instance, I prefer not to mess with root's C{$PATH} on the application level if I don't have to). The alternative is to set up some sort of configuration for the path to certain executables, i.e. "find C{svnlook} in C{/usr/local/bin/svnlook}" or whatever. This PathResolverSingleton aims to provide a good solution to the mapping problem. Callers of all sorts (extensions or not) can get an instance of the singleton. Then, they call the C{lookup} method to try and resolve the executable they are looking for. Through the C{lookup} method, the caller can also specify a default to use if a mapping is not found. This way, with no real effort on the part of the caller, behavior can neatly degrade to something equivalent to the current behavior if there is no special mapping or if the singleton was never initialized in the first place. Even better, extensions automagically get access to the same resolver functionality, and they don't even need to understand how the mapping happens. All extension authors need to do is document what executables their code requires, and the standard resolver configuration section will meet their needs. The class should be initialized once through the constructor somewhere in the main routine. Then, the main routine should call the L{fill} method to fill in the resolver's internal structures. Everyone else who needs to resolve a path will get an instance of the class using L{getInstance} and will then just call the L{lookup} method. @cvar _instance: Holds a reference to the singleton @ivar _mapping: Internal mapping from resource name to path. 
""" _instance = None # Holds a reference to singleton instance class _Helper(object): """Helper class to provide a singleton factory method.""" def __init__(self): pass def __call__(self, *args, **kw): # pylint: disable=W0212,R0201 if PathResolverSingleton._instance is None: obj = PathResolverSingleton() PathResolverSingleton._instance = obj return PathResolverSingleton._instance getInstance = _Helper() # Method that callers will use to get an instance def __init__(self): """Singleton constructor, which just creates the singleton instance.""" if PathResolverSingleton._instance is not None: raise RuntimeError("Only one instance of PathResolverSingleton is allowed!") PathResolverSingleton._instance = self self._mapping = { } def lookup(self, name, default=None): """ Looks up name and returns the resolved path associated with the name. @param name: Name of the path resource to resolve. @param default: Default to return if resource cannot be resolved. @return: Resolved path associated with name, or default if name can't be resolved. """ value = default if name in self._mapping.keys(): value = self._mapping[name] logger.debug("Resolved command [%s] to [%s].", name, value) return value def fill(self, mapping): """ Fills in the singleton's internal mapping from name to resource. @param mapping: Mapping from resource name to path. @type mapping: Dictionary mapping name to path, both as strings. """ self._mapping = { } for key in mapping.keys(): self._mapping[key] = mapping[key] ######################################################################## # Pipe class definition ######################################################################## class Pipe(Popen): """ Specialized pipe class for use by C{executeCommand}. The L{executeCommand} function needs a specialized way of interacting with a pipe. First, C{executeCommand} only reads from the pipe, and never writes to it. 
Second, C{executeCommand} needs a way to discard all output written to C{stderr}, as a means of simulating the shell C{2>/dev/null} construct. """ def __init__(self, cmd, bufsize=-1, ignoreStderr=False): stderr = STDOUT if ignoreStderr: devnull = nullDevice() stderr = os.open(devnull, os.O_RDWR) Popen.__init__(self, shell=False, args=cmd, bufsize=bufsize, stdin=None, stdout=PIPE, stderr=stderr) ######################################################################## # Diagnostics class definition ######################################################################## class Diagnostics(object): """ Class holding runtime diagnostic information. Diagnostic information is information that is useful to get from users for debugging purposes. I'm consolidating it all here into one object. @sort: __init__, __repr__, __str__ """ # pylint: disable=R0201 def __init__(self): """ Constructor for the C{Diagnostics} class. """ def __repr__(self): """ Official string representation for class instance. """ return "Diagnostics()" def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def getValues(self): """ Get a map containing all of the diagnostic values. @return: Map from diagnostic name to diagnostic value. """ values = {} values['version'] = self.version values['interpreter'] = self.interpreter values['platform'] = self.platform values['encoding'] = self.encoding values['locale'] = self.locale values['timestamp'] = self.timestamp return values def printDiagnostics(self, fd=sys.stdout, prefix=""): """ Pretty-print diagnostic information to a file descriptor. @param fd: File descriptor used to print information. @param prefix: Prefix string (if any) to place onto printed lines @note: The C{fd} is used rather than C{print} to facilitate unit testing. 
""" lines = self._buildDiagnosticLines(prefix) for line in lines: fd.write("%s\n" % line) def logDiagnostics(self, method, prefix=""): """ Pretty-print diagnostic information using a logger method. @param method: Logger method to use for logging (i.e. logger.info) @param prefix: Prefix string (if any) to place onto printed lines """ lines = self._buildDiagnosticLines(prefix) for line in lines: method("%s" % line) def _buildDiagnosticLines(self, prefix=""): """ Build a set of pretty-printed diagnostic lines. @param prefix: Prefix string (if any) to place onto printed lines @return: List of strings, not terminated by newlines. """ values = self.getValues() keys = values.keys() keys.sort() tmax = Diagnostics._getMaxLength(keys) + 3 # three extra dots in output lines = [] for key in keys: title = key.title() title += (tmax - len(title)) * '.' value = values[key] line = "%s%s: %s" % (prefix, title, value) lines.append(line) return lines @staticmethod def _getMaxLength(values): """ Get the maximum length from among a list of strings. """ tmax = 0 for value in values: if len(value) > tmax: tmax = len(value) return tmax def _getVersion(self): """ Property target to get the Cedar Backup version. """ return "Cedar Backup %s (%s)" % (VERSION, DATE) def _getInterpreter(self): """ Property target to get the Python interpreter version. """ version = sys.version_info return "Python %d.%d.%d (%s)" % (version[0], version[1], version[2], version[3]) def _getEncoding(self): """ Property target to get the filesystem encoding. """ return sys.getfilesystemencoding() or sys.getdefaultencoding() def _getPlatform(self): """ Property target to get the operating system platform. 
""" try: if sys.platform.startswith("win"): windowsPlatforms = [ "Windows 3.1", "Windows 95/98/ME", "Windows NT/2000/XP", "Windows CE", ] wininfo = sys.getwindowsversion() # pylint: disable=E1101 winversion = "%d.%d.%d" % (wininfo[0], wininfo[1], wininfo[2]) winplatform = windowsPlatforms[wininfo[3]] wintext = wininfo[4] # i.e. "Service Pack 2" return "%s (%s %s %s)" % (sys.platform, winplatform, winversion, wintext) else: uname = os.uname() sysname = uname[0] # i.e. Linux release = uname[2] # i.e. 2.16.18-2 machine = uname[4] # i.e. i686 return "%s (%s %s %s)" % (sys.platform, sysname, release, machine) except: return sys.platform def _getLocale(self): """ Property target to get the default locale that is in effect. """ try: import locale return locale.getdefaultlocale()[0] except: return "(unknown)" def _getTimestamp(self): """ Property target to get a current date/time stamp. """ try: import datetime return datetime.datetime.utcnow().ctime() + " UTC" except: return "(unknown)" version = property(_getVersion, None, None, "Cedar Backup version.") interpreter = property(_getInterpreter, None, None, "Python interpreter version.") platform = property(_getPlatform, None, None, "Platform identifying information.") encoding = property(_getEncoding, None, None, "Filesystem encoding that is in effect.") locale = property(_getLocale, None, None, "Locale that is in effect.") timestamp = property(_getTimestamp, None, None, "Current timestamp.") ######################################################################## # General utility functions ######################################################################## ###################### # sortDict() function ###################### def sortDict(d): """ Returns the keys of the dictionary sorted by value. There are cuter ways to do this in Python 2.4, but we were originally attempting to stay compatible with Python 2.3. @param d: Dictionary to operate on @return: List of dictionary keys sorted in order by dictionary value. 
""" items = d.items() items.sort(lambda x, y: cmp(x[1], y[1])) return [key for key, value in items] ######################## # removeKeys() function ######################## def removeKeys(d, keys): """ Removes all of the keys from the dictionary. The dictionary is altered in-place. Each key must exist in the dictionary. @param d: Dictionary to operate on @param keys: List of keys to remove @raise KeyError: If one of the keys does not exist """ for key in keys: del d[key] ######################### # convertSize() function ######################### def convertSize(size, fromUnit, toUnit): """ Converts a size in one unit to a size in another unit. This is just a convenience function so that the functionality can be implemented in just one place. Internally, we convert values to bytes and then to the final unit. The available units are: - C{UNIT_BYTES} - Bytes - C{UNIT_KBYTES} - Kilobytes, where 1 kB = 1024 B - C{UNIT_MBYTES} - Megabytes, where 1 MB = 1024 kB - C{UNIT_GBYTES} - Gigabytes, where 1 GB = 1024 MB - C{UNIT_SECTORS} - Sectors, where 1 sector = 2048 B @param size: Size to convert @type size: Integer or float value in units of C{fromUnit} @param fromUnit: Unit to convert from @type fromUnit: One of the units listed above @param toUnit: Unit to convert to @type toUnit: One of the units listed above @return: Number converted to new unit, as a float. @raise ValueError: If one of the units is invalid. """ if size is None: raise ValueError("Cannot convert size of None.") if fromUnit == UNIT_BYTES: byteSize = float(size) elif fromUnit == UNIT_KBYTES: byteSize = float(size) * BYTES_PER_KBYTE elif fromUnit == UNIT_MBYTES: byteSize = float(size) * BYTES_PER_MBYTE elif fromUnit == UNIT_GBYTES: byteSize = float(size) * BYTES_PER_GBYTE elif fromUnit == UNIT_SECTORS: byteSize = float(size) * BYTES_PER_SECTOR else: raise ValueError("Unknown 'from' unit %s." 
% fromUnit) if toUnit == UNIT_BYTES: return byteSize elif toUnit == UNIT_KBYTES: return byteSize / BYTES_PER_KBYTE elif toUnit == UNIT_MBYTES: return byteSize / BYTES_PER_MBYTE elif toUnit == UNIT_GBYTES: return byteSize / BYTES_PER_GBYTE elif toUnit == UNIT_SECTORS: return byteSize / BYTES_PER_SECTOR else: raise ValueError("Unknown 'to' unit %s." % toUnit) ########################## # displayBytes() function ########################## def displayBytes(bytes, digits=2): # pylint: disable=W0622 """ Format a byte quantity so it can be sensibly displayed. It's rather difficult to look at a number like "72372224 bytes" and get any meaningful information out of it. It would be more useful to see something like "69.02 MB". That's what this function does. Any time you want to display a byte value, i.e.:: print "Size: %s bytes" % bytes Call this function instead:: print "Size: %s" % displayBytes(bytes) What comes out will be sensibly formatted. The indicated number of digits will be listed after the decimal point, rounded based on whatever rules are used by Python's standard C{%f} string format specifier. (Values less than 1 kB will be listed in bytes and will not have a decimal point, since the concept of a fractional byte is nonsensical.) @param bytes: Byte quantity. @type bytes: Integer number of bytes. @param digits: Number of digits to display after the decimal point. @type digits: Integer value, typically 2-5. @return: String, formatted for sensible display. """ if bytes is None: raise ValueError("Cannot display byte value of None.") bytes = float(bytes) if math.fabs(bytes) < BYTES_PER_KBYTE: fmt = "%.0f bytes" value = bytes elif math.fabs(bytes) < BYTES_PER_MBYTE: fmt = "%." + "%d" % digits + "f kB" value = bytes / BYTES_PER_KBYTE elif math.fabs(bytes) < BYTES_PER_GBYTE: fmt = "%." + "%d" % digits + "f MB" value = bytes / BYTES_PER_MBYTE else: fmt = "%." 
+ "%d" % digits + "f GB" value = bytes / BYTES_PER_GBYTE return fmt % value ################################## # getFunctionReference() function ################################## def getFunctionReference(module, function): """ Gets a reference to a named function. This does some hokey-pokey to get back a reference to a dynamically named function. For instance, say you wanted to get a reference to the C{os.path.isdir} function. You could use:: myfunc = getFunctionReference("os.path", "isdir") Although we won't bomb out directly, behavior is pretty much undefined if you pass in C{None} or C{""} for either C{module} or C{function}. The only validation we enforce is that whatever we get back must be callable. I derived this code based on the internals of the Python unittest implementation. I don't claim to completely understand how it works. @param module: Name of module associated with function. @type module: Something like "os.path" or "CedarBackup2.util" @param function: Name of function @type function: Something like "isdir" or "getUidGid" @return: Reference to function associated with name. @raise ImportError: If the function cannot be found. @raise ValueError: If the resulting reference is not callable. @copyright: Some of this code, prior to customization, was originally part of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved. """ parts = [] if module is not None and module != "": parts = module.split(".") if function is not None and function != "": parts.append(function) copy = parts[:] while copy: try: module = __import__(string.join(copy, ".")) break except ImportError: del copy[-1] if not copy: raise parts = parts[1:] obj = module for part in parts: obj = getattr(obj, part) if not callable(obj): raise ValueError("Reference to %s.%s is not callable." 
% (module, function)) return obj ####################### # getUidGid() function ####################### def getUidGid(user, group): """ Get the uid/gid associated with a user/group pair This is a no-op if user/group functionality is not available on the platform. @param user: User name @type user: User name as a string @param group: Group name @type group: Group name as a string @return: Tuple C{(uid, gid)} matching passed-in user and group. @raise ValueError: If the ownership user/group values are invalid """ if _UID_GID_AVAILABLE: try: uid = pwd.getpwnam(user)[2] gid = grp.getgrnam(group)[2] return (uid, gid) except Exception, e: logger.debug("Error looking up uid and gid for [%s:%s]: %s", user, group, e) raise ValueError("Unable to lookup up uid and gid for passed in user/group.") else: return (0, 0) ############################# # changeOwnership() function ############################# def changeOwnership(path, user, group): """ Changes ownership of path to match the user and group. This is a no-op if user/group functionality is not available on the platform, or if the either passed-in user or group is C{None}. Further, we won't even try to do it unless running as root, since it's unlikely to work. @param path: Path whose ownership to change. @param user: User which owns file. @param group: Group which owns file. """ if _UID_GID_AVAILABLE: if user is None or group is None: logger.debug("User or group is None, so not attempting to change owner on [%s].", path) elif not isRunningAsRoot(): logger.debug("Not root, so not attempting to change owner on [%s].", path) else: try: (uid, gid) = getUidGid(user, group) os.chown(path, uid, gid) except Exception, e: logger.error("Error changing ownership of [%s]: %s", path, e) ############################# # isRunningAsRoot() function ############################# def isRunningAsRoot(): """ Indicates whether the program is running as the root user. 
""" return os.getuid() == 0 ############################## # splitCommandLine() function ############################## def splitCommandLine(commandLine): """ Splits a command line string into a list of arguments. Unfortunately, there is no "standard" way to parse a command line string, and it's actually not an easy problem to solve portably (essentially, we have to emulate the shell argument-processing logic). This code only respects double quotes (C{"}) for grouping arguments, not single quotes (C{'}). Make sure you take this into account when building your command line. Incidentally, I found this particular parsing method while digging around in Google Groups, and I tweaked it for my own use. @param commandLine: Command line string @type commandLine: String, i.e. "cback --verbose stage store" @return: List of arguments, suitable for passing to C{popen2}. @raise ValueError: If the command line is None. """ if commandLine is None: raise ValueError("Cannot split command line of None.") fields = re.findall('[^ "]+|"[^"]+"', commandLine) fields = [field.replace('"', '') for field in fields] return fields ############################ # resolveCommand() function ############################ def resolveCommand(command): """ Resolves the real path to a command through the path resolver mechanism. Both extensions and standard Cedar Backup functionality need a way to resolve the "real" location of various executables. Normally, they assume that these executables are on the system path, but some callers need to specify an alternate location. Ideally, we want to handle this configuration in a central location. The Cedar Backup path resolver mechanism (a singleton called L{PathResolverSingleton}) provides the central location to store the mappings. This function wraps access to the singleton, and is what all functions (extensions or standard functionality) should call if they need to find a command. 
   The passed-in command must actually be a list, in the standard form used
   by all existing Cedar Backup code (something like C{["svnlook", ]}).  The
   lookup will actually be done on the first element in the list, and the
   returned command will always be in list form as well.

   If the passed-in command can't be resolved or no mapping exists, then the
   command itself will be returned unchanged.  This way, we neatly fall back
   on default behavior if we have no sensible alternative.

   @param command: Command to resolve.
   @type command: List form of command, i.e. C{["svnlook", ]}.

   @return: Path to command or just command itself if no mapping exists.
   """
   singleton = PathResolverSingleton.getInstance()
   name = command[0]
   result = command[:]
   result[0] = singleton.lookup(name, name)
   return result


############################
# executeCommand() function
############################

def executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None):
   """
   Executes a shell command, hopefully in a safe way.

   This function exists to replace direct calls to C{os.popen} in the Cedar
   Backup code.  It's not safe to call a function such as C{os.popen()} with
   untrusted arguments, since that can cause problems if the string contains
   non-safe variables or other constructs (imagine that the argument is
   C{$WHATEVER}, but C{$WHATEVER} contains something like C{"; rm -fR ~/;
   echo"} in the current environment).  Instead, it's safer to pass a list of
   arguments in the style supported by C{popen2} or C{popen4}.  This function
   actually uses a specialized C{Pipe} class implemented using either
   C{subprocess.Popen} or C{popen2.Popen4}.

   Under the normal case, this function will return a tuple of C{(status,
   None)} where the status is the wait-encoded return status of the call per
   the C{popen2.Popen4} documentation.
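The calling convention can be sketched with C{subprocess} directly.  This is a simplified standalone function (the C{execute} name is illustrative, and the real implementation adds logging, output capture options, and environment sanitization):

```python
from subprocess import Popen, PIPE, STDOUT

def execute(command, args):
   """Sketch: the constant part of the command (i.e. ["scp", "-B"]) stays
   separate from its variable arguments, and the combined list reaches the
   pipe without any shell interpretation of untrusted input."""
   fields = command[:]          # copy, so the caller's list is not modified
   fields.extend(args)
   pipe = Popen(fields, shell=False, stdout=PIPE, stderr=STDOUT)
   output = pipe.communicate()[0].decode("utf-8").splitlines()
   return pipe.returncode, output

status, output = execute(["echo"], ["first", "second"])
```

Because C{shell=False} and the arguments are a list, a malicious argument like C{"; rm -fR ~/"} is passed to the program as literal text rather than being interpreted by a shell.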
If C{returnOutput} is passed in as C{True}, the function will return a tuple of C{(status, output)} where C{output} is a list of strings, one entry per line in the output from the command. Output is always logged to the C{outputLogger.info()} target, regardless of whether it's returned. By default, C{stdout} and C{stderr} will be intermingled in the output. However, if you pass in C{ignoreStderr=True}, then only C{stdout} will be included in the output. The C{doNotLog} parameter exists so that callers can force the function to not log command output to the debug log. Normally, you would want to log. However, if you're using this function to write huge output files (i.e. database backups written to C{stdout}) then you might want to avoid putting all that information into the debug log. The C{outputFile} parameter exists to make it easier for a caller to push output into a file, i.e. as a substitute for redirection to a file. If this value is passed in, each time a line of output is generated, it will be written to the file using C{outputFile.write()}. At the end, the file descriptor will be flushed using C{outputFile.flush()}. The caller maintains responsibility for closing the file object appropriately. @note: I know that it's a bit confusing that the command and the arguments are both lists. I could have just required the caller to pass in one big list. However, I think it makes some sense to keep the command (the constant part of what we're executing, i.e. C{"scp -B"}) separate from its arguments, even if they both end up looking kind of similar. @note: You cannot redirect output via shell constructs (i.e. C{>file}, C{2>/dev/null}, etc.) using this function. The redirection string would be passed to the command just like any other argument. However, you can implement the equivalent to redirection using C{ignoreStderr} and C{outputFile}, as discussed above. @note: The operating system environment is partially sanitized before the command is invoked. 
See L{sanitizeEnvironment} for details. @param command: Shell command to execute @type command: List of individual arguments that make up the command @param args: List of arguments to the command @type args: List of additional arguments to the command @param returnOutput: Indicates whether to return the output of the command @type returnOutput: Boolean C{True} or C{False} @param ignoreStderr: Whether stderr should be discarded @type ignoreStderr: Boolean True or False @param doNotLog: Indicates that output should not be logged. @type doNotLog: Boolean C{True} or C{False} @param outputFile: File object that all output should be written to. @type outputFile: File object as returned from C{open()} or C{file()}. @return: Tuple of C{(result, output)} as described above. """ logger.debug("Executing command %s with args %s.", command, args) outputLogger.info("Executing command %s with args %s.", command, args) if doNotLog: logger.debug("Note: output will not be logged, per the doNotLog flag.") outputLogger.info("Note: output will not be logged, per the doNotLog flag.") output = [] fields = command[:] # make sure to copy it so we don't destroy it fields.extend(args) try: sanitizeEnvironment() # make sure we have a consistent environment try: pipe = Pipe(fields, ignoreStderr=ignoreStderr) except OSError: # On some platforms (i.e. Cygwin) this intermittently fails the first time we do it. # So, we attempt it a second time and if that works, we just go on as usual. # The problem appears to be that we sometimes get a bad stderr file descriptor. 
pipe = Pipe(fields, ignoreStderr=ignoreStderr) while True: line = pipe.stdout.readline() if not line: break if returnOutput: output.append(line) if outputFile is not None: outputFile.write(line) if not doNotLog: outputLogger.info(line[:-1]) # this way the log will (hopefully) get updated in realtime if outputFile is not None: try: # note, not every file-like object can be flushed outputFile.flush() except: pass if returnOutput: return (pipe.wait(), output) else: return (pipe.wait(), None) except OSError, e: try: if returnOutput: if output != []: return (pipe.wait(), output) else: return (pipe.wait(), [ e, ]) else: return (pipe.wait(), None) except UnboundLocalError: # pipe not set if returnOutput: return (256, []) else: return (256, None) ############################## # calculateFileAge() function ############################## def calculateFileAge(path): """ Calculates the age (in days) of a file. The "age" of a file is the amount of time since the file was last used, per the most recent of the file's C{st_atime} and C{st_mtime} values. Technically, we only intend this function to work with files, but it will probably work with anything on the filesystem. @param path: Path to a file on disk. @return: Age of the file in days (possibly fractional). @raise OSError: If the file doesn't exist. """ currentTime = int(time.time()) fileStats = os.stat(path) lastUse = max(fileStats.st_atime, fileStats.st_mtime) # "most recent" is "largest" ageInSeconds = currentTime - lastUse ageInDays = ageInSeconds / SECONDS_PER_DAY return ageInDays ################### # mount() function ################### def mount(devicePath, mountPoint, fsType): """ Mounts the indicated device at the indicated mount point. For instance, to mount a CD, you might use device path C{/dev/cdrw}, mount point C{/media/cdrw} and filesystem type C{iso9660}. You can safely use any filesystem type that is supported by C{mount} on your platform. 
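The age computation in C{calculateFileAge()} above reduces to a few lines; this standalone sketch uses a float C{SECONDS_PER_DAY} (the constant lives elsewhere in the module) so the result can be fractional:

```python
import os
import time

SECONDS_PER_DAY = 60.0 * 60.0 * 24.0  # float so the division stays fractional

def calculate_file_age(path):
    # "Most recent use" is the larger of access time and modification time.
    file_stats = os.stat(path)
    last_use = max(file_stats.st_atime, file_stats.st_mtime)
    return (int(time.time()) - last_use) / SECONDS_PER_DAY
```

A file touched just now has an age very close to zero; a file untouched for 36 hours reports roughly 1.5.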
If the type is C{None}, we'll attempt to let C{mount} auto-detect it. This may or may not work on all systems. @note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line C{"mount"} command, like UNIXes. It won't work on Windows. @param devicePath: Path of device to be mounted. @param mountPoint: Path that device should be mounted at. @param fsType: Type of the filesystem assumed to be available via the device. @raise IOError: If the device cannot be mounted. """ if fsType is None: args = [ devicePath, mountPoint ] else: args = [ "-t", fsType, devicePath, mountPoint ] command = resolveCommand(MOUNT_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True)[0] if result != 0: raise IOError("Error [%d] mounting [%s] at [%s] as [%s]." % (result, devicePath, mountPoint, fsType)) ##################### # unmount() function ##################### def unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0): """ Unmounts whatever device is mounted at the indicated mount point. Sometimes, it might not be possible to unmount the mount point immediately, if there are still files open there. Use the C{attempts} and C{waitSeconds} arguments to indicate how many unmount attempts to make and how many seconds to wait between attempts. If you pass in zero attempts, no attempts will be made (duh). If the indicated mount point is not really a mount point per C{os.path.ismount()}, then it will be ignored. This seems to be a safer check than looking through C{/etc/mtab}, since C{ismount()} is already in the Python standard library and is documented as working on all POSIX systems. If C{removeAfter} is C{True}, then the mount point will be removed using C{os.rmdir()} after the unmount action succeeds. If for some reason the mount point is not a directory, then it will not be removed.
@note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line C{"mount"} command, like UNIXes. It won't work on Windows. @param mountPoint: Mount point to be unmounted. @param removeAfter: Remove the mount point after unmounting it. @param attempts: Number of times to attempt the unmount. @param waitSeconds: Number of seconds to wait between repeated attempts. @raise IOError: If the mount point is still mounted after attempts are exhausted. """ if os.path.ismount(mountPoint): for attempt in range(0, attempts): logger.debug("Making attempt %d to unmount [%s].", attempt, mountPoint) command = resolveCommand(UMOUNT_COMMAND) result = executeCommand(command, [ mountPoint, ], returnOutput=False, ignoreStderr=True)[0] if result != 0: logger.error("Error [%d] unmounting [%s] on attempt %d.", result, mountPoint, attempt) elif os.path.ismount(mountPoint): logger.error("After attempt %d, [%s] is still mounted.", attempt, mountPoint) else: logger.debug("Successfully unmounted [%s] on attempt %d.", mountPoint, attempt) break # this will cause us to skip the loop else: clause if attempt+1 < attempts: # i.e. this isn't the last attempt if waitSeconds > 0: logger.info("Sleeping %d second(s) before next unmount attempt.", waitSeconds) time.sleep(waitSeconds) else: if os.path.ismount(mountPoint): raise IOError("Unable to unmount [%s] after %d attempts." % (mountPoint, attempts)) logger.info("Mount point [%s] seems to have finally gone away.", mountPoint) if os.path.isdir(mountPoint) and removeAfter: logger.debug("Removing mount point [%s].", mountPoint) os.rmdir(mountPoint) ########################### # deviceMounted() function ########################### def deviceMounted(devicePath): """ Indicates whether a specific filesystem device is currently mounted. We determine whether the device is mounted by looking through the system's C{mtab} file. This file shows every currently-mounted filesystem, ordered by device. 
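The attempt/wait loop in C{unmount()} above is a generic retry pattern; stripped of the mount-specific commands and checks, it looks like this (the function names here are illustrative, not part of the module):

```python
import time

def retry_until(operation, succeeded, attempts=1, wait_seconds=0):
    # Mirrors unmount()'s structure: run the operation, re-check the
    # condition, and sleep between attempts -- but never after the last one.
    for attempt in range(attempts):
        operation()
        if succeeded():
            return True
        if attempt + 1 < attempts and wait_seconds > 0:
            time.sleep(wait_seconds)
    return False

calls = []
retry_until(lambda: calls.append(1), lambda: len(calls) >= 2, attempts=3)
len(calls)   # 2 -- the loop stops as soon as the check passes
```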
We only do the check if the C{mtab} file exists and is readable. Otherwise, we assume that the device is not mounted. @note: This only works on platforms that have a concept of an mtab file to show mounted volumes, like UNIXes. It won't work on Windows. @param devicePath: Path of device to be checked @return: True if device is mounted, false otherwise. """ if os.path.exists(MTAB_FILE) and os.access(MTAB_FILE, os.R_OK): realPath = os.path.realpath(devicePath) lines = open(MTAB_FILE).readlines() for line in lines: (mountDevice, mountPoint, remainder) = line.split(None, 2) if mountDevice in [ devicePath, realPath, ]: logger.debug("Device [%s] is mounted at [%s].", devicePath, mountPoint) return True return False ######################## # encodePath() function ######################## def encodePath(path): r""" Safely encodes a filesystem path. Many Python filesystem functions, such as C{os.listdir}, behave differently if they are passed unicode arguments versus simple string arguments. For instance, C{os.listdir} generally returns unicode path names if it is passed a unicode argument, and string pathnames if it is passed a string argument. However, this behavior often isn't as consistent as we might like. As an example, C{os.listdir} "gives up" if it finds a filename that it can't properly encode given the current locale settings. This means that the returned list is a mixed set of unicode and simple string paths. This has consequences later, because other filesystem functions like C{os.path.join} will blow up if they are given one string path and one unicode path. On comp.lang.python, Martin v. Löwis explained the C{os.listdir} behavior like this:: The operating system (POSIX) does not have the inherent notion that file names are character strings. Instead, in POSIX, file names are primarily byte strings. There are some bytes which are interpreted as characters (e.g.
'\x2e', which is '.', or '\x2f', which is '/'), but apart from that, most OS layers think these are just bytes. Now, most *people* think that file names are character strings. To interpret a file name as a character string, you need to know what the encoding is to interpret the file names (which are byte strings) as character strings. There is, unfortunately, no operating system API to carry the notion of a file system encoding. By convention, the locale settings should be used to establish this encoding, in particular the LC_CTYPE facet of the locale. This is defined in the environment variables LC_CTYPE, LC_ALL, and LANG (searched in this order). If LANG is not set, the "C" locale is assumed, which uses ASCII as its file system encoding. In this locale, '\xe2\x99\xaa\xe2\x99\xac' is not a valid file name (at least it cannot be interpreted as characters, and hence not be converted to Unicode). Now, your Python script has requested that all file names *should* be returned as character (ie. Unicode) strings, but Python cannot comply, since there is no way to find out what this byte string means, in terms of characters. So we have three options: 1. Skip this string, only return the ones that can be converted to Unicode. Give the user the impression the file does not exist. 2. Return the string as a byte string 3. Refuse to listdir altogether, raising an exception (i.e. return nothing) Python has chosen alternative 2, allowing the application to implement 1 or 3 on top of that if it wants to (or come up with other strategies, such as user feedback). As a solution, he suggests that rather than passing unicode paths into the filesystem functions, that I should sensibly encode the path first. That is what this function accomplishes. Any function which takes a filesystem path as an argument should encode it first, before using it for any other purpose. I confess I still don't completely understand how this works. 
On a system with filesystem encoding "ISO-8859-1", a path C{u"\xe2\x99\xaa\xe2\x99\xac"} is converted into the string C{"\xe2\x99\xaa\xe2\x99\xac"}. However, on a system with a "utf-8" encoding, the result is a completely different string: C{"\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac"}. A quick test where I write to the first filename and open the second proves that the two strings represent the same file on disk, which is all I really care about. @note: As a special case, if C{path} is C{None}, then this function will return C{None}. @note: To provide several examples of encoding values, my Debian sarge box with an ext3 filesystem has Python filesystem encoding C{ISO-8859-1}. User Anarcat's Debian box with a xfs filesystem has filesystem encoding C{ANSI_X3.4-1968}. Both my iBook G4 running Mac OS X 10.4 and user Dag Rende's SuSE 9.3 box both have filesystem encoding C{UTF-8}. @note: Just because a filesystem has C{UTF-8} encoding doesn't mean that it will be able to handle all extended-character filenames. For instance, certain extended-character (but not UTF-8) filenames -- like the ones in the regression test tar file C{test/data/tree13.tar.gz} -- are not valid under Mac OS X, and it's not even possible to extract them from the tarfile on that platform. @param path: Path to encode @return: Path, as a string, encoded appropriately @raise ValueError: If the path cannot be encoded properly. """ if path is None: return path try: if isinstance(path, unicode): encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() path = path.encode(encoding) return path except UnicodeError: raise ValueError("Path could not be safely encoded as %s." % encoding) ######################## # nullDevice() function ######################## def nullDevice(): """ Attempts to portably return the null device on this system. The null device is something like C{/dev/null} on a UNIX system. The name varies on other platforms. 
""" return os.devnull ############################## # deriveDayOfWeek() function ############################## def deriveDayOfWeek(dayName): """ Converts English day name to numeric day of week as from C{time.localtime}. For instance, the day C{monday} would be converted to the number C{0}. @param dayName: Day of week to convert @type dayName: string, i.e. C{"monday"}, C{"tuesday"}, etc. @returns: Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible. """ if dayName.lower() == "monday": return 0 elif dayName.lower() == "tuesday": return 1 elif dayName.lower() == "wednesday": return 2 elif dayName.lower() == "thursday": return 3 elif dayName.lower() == "friday": return 4 elif dayName.lower() == "saturday": return 5 elif dayName.lower() == "sunday": return 6 else: return -1 # What else can we do?? Thrown an exception, I guess. ########################### # isStartOfWeek() function ########################### def isStartOfWeek(startingDay): """ Indicates whether "today" is the backup starting day per configuration. If the current day's English name matches the indicated starting day, then today is a starting day. @param startingDay: Configured starting day. @type startingDay: string, i.e. C{"monday"}, C{"tuesday"}, etc. @return: Boolean indicating whether today is the starting day. """ value = time.localtime().tm_wday == deriveDayOfWeek(startingDay) if value: logger.debug("Today is the start of the week.") else: logger.debug("Today is NOT the start of the week.") return value ################################# # buildNormalizedPath() function ################################# def buildNormalizedPath(path): """ Returns a "normalized" path based on a path name. A normalized path is a representation of a path that is also a valid file name. 
To make a valid file name out of a complete path, we have to convert or remove some characters that are significant to the filesystem -- in particular, the path separator and any leading C{'.'} character (which would cause the file to be hidden in a file listing). Note that this is a one-way transformation -- you can't safely derive the original path from the normalized path. To normalize a path, we begin by looking at the first character. If the first character is C{'/'} or C{'\\'}, it gets removed. If the first character is C{'.'}, it gets converted to C{'_'}. Then, we look through the rest of the path and convert all remaining C{'/'} or C{'\\'} characters to C{'-'}, and all remaining whitespace characters to C{'_'}. As a special case, a path consisting only of a single C{'/'} or C{'\\'} character will be converted to C{'-'}. @param path: Path to normalize @return: Normalized path as described above. @raise ValueError: If the path is None """ if path is None: raise ValueError("Cannot normalize path None.") elif len(path) == 0: return path elif path == "/" or path == "\\": return "-" else: normalized = path normalized = re.sub(r"^\/", "", normalized) # remove leading '/' normalized = re.sub(r"^\\", "", normalized) # remove leading '\' normalized = re.sub(r"^\.", "_", normalized) # convert leading '.' to '_' so file won't be hidden normalized = re.sub(r"\/", "-", normalized) # convert all '/' characters to '-' normalized = re.sub(r"\\", "-", normalized) # convert all '\' characters to '-' normalized = re.sub(r"\s", "_", normalized) # convert all whitespace to '_' return normalized ################################# # sanitizeEnvironment() function ################################# def sanitizeEnvironment(): """ Sanitizes the operating system environment. The operating system environment is contained in C{os.environ}. This method sanitizes the contents of that dictionary.
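The normalization rules above compress into a short chain of substitutions. This is a condensed restatement of the logic (merging the C{'/'} and C{'\\'} cases into character classes), not a drop-in replacement for the module's function:

```python
import re

def build_normalized_path(path):
    # One-way transform: strip one leading separator, turn a leading '.'
    # into '_' (so the result isn't a hidden file), then flatten remaining
    # separators to '-' and whitespace to '_'.
    if path is None:
        raise ValueError("Cannot normalize path None.")
    if path in ("/", "\\"):
        return "-"
    normalized = re.sub(r"^[/\\]", "", path)
    normalized = re.sub(r"^\.", "_", normalized)
    normalized = re.sub(r"[/\\]", "-", normalized)
    return re.sub(r"\s", "_", normalized)

build_normalized_path("/opt/backup/collect dir")   # 'opt-backup-collect_dir'
build_normalized_path(".hidden")                   # '_hidden'
build_normalized_path("/")                         # '-'
```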
Currently, all it does is reset the locale (removing C{$LC_*}) and set the default language (C{$LANG}) to L{DEFAULT_LANGUAGE}. This way, we can count on consistent localization regardless of what the end-user has configured. This is important for code that needs to parse program output. The C{os.environ} dictionary is modified in-place. If C{$LANG} is already set to the proper value, it is not re-set, so we can avoid the memory leaks that are documented to occur on BSD-based systems. @return: Copy of the sanitized environment. """ for var in LOCALE_VARS: if os.environ.has_key(var): del os.environ[var] if os.environ.has_key(LANG_VAR): if os.environ[LANG_VAR] != DEFAULT_LANGUAGE: # no need to reset if it exists (avoid leaks on BSD systems) os.environ[LANG_VAR] = DEFAULT_LANGUAGE return os.environ.copy() ############################# # dereferenceLink() function ############################# def dereferenceLink(path, absolute=True): """ Dereference a soft link, optionally normalizing it to an absolute path. @param path: Path of link to dereference @param absolute: Whether to normalize the result to an absolute path @return: Dereferenced path, or original path if original is not a link. """ if os.path.islink(path): result = os.readlink(path) if absolute and not os.path.isabs(result): result = os.path.abspath(os.path.join(os.path.dirname(path), result)) return result return path ######################### # checkUnique() function ######################### def checkUnique(prefix, values): """ Checks that all values are unique. The values list is checked for duplicate values. If there are duplicates, an exception is thrown. All duplicate values are listed in the exception.
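The sanitizing steps above amount to a handful of dictionary operations. This sketch works on a passed-in dict instead of C{os.environ} so it is side-effect free and testable; the C{LOCALE_VARS} contents here are an assumption (the real list is defined elsewhere in the module):

```python
# Assumed values for illustration; the module defines the real constants.
LOCALE_VARS = ["LC_ALL", "LC_CTYPE", "LC_COLLATE",
               "LC_MESSAGES", "LC_MONETARY", "LC_NUMERIC", "LC_TIME"]
LANG_VAR = "LANG"
DEFAULT_LANGUAGE = "C"

def sanitize_environment(environ):
    # Remove every $LC_* override, then pin $LANG -- but only rewrite it
    # when it is present and different, matching the original's guard
    # against the documented BSD memory leak.
    for var in LOCALE_VARS:
        environ.pop(var, None)
    if LANG_VAR in environ and environ[LANG_VAR] != DEFAULT_LANGUAGE:
        environ[LANG_VAR] = DEFAULT_LANGUAGE
    return dict(environ)

sanitize_environment({"LC_ALL": "de_DE.UTF-8", "LANG": "de_DE.UTF-8", "PATH": "/bin"})
# {'LANG': 'C', 'PATH': '/bin'}
```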
@param prefix: Prefix to use in the thrown exception @param values: List of values to check @raise ValueError: If there are duplicates in the list """ values.sort() duplicates = [] for i in range(1, len(values)): if values[i-1] == values[i]: duplicates.append(values[i]) if duplicates: raise ValueError("%s %s" % (prefix, duplicates)) ####################################### # parseCommaSeparatedString() function ####################################### def parseCommaSeparatedString(commaString): """ Parses a list of values out of a comma-separated string. The items in the list are split by comma, and then have whitespace stripped. As a special case, if C{commaString} is C{None}, then C{None} will be returned. @param commaString: List of values in comma-separated string format. @return: Values from commaString split into a list, or C{None}. """ if commaString is None: return None else: pass1 = commaString.split(",") pass2 = [] for item in pass1: item = item.strip() if len(item) > 0: pass2.append(item) return pass2 CedarBackup2-2.26.5/CedarBackup2/peer.py0000664000175000017500000015257712560016766021343 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
# # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides backup peer-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides backup peer-related objects and utility functions. @sort: LocalPeer, RemotePeer @var DEF_COLLECT_INDICATOR: Name of the default collect indicator file. @var DEF_STAGE_INDICATOR: Name of the default stage indicator file. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import shutil # Cedar Backup modules from CedarBackup2.filesystem import FilesystemList from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot from CedarBackup2.util import splitCommandLine, encodePath from CedarBackup2.config import VALID_FAILURE_MODES ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.peer") DEF_RCP_COMMAND = [ "/usr/bin/scp", "-B", "-q", "-C" ] DEF_RSH_COMMAND = [ "/usr/bin/ssh", ] DEF_CBACK_COMMAND = "/usr/bin/cback" DEF_COLLECT_INDICATOR = "cback.collect" DEF_STAGE_INDICATOR = "cback.stage" SU_COMMAND = [ "su" ] ######################################################################## # LocalPeer class definition ######################################################################## class 
LocalPeer(object): ###################### # Class documentation ###################### """ Backup peer representing a local peer in a backup pool. This is a class representing a local (non-network) peer in a backup pool. Local peers are backed up by simple filesystem copy operations. A local peer has associated with it a name (typically, but not necessarily, a hostname) and a collect directory. The public methods other than the constructor are part of a "backup peer" interface shared with the C{RemotePeer} class. @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, _copyLocalDir, _copyLocalFile, name, collectDir """ ############## # Constructor ############## def __init__(self, name, collectDir, ignoreFailureMode=None): """ Initializes a local backup peer. Note that the collect directory must be an absolute path, but does not have to exist when the object is instantiated. We do a lazy validation on this value since we could (potentially) be creating peer objects before an ongoing backup completed. @param name: Name of the backup peer @type name: String, typically a hostname @param collectDir: Path to the peer's collect directory @type collectDir: String representing an absolute local path on disk @param ignoreFailureMode: Ignore failure mode for this peer @type ignoreFailureMode: One of VALID_FAILURE_MODES @raise ValueError: If the name is empty. @raise ValueError: If collect directory is not an absolute path. """ self._name = None self._collectDir = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.ignoreFailureMode = ignoreFailureMode ############# # Properties ############# def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. 
""" if value is None or len(value) < 1: raise ValueError("Peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path and cannot be C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If a path cannot be encoded properly. """ if value is None or not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. """ return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer.") collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ################# # Public methods ################# def stagePeer(self, targetDir, ownership=None, permissions=None): """ Stages data from the peer into the indicated local target directory. The collect and target directories must both already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied. 
@note: The caller is responsible for checking that the indicator exists, if they care. This function only stages the files within the directory. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param targetDir: Target directory to write data into @type targetDir: String representing a directory on disk @param ownership: Owner and group that the staged files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If collect directory is not a directory or does not exist @raise ValueError: If target directory is not a directory, does not exist or is not absolute. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there were no files to stage (i.e. the directory was empty) @raise IOError: If there is an IO error copying a file. 
@raise OSError: If there is an OS error copying or changing permissions on a file """ targetDir = encodePath(targetDir) if not os.path.isabs(targetDir): logger.debug("Target directory [%s] not an absolute path.", targetDir) raise ValueError("Target directory must be an absolute path.") if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir) raise ValueError("Collect directory is not a directory or does not exist on disk.") if not os.path.exists(targetDir) or not os.path.isdir(targetDir): logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir) raise ValueError("Target directory is not a directory or does not exist on disk.") count = LocalPeer._copyLocalDir(self.collectDir, targetDir, ownership, permissions) if count == 0: raise IOError("Did not copy any files from local peer.") return count def checkCollectIndicator(self, collectIndicator=None): """ Checks the collect indicator in the peer's staging directory. When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. We're "stupid" here - if the collect directory doesn't exist, you'll naturally get back C{False}. If you need to, you can override the name of the collect indicator file by passing in a different name. @param collectIndicator: Name of the collect indicator file to check @type collectIndicator: String representing name of a file in the collect directory @return: Boolean true/false depending on whether the indicator exists. @raise ValueError: If a path cannot be encoded properly. 
""" collectIndicator = encodePath(collectIndicator) if collectIndicator is None: return os.path.exists(os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)) else: return os.path.exists(os.path.join(self.collectDir, collectIndicator)) def writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None): """ Writes the stage indicator in the peer's staging directory. When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete. If you need to, you can override the name of the stage indicator file by passing in a different name. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param stageIndicator: Name of the indicator file to write @type stageIndicator: String representing name of a file in the collect directory @param ownership: Owner and group that the indicator file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the indicator file should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @raise ValueError: If collect directory is not a directory or does not exist @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error creating the file. 
@raise OSError: If there is an OS error creating or changing permissions on the file """ stageIndicator = encodePath(stageIndicator) if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir) raise ValueError("Collect directory is not a directory or does not exist on disk.") if stageIndicator is None: fileName = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) else: fileName = os.path.join(self.collectDir, stageIndicator) LocalPeer._copyLocalFile(None, fileName, ownership, permissions) # None for sourceFile results in an empty target ################## # Private methods ################## @staticmethod def _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None): """ Copies files from the source directory to the target directory. This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. The source and target directories are allowed to be soft links to a directory, but besides that soft links are ignored. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param sourceDir: Source directory @type sourceDir: String representing a directory on disk @param targetDir: Target directory @type targetDir: String representing a directory on disk @param ownership: Owner and group that the copied files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If source or target is not a directory or does not exist. @raise ValueError: If a path cannot be encoded properly. 
@raise IOError: If there is an IO error copying the files. @raise OSError: If there is an OS error copying or changing permissions on a file """ filesCopied = 0 sourceDir = encodePath(sourceDir) targetDir = encodePath(targetDir) for fileName in os.listdir(sourceDir): sourceFile = os.path.join(sourceDir, fileName) targetFile = os.path.join(targetDir, fileName) LocalPeer._copyLocalFile(sourceFile, targetFile, ownership, permissions) filesCopied += 1 return filesCopied @staticmethod def _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True): """ Copies a source file to a target file. If the source file is C{None} then the target file will be created or overwritten as an empty file. If the target file is C{None}, this method is a no-op. Attempting to copy a soft link or a directory will result in an exception. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: We will not overwrite a target file that exists when this method is invoked with C{overwrite} set to false. In that case, if the target already exists, we'll raise an exception. @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param ownership: Owner and group that the copied file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise ValueError: If the passed-in source file is not a regular file. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If the target file already exists. 
@raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error copying or changing permissions on a file """ targetFile = encodePath(targetFile) sourceFile = encodePath(sourceFile) if targetFile is None: return if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if sourceFile is None: open(targetFile, "w").write("") else: if os.path.isfile(sourceFile) and not os.path.islink(sourceFile): shutil.copy(sourceFile, targetFile) else: logger.debug("Source [%s] is not a regular file.", sourceFile) raise ValueError("Source is not a regular file.") if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) ######################################################################## # RemotePeer class definition ######################################################################## class RemotePeer(object): ###################### # Class documentation ###################### """ Backup peer representing a remote peer in a backup pool. This is a class representing a remote (networked) peer in a backup pool. Remote peers are backed up using an rcp-compatible copy command. A remote peer has associated with it a name (which must be a valid hostname), a collect directory, a working directory and a copy method (an rcp-compatible command). You can also set an optional local user value. This username will be used as the local user for any remote copies that are required. It can only be used if the root user is executing the backup. The root user will C{su} to the local user and execute the remote copies as that user. The copy method is associated with the peer and not with the actual request to copy, because we can envision that each remote host might have a different connect method. The public methods other than the constructor are part of a "backup peer" interface shared with the C{LocalPeer} class. 
@sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, executeRemoteCommand, executeManagedAction, _getDirContents, _copyRemoteDir, _copyRemoteFile, _pushLocalFile, name, collectDir, remoteUser, rcpCommand, rshCommand, cbackCommand """ ############## # Constructor ############## def __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None): """ Initializes a remote backup peer. @note: If provided, each command will eventually be parsed into a list of strings suitable for passing to C{util.executeCommand} in order to avoid security holes related to shell interpolation. This parsing will be done by the L{util.splitCommandLine} function. See the documentation for that function for some important notes about its limitations. @param name: Name of the backup peer @type name: String, must be a valid DNS hostname @param collectDir: Path to the peer's collect directory @type collectDir: String representing an absolute path on the remote peer @param workingDir: Working directory that can be used to create temporary files, etc. @type workingDir: String representing an absolute path on the current host. 
@param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via remote shell to the peer @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rshCommand: An rsh-compatible command to use for remote shells to the peer @type rshCommand: String representing a system command including required arguments @param cbackCommand: A cback-compatible command to use for executing managed actions @type cbackCommand: String representing a system command including required arguments @param ignoreFailureMode: Ignore failure mode for this peer @type ignoreFailureMode: One of VALID_FAILURE_MODES @raise ValueError: If collect directory is not an absolute path """ self._name = None self._collectDir = None self._workingDir = None self._remoteUser = None self._localUser = None self._rcpCommand = None self._rcpCommandList = None self._rshCommand = None self._rshCommandList = None self._cbackCommand = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.workingDir = workingDir self.remoteUser = remoteUser self.localUser = localUser self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.ignoreFailureMode = ignoreFailureMode ############# # Properties ############# def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. 
""" return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path and cannot be C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setWorkingDir(self, value): """ Property target used to set the working directory. The value must be an absolute path and cannot be C{None}. @raise ValueError: If the value is C{None} or is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Working directory must be an absolute path.") self._workingDir = encodePath(value) def _getWorkingDir(self): """ Property target used to get the working directory. """ return self._workingDir def _setRemoteUser(self, value): """ Property target used to set the remote user. The value must be a non-empty string and cannot be C{None}. @raise ValueError: If the value is an empty string or C{None}. """ if value is None or len(value) < 1: raise ValueError("Peer remote user must be a non-empty string.") self._remoteUser = value def _getRemoteUser(self): """ Property target used to get the remote user. """ return self._remoteUser def _setLocalUser(self, value): """ Property target used to set the local user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. 
""" if value is not None: if len(value) < 1: raise ValueError("Peer local user must be a non-empty string.") self._localUser = value def _getLocalUser(self): """ Property target used to get the local user. """ return self._localUser def _setRcpCommand(self, value): """ Property target to set the rcp command. The value must be a non-empty string or C{None}. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to L{util.executeCommand} via L{util.splitCommandLine}. However, all the caller will ever see via the property is the actual value they set (which includes seeing C{None}, even if we translate that internally to C{DEF_RCP_COMMAND}). Internally, we should always use C{self._rcpCommandList} if we want the actual command list. @raise ValueError: If the value is an empty string. """ if value is None: self._rcpCommand = None self._rcpCommandList = DEF_RCP_COMMAND else: if len(value) >= 1: self._rcpCommand = value self._rcpCommandList = splitCommandLine(self._rcpCommand) else: raise ValueError("The rcp command must be a non-empty string.") def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target to set the rsh command. The value must be a non-empty string or C{None}. Its value is stored in the two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to L{util.executeCommand} via L{util.splitCommandLine}. However, all the caller will ever see via the property is the actual value they set (which includes seeing C{None}, even if we translate that internally to C{DEF_RSH_COMMAND}). Internally, we should always use C{self._rshCommandList} if we want the actual command list. @raise ValueError: If the value is an empty string. 
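@note: The raw/parsed dual storage described above can be illustrated with the standard library's C{shlex.split}, used here as a rough stand-in for L{util.splitCommandLine} (an assumption for illustration; the real function has its own documented limitations):

```python
import shlex

# Raw command string, exactly as a client might configure it.
rawCommand = "/usr/bin/ssh -l backup -o ConnectTimeout=10"

# Parsed list form, suitable for an exec-style call such as
# util.executeCommand, avoiding shell interpolation of the raw string.
commandList = shlex.split(rawCommand)
assert commandList == ["/usr/bin/ssh", "-l", "backup", "-o", "ConnectTimeout=10"]
```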
""" if value is None: self._rshCommand = None self._rshCommandList = DEF_RSH_COMMAND else: if len(value) >= 1: self._rshCommand = value self._rshCommandList = splitCommandLine(self._rshCommand) else: raise ValueError("The rsh command must be a non-empty string.") def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target to set the cback command. The value must be a non-empty string or C{None}. Unlike the other commands, this value is only stored in the "raw" form provided by the client. @raise ValueError: If the value is an empty string. """ if value is None: self._cbackCommand = None else: if len(value) >= 1: self._cbackCommand = value else: raise ValueError("The cback command must be a non-empty string.") def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).") collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute path on the remote peer).") workingDir = property(_getWorkingDir, _setWorkingDir, None, "Path to the peer's working directory (an absolute local path).") remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of the Cedar Backup user on the remote peer.") localUser = property(_getLocalUser, _setLocalUser, None, "Name of the Cedar Backup user on the current host.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "An rcp-compatible copy command to use for copying files.") rshCommand = property(_getRshCommand, _setRshCommand, None, "An rsh-compatible command to use for remote shells to the peer.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "A cback-compatible command to use for executing managed actions.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ################# # Public methods ################# def stagePeer(self, targetDir, ownership=None, permissions=None): """ Stages data from the peer into the indicated local target directory. The target directory must already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied. @note: The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: Unlike the local peer version of this method, an I/O error might or might not be raised if the directory is empty. 
Since we're using a remote copy method, we just don't have the fine-grained control over our exceptions that's available when we can look directly at the filesystem, and we can't control whether the remote copy method thinks an empty directory is an error. @param targetDir: Target directory to write data into @type targetDir: String representing a directory on disk @param ownership: Owner and group that the staged files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If target directory is not a directory, does not exist or is not absolute. @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there were no files to stage (i.e. the directory was empty) @raise IOError: If there is an IO error copying a file. @raise OSError: If there is an OS error copying or changing permissions on a file """ targetDir = encodePath(targetDir) if not os.path.isabs(targetDir): logger.debug("Target directory [%s] not an absolute path.", targetDir) raise ValueError("Target directory must be an absolute path.") if not os.path.exists(targetDir) or not os.path.isdir(targetDir): logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir) raise ValueError("Target directory is not a directory or does not exist on disk.") count = RemotePeer._copyRemoteDir(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, self.collectDir, targetDir, ownership, permissions) if count == 0: raise IOError("Did not copy any files from remote peer.") return count def checkCollectIndicator(self, collectIndicator=None): """ Checks the collect indicator in the peer's staging directory. 
When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. If the remote copy command fails, we return C{False} as if the file weren't there. If you need to, you can override the name of the collect indicator file by passing in a different name. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. Because of this, the implementation of this method is rather convoluted. @param collectIndicator: Name of the collect indicator file to check @type collectIndicator: String representing name of a file in the collect directory @return: Boolean true/false depending on whether the indicator exists. @raise ValueError: If a path cannot be encoded properly. """ try: if collectIndicator is None: sourceFile = os.path.join(self.collectDir, DEF_COLLECT_INDICATOR) targetFile = os.path.join(self.workingDir, DEF_COLLECT_INDICATOR) else: collectIndicator = encodePath(collectIndicator) sourceFile = os.path.join(self.collectDir, collectIndicator) targetFile = os.path.join(self.workingDir, collectIndicator) logger.debug("Fetch remote [%s] into [%s].", sourceFile, targetFile) if os.path.exists(targetFile): try: os.remove(targetFile) except: raise Exception("Error: collect indicator [%s] already exists!" 
% targetFile) try: RemotePeer._copyRemoteFile(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, sourceFile, targetFile, overwrite=False) if os.path.exists(targetFile): return True else: return False except Exception, e: logger.info("Failed looking for collect indicator: %s", e) return False finally: if os.path.exists(targetFile): try: os.remove(targetFile) except: pass def writeStageIndicator(self, stageIndicator=None): """ Writes the stage indicator in the peer's staging directory. When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete. If you need to, you can override the name of the stage indicator file by passing in a different name. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param stageIndicator: Name of the indicator file to write @type stageIndicator: String representing name of a file in the collect directory @raise ValueError: If a path cannot be encoded properly. @raise IOError: If there is an IO error creating the file. 
@raise OSError: If there is an OS error creating or changing permissions on the file """ stageIndicator = encodePath(stageIndicator) if stageIndicator is None: sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) targetFile = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) else: sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) targetFile = os.path.join(self.collectDir, stageIndicator) try: if not os.path.exists(sourceFile): open(sourceFile, "w").write("") RemotePeer._pushLocalFile(self.remoteUser, self.localUser, self.name, self._rcpCommand, self._rcpCommandList, sourceFile, targetFile) finally: if os.path.exists(sourceFile): try: os.remove(sourceFile) except: pass def executeRemoteCommand(self, command): """ Executes a command on the peer via remote shell. @param command: Command to execute @type command: String command-line suitable for use with rsh. @raise IOError: If there is an error executing the command on the remote peer. """ RemotePeer._executeRemoteCommand(self.remoteUser, self.localUser, self.name, self._rshCommand, self._rshCommandList, command) def executeManagedAction(self, action, fullBackup): """ Executes a managed action on this peer. @param action: Name of the action to execute. @param fullBackup: Whether a full backup should be executed. @raise IOError: If there is an error executing the action on the remote peer. """ try: command = RemotePeer._buildCbackCommand(self.cbackCommand, action, fullBackup) self.executeRemoteCommand(command) except IOError, e: logger.info(e) raise IOError("Failed to execute action [%s] on managed client [%s]." % (action, self.name)) ################## # Private methods ################## @staticmethod def _getDirContents(path): """ Returns the contents of a directory in terms of a Set. The directory's contents are read as a L{FilesystemList} containing only files, and then the list is converted into a set object for later use. 
@param path: Directory path to get contents for @type path: String representing a path on disk @return: Set of files in the directory @raise ValueError: If path is not a directory or does not exist. """ contents = FilesystemList() contents.excludeDirs = True contents.excludeLinks = True contents.addDirContents(path) try: return set(contents) except: import sets return sets.Set(contents) @staticmethod def _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None): """ Copies files from the source directory to the target directory. This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. Behavior when copying soft links from the collect directory is dependent on the behavior of the specified rcp command. @note: The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it. @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @note: We don't have a good way of knowing exactly what files we copied down from the remote peer, unless we want to parse the output of the rcp command (ugh). We could change permissions on everything in the target directory, but that's kind of ugly too. Instead, we use Python's set functionality to figure out what files were added while we executed the rcp command. This isn't perfect - for instance, it's not correct if someone else is messing with the directory at the same time we're doing the remote copy - but it's about as good as we're going to get. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. 
As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing C{IOError} if we don't copy any files from the remote host. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceDir: Source directory @type sourceDir: String representing a directory on disk @param targetDir: Target directory @type targetDir: String representing a directory on disk @param ownership: Owner and group that the copied files should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @return: Number of files copied from the source directory to the target directory. @raise ValueError: If source or target is not a directory or does not exist. @raise IOError: If there is an IO error copying the files. 
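@note: The before/after set arithmetic described in the notes above can be sketched independently of any rcp command. The file names below are purely illustrative:

```python
import os
import tempfile

# A staging directory that already contains an unrelated file.
targetDir = tempfile.mkdtemp()
open(os.path.join(targetDir, "existing.txt"), "w").close()

# Snapshot the directory before the (opaque) copy command runs...
beforeSet = set(os.listdir(targetDir))

# ...simulate the copy command dropping new files into place...
for name in ["daily.tar.gz", "weekly.tar.gz"]:
    open(os.path.join(targetDir, name), "w").close()

# ...and take the difference as our best guess at what was copied.
afterSet = set(os.listdir(targetDir))
differenceSet = afterSet.difference(beforeSet)
assert sorted(differenceSet) == ["daily.tar.gz", "weekly.tar.gz"]
```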
""" beforeSet = RemotePeer._getDirContents(targetDir) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = "%s %s@%s:%s/* %s" % (rcpCommand, remoteUser, remoteHost, sourceDir, targetDir) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying files from remote host as local user [%s]." % (result, localUser)) else: copySource = "%s@%s:%s/*" % (remoteUser, remoteHost, sourceDir) command = resolveCommand(rcpCommandList) result = executeCommand(command, [copySource, targetDir])[0] if result != 0: raise IOError("Error (%d) copying files from remote host." % result) afterSet = RemotePeer._getDirContents(targetDir) if len(afterSet) == 0: raise IOError("Did not copy any files from remote peer.") differenceSet = afterSet.difference(beforeSet) # files we added as part of copy if len(differenceSet) == 0: raise IOError("Apparently did not copy any new files from remote peer.") for targetFile in differenceSet: if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) return len(differenceSet) @staticmethod def _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True): """ Copies a remote source file to a target file. @note: Internally, we have to go through and escape any spaces in the source path with double-backslash, otherwise things get screwed up. It doesn't seem to be required in the target path. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH). @note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. 
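@note: The space-escaping mentioned above amounts to a simple replacement on the remote source path, shown here in isolation (as noted, its portability beyond OpenSSH is unverified):

```python
# Escape each space with a backslash so the remote shell does not split
# the path into multiple arguments.
sourceFile = "/backup/collect/my file.tar.gz"
escaped = sourceFile.replace(" ", "\\ ")
assert escaped == "/backup/collect/my\\ file.tar.gz"
```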
@note: We will not overwrite a target file that exists when this method is invoked with C{overwrite} set to false. In that case, if the target already exists, we'll raise an exception. @note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the C{scp} command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing C{IOError} if the target file does not exist when we're done. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param ownership: Owner and group that the copied file should have @type ownership: Tuple of numeric ids C{(uid, gid)} @param permissions: Permissions that the staged files should have @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise IOError: If the target file already exists. 
@raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error changing permissions on the file """ if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = "%s %s@%s:%s %s" % (rcpCommand, remoteUser, remoteHost, sourceFile.replace(" ", "\\ "), targetFile) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying [%s] from remote host as local user [%s]." % (result, sourceFile, localUser)) else: copySource = "%s@%s:%s" % (remoteUser, remoteHost, sourceFile.replace(" ", "\\ ")) command = resolveCommand(rcpCommandList) result = executeCommand(command, [copySource, targetFile])[0] if result != 0: raise IOError("Error (%d) copying [%s] from remote host." % (result, sourceFile)) if not os.path.exists(targetFile): raise IOError("Apparently unable to copy file from remote host.") if ownership is not None: os.chown(targetFile, ownership[0], ownership[1]) if permissions is not None: os.chmod(targetFile, permissions) @staticmethod def _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True): """ Copies a local source file to a remote host. @note: We will not overwrite a target file that exists when this method is invoked with C{overwrite} set to false. In that case, if the target already exists, we'll raise an exception. @note: Internally, we have to go through and escape any spaces in the source and target paths with double-backslash, otherwise things get screwed up. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH). 
@note: If you have user/group as strings, call the L{util.getUidGid} function to get the associated uid/gid as an ownership tuple. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid via the copy command @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer @type rcpCommand: String representing a system command including required arguments @param rcpCommandList: An rcp-compatible copy command to use for copying files @type rcpCommandList: Command as a list to be passed to L{util.executeCommand} @param sourceFile: Source file to copy @type sourceFile: String representing a file on disk, as an absolute path @param targetFile: Target file to create @type targetFile: String representing a file on disk, as an absolute path @param overwrite: Indicates whether it's OK to overwrite the target file. @type overwrite: Boolean true/false. @raise IOError: If there is an IO error copying the file @raise OSError: If there is an OS error changing permissions on the file """ if not overwrite: if os.path.exists(targetFile): raise IOError("Target file [%s] already exists." % targetFile) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote copy as another user.") except AttributeError: pass actualCommand = '%s "%s" "%s@%s:%s"' % (rcpCommand, sourceFile, remoteUser, remoteHost, targetFile) command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Error (%d) copying [%s] to remote host as local user [%s]." 
% (result, sourceFile, localUser)) else: copyTarget = "%s@%s:%s" % (remoteUser, remoteHost, targetFile.replace(" ", "\\ ")) command = resolveCommand(rcpCommandList) result = executeCommand(command, [sourceFile.replace(" ", "\\ "), copyTarget])[0] if result != 0: raise IOError("Error (%d) copying [%s] to remote host." % (result, sourceFile)) @staticmethod def _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand): """ Executes a command on the peer via remote shell. @param remoteUser: Name of the Cedar Backup user on the remote peer @type remoteUser: String representing a username, valid on the remote host @param localUser: Name of the Cedar Backup user on the current host @type localUser: String representing a username, valid on the current host @param remoteHost: Hostname of the remote peer @type remoteHost: String representing a hostname, accessible via the copy command @param rshCommand: An rsh-compatible copy command to use for remote shells to the peer @type rshCommand: String representing a system command including required arguments @param rshCommandList: An rsh-compatible copy command to use for remote shells to the peer @type rshCommandList: Command as a list to be passed to L{util.executeCommand} @param remoteCommand: The command to be executed on the remote host @type remoteCommand: String command-line, with no special shell characters ($, <, etc.) 
@raise IOError: If there is an error executing the remote command """ actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand) if localUser is not None: try: if not isRunningAsRoot(): raise IOError("Only root can remote shell as another user.") except AttributeError: pass command = resolveCommand(SU_COMMAND) result = executeCommand(command, [localUser, "-c", actualCommand])[0] if result != 0: raise IOError("Command failed [su -c %s \"%s\"]" % (localUser, actualCommand)) else: command = resolveCommand(rshCommandList) result = executeCommand(command, ["%s@%s" % (remoteUser, remoteHost), "%s" % remoteCommand])[0] if result != 0: raise IOError("Command failed [%s]" % (actualCommand)) @staticmethod def _buildCbackCommand(cbackCommand, action, fullBackup): """ Builds a Cedar Backup command line for the named action. @note: If the cback command is None, then DEF_CBACK_COMMAND is used. @param cbackCommand: cback command to execute, including required options @param action: Name of the action to execute. @param fullBackup: Whether a full backup should be executed. @return: String suitable for passing to L{_executeRemoteCommand} as remoteCommand. @raise ValueError: If action is None. """ if action is None: raise ValueError("Action cannot be None.") if cbackCommand is None: cbackCommand = DEF_CBACK_COMMAND if fullBackup: return "%s --full %s" % (cbackCommand, action) else: return "%s %s" % (cbackCommand, action) CedarBackup2-2.26.5/CedarBackup2/xmlutil.py0000664000175000017500000006410612560016766022074 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2006,2010 Kenneth J. Pronovici. # All rights reserved. 
# # Portions Copyright (c) 2000 Fourthought Inc, USA. # All Rights Reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides general XML-related functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides general XML-related functionality. What I'm trying to do here is abstract much of the functionality that directly accesses the DOM tree. This is not so much to "protect" the other code from the DOM, but to standardize the way it's used. It will also help extension authors write code that easily looks more like the rest of Cedar Backup. @sort: createInputDom, createOutputDom, serializeDom, isElement, readChildren, readFirstChild, readStringList, readString, readInteger, readBoolean, addContainerNode, addStringNode, addIntegerNode, addBooleanNode, TRUE_BOOLEAN_VALUES, FALSE_BOOLEAN_VALUES, VALID_BOOLEAN_VALUES @var TRUE_BOOLEAN_VALUES: List of boolean values in XML representing C{True}. @var FALSE_BOOLEAN_VALUES: List of boolean values in XML representing C{False}. @var VALID_BOOLEAN_VALUES: List of valid boolean values in XML. @author: Kenneth J. 
Pronovici """ # pylint: disable=C0111,C0103,W0511,W0104,W0106 ######################################################################## # Imported modules ######################################################################## # System modules import sys import re import logging import codecs from types import UnicodeType from StringIO import StringIO # XML-related modules from xml.parsers.expat import ExpatError from xml.dom.minidom import Node from xml.dom.minidom import getDOMImplementation from xml.dom.minidom import parseString ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.xml") TRUE_BOOLEAN_VALUES = [ "Y", "y", ] FALSE_BOOLEAN_VALUES = [ "N", "n", ] VALID_BOOLEAN_VALUES = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES ######################################################################## # Functions for creating and parsing DOM trees ######################################################################## def createInputDom(xmlData, name="cb_config"): """ Creates a DOM tree based on reading an XML string. @param name: Assumed base name of the document (root node name). @return: Tuple (xmlDom, parentNode) for the parsed document @raise ValueError: If the document can't be parsed. """ try: xmlDom = parseString(xmlData) parentNode = readFirstChild(xmlDom, name) return (xmlDom, parentNode) except (IOError, ExpatError), e: raise ValueError("Unable to parse XML document: %s" % e) def createOutputDom(name="cb_config"): """ Creates a DOM tree used for writing an XML document. @param name: Base name of the document (root node name). 
@return: Tuple (xmlDom, parentNode) for the new document """ impl = getDOMImplementation() xmlDom = impl.createDocument(None, name, None) return (xmlDom, xmlDom.documentElement) ######################################################################## # Functions for reading values out of XML documents ######################################################################## def isElement(node): """ Returns True or False depending on whether the XML node is an element node. """ return node.nodeType == Node.ELEMENT_NODE def readChildren(parent, name): """ Returns a list of nodes with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. Underneath, we use the Python C{getElementsByTagName} method, which is pretty cool, but which (surprisingly?) returns a list of all children with a given name below the parent, at any level. We just prune that list to include only children whose C{parentNode} matches the passed-in parent. @param parent: Parent node to search beneath. @param name: Name of nodes to search for. @return: List of child nodes with correct parent, or an empty list if no matching nodes are found. """ lst = [] if parent is not None: result = parent.getElementsByTagName(name) for entry in result: if entry.parentNode is parent: lst.append(entry) return lst def readFirstChild(parent, name): """ Returns the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: First properly-named child of parent, or C{None} if no matching nodes are found. 
""" result = readChildren(parent, name) if result is None or result == []: return None return result[0] def readStringList(parent, name): """ Returns a list of the string contents associated with nodes with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. First, we find all of the nodes using L{readChildren}, and then we retrieve the "string contents" of each of those nodes. The returned list has one entry per matching node. We assume that string contents of a given node belong to the first C{TEXT_NODE} child of that node. Nodes which have no C{TEXT_NODE} children are not represented in the returned list. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: List of strings as described above, or C{None} if no matching nodes are found. """ lst = [] result = readChildren(parent, name) for entry in result: if entry.hasChildNodes(): for child in entry.childNodes: if child.nodeType == Node.TEXT_NODE: lst.append(child.nodeValue) break if lst == []: lst = None return lst def readString(parent, name): """ Returns string contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. We assume that string contents of a given node belong to the first C{TEXT_NODE} child of that node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: String contents of node or C{None} if no matching nodes are found. """ result = readStringList(parent, name) if result is None: return None return result[0] def readInteger(parent, name): """ Returns integer contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. 
@param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Integer contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to an integer. """ result = readString(parent, name) if result is None: return None else: return int(result) def readLong(parent, name): """ Returns long integer contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Long integer contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to an integer. """ result = readString(parent, name) if result is None: return None else: return long(result) def readFloat(parent, name): """ Returns float contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Float contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to a float value. """ result = readString(parent, name) if result is None: return None else: return float(result) def readBoolean(parent, name): """ Returns boolean contents of the first child with a given name immediately beneath the parent. By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. The string value of the node must be one of the values in L{VALID_BOOLEAN_VALUES}. @param parent: Parent node to search beneath. @param name: Name of node to search for. 
@return: Boolean contents of node or C{None} if no matching nodes are found. @raise ValueError: If the string at the location can't be converted to a boolean. """ result = readString(parent, name) if result is None: return None else: if result in TRUE_BOOLEAN_VALUES: return True elif result in FALSE_BOOLEAN_VALUES: return False else: raise ValueError("Boolean values must be one of %s." % VALID_BOOLEAN_VALUES) ######################################################################## # Functions for writing values into XML documents ######################################################################## def addContainerNode(xmlDom, parentNode, nodeName): """ Adds a container node as the next child of a parent node. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @return: Reference to the newly-created node. """ containerNode = xmlDom.createElement(nodeName) parentNode.appendChild(containerNode) return containerNode def addStringNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a string. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ containerNode = addContainerNode(xmlDom, parentNode, nodeName) if nodeValue is not None: textNode = xmlDom.createTextNode(nodeValue) containerNode.appendChild(textNode) return containerNode def addIntegerNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain an integer. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). 
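The read and add helpers in this module boil down to a small amount of minidom plumbing. As a self-contained sketch of the round trip, using minidom directly rather than the helpers above (the C{collect_mode} element name is invented for illustration):

```python
from xml.dom.minidom import getDOMImplementation

# Build a tiny document the way addContainerNode/addStringNode do.
impl = getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
parent = xmlDom.documentElement
node = xmlDom.createElement("collect_mode")        # container node; name is invented
node.appendChild(xmlDom.createTextNode("daily"))   # text child carries the string value
parent.appendChild(node)

# Read it back the way readChildren/readString do: getElementsByTagName finds
# descendants at any depth, so prune to direct children of the parent.
matches = [n for n in parent.getElementsByTagName("collect_mode") if n.parentNode is parent]
value = matches[0].firstChild.nodeValue
# value is: daily
```

The pruning step is the important part: without the C{parentNode} check, a same-named element nested deeper in the tree would also be returned.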
The integer will be converted to a string using "%d". The result will be added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long def addLongNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a long integer. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). The integer will be converted to a string using "%d". The result will be added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue): """ Adds a text node as the next child of a parent, to contain a boolean. If the C{nodeValue} is None, then the node will be created, but will be empty (i.e. will contain no text node child). Boolean C{True}, or anything else interpreted as C{True} by Python, will be converted to a string "Y". Anything else will be converted to a string "N". The result is added to the document via L{addStringNode}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. 
@param nodeName: Name of the new container node. @param nodeValue: The value to put into the node. @return: Reference to the newly-created node. """ if nodeValue is None: return addStringNode(xmlDom, parentNode, nodeName, None) else: if nodeValue: return addStringNode(xmlDom, parentNode, nodeName, "Y") else: return addStringNode(xmlDom, parentNode, nodeName, "N") ######################################################################## # Functions for serializing DOM trees ######################################################################## def serializeDom(xmlDom, indent=3): """ Serializes a DOM tree and returns the result in a string. @param xmlDom: XML DOM tree to serialize @param indent: Number of spaces to indent, as an integer @return: String form of DOM tree, pretty-printed. """ xmlBuffer = StringIO() serializer = Serializer(xmlBuffer, "UTF-8", indent=indent) serializer.serialize(xmlDom) xmlData = xmlBuffer.getvalue() xmlBuffer.close() return xmlData class Serializer(object): """ XML serializer class. This is a customized serializer that I hacked together based on what I found in the PyXML distribution. Basically, around release 2.7.0, the only reason I still had around a dependency on PyXML was for the PrettyPrint functionality, and that seemed pointless. So, I stripped the PrettyPrint code out of PyXML and hacked bits of it off until it did just what I needed and no more. This code started out being called PrintVisitor, but I decided it makes more sense just calling it a serializer. I've made nearly all of the methods private, and I've added a new high-level serialize() method rather than having clients call C{visit()}. Anyway, as a consequence of my hacking with it, this can't quite be called a complete XML serializer any more. I ripped out support for HTML and XHTML, and there is also no longer any support for namespaces (which I took out because this dragged along a lot of extra code, and Cedar Backup doesn't use namespaces). 
However, everything else should pretty much work as expected. @copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved. """ def __init__(self, stream=sys.stdout, encoding="UTF-8", indent=3): """ Initialize a serializer. @param stream: Stream to write output to. @param encoding: Output encoding. @param indent: Number of spaces to indent, as an integer """ self.stream = stream self.encoding = encoding self._indent = indent * " " self._depth = 0 self._inText = 0 def serialize(self, xmlDom): """ Serialize the passed-in XML document. @param xmlDom: XML DOM tree to serialize @raise ValueError: If there's an unknown node type in the document. """ self._visit(xmlDom) self.stream.write("\n") def _write(self, text): obj = _encodeText(text, self.encoding) self.stream.write(obj) return def _tryIndent(self): if not self._inText and self._indent: self._write('\n' + self._indent*self._depth) return def _visit(self, node): """ @raise ValueError: If there's an unknown node type in the document.
"""
      if node.nodeType == Node.ELEMENT_NODE:
         return self._visitElement(node)
      elif node.nodeType == Node.ATTRIBUTE_NODE:
         return self._visitAttr(node)
      elif node.nodeType == Node.TEXT_NODE:
         return self._visitText(node)
      elif node.nodeType == Node.CDATA_SECTION_NODE:
         return self._visitCDATASection(node)
      elif node.nodeType == Node.ENTITY_REFERENCE_NODE:
         return self._visitEntityReference(node)
      elif node.nodeType == Node.ENTITY_NODE:
         return self._visitEntity(node)
      elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE:
         return self._visitProcessingInstruction(node)
      elif node.nodeType == Node.COMMENT_NODE:
         return self._visitComment(node)
      elif node.nodeType == Node.DOCUMENT_NODE:
         return self._visitDocument(node)
      elif node.nodeType == Node.DOCUMENT_TYPE_NODE:
         return self._visitDocumentType(node)
      elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE:
         return self._visitDocumentFragment(node)
      elif node.nodeType == Node.NOTATION_NODE:
         return self._visitNotation(node)
      # It has a node type, but we don't know how to handle it
      raise ValueError("Unknown node type: %s" % repr(node))

   def _visitNodeList(self, node, exclude=None):
      for curr in node:
         curr is not exclude and self._visit(curr)
      return

   def _visitNamedNodeMap(self, node):
      for item in node.values():
         self._visit(item)
      return

   def _visitAttr(self, node):
      self._write(' ' + node.name)
      value = node.value
      text = _translateCDATA(value, self.encoding)
      text, delimiter = _translateCDATAAttr(text)
      self.stream.write("=%s%s%s" % (delimiter, text, delimiter))
      return

   def _visitProlog(self):
      self._write("<?xml version='1.0' encoding='%s'?>" % (self.encoding or 'utf-8'))
      self._inText = 0
      return

   def _visitDocument(self, node):
      self._visitProlog()
      node.doctype and self._visitDocumentType(node.doctype)
      self._visitNodeList(node.childNodes, exclude=node.doctype)
      return

   def _visitDocumentFragment(self, node):
      self._visitNodeList(node.childNodes)
      return

   def _visitElement(self, node):
      self._tryIndent()
      self._write('<%s' % node.tagName)
      for attr in node.attributes.values():
         self._visitAttr(attr)
      if len(node.childNodes):
         self._write('>')
         self._depth = self._depth + 1
         self._visitNodeList(node.childNodes)
         self._depth = self._depth - 1
         not (self._inText) and self._tryIndent()
         self._write('</%s>' % node.tagName)
      else:
         self._write('/>')
      self._inText = 0
      return

   def _visitText(self, node):
      text = node.data
      if self._indent:
         text = text.strip() and text  # when indenting, skip text nodes that are only whitespace
      if text:
         text = _translateCDATA(text, self.encoding)
         self.stream.write(text)
         self._inText = 1
      return

   def _visitDocumentType(self, doctype):
      if not doctype.systemId and not doctype.publicId:
         return
      self._tryIndent()
      self._write('<!DOCTYPE %s' % doctype.name)
      if doctype.systemId and '"' in doctype.systemId:
         system = "'%s'" % doctype.systemId
      else:
         system = '"%s"' % doctype.systemId
      if doctype.publicId and '"' in doctype.publicId:
         # Valid characters: <space> | <newline> | <linefeed> |
         #                   [a-zA-Z0-9] | [-'()+,./:=?;!*#@$_%]
         public = "'%s'" % doctype.publicId
      else:
         public = '"%s"' % doctype.publicId
      if doctype.publicId and doctype.systemId:
         self._write(' PUBLIC %s %s' % (public, system))
      elif doctype.systemId:
         self._write(' SYSTEM %s' % system)
      if doctype.entities or doctype.notations:
         self._write(' [')
         self._depth = self._depth + 1
         self._visitNamedNodeMap(doctype.entities)
         self._visitNamedNodeMap(doctype.notations)
         self._depth = self._depth - 1
         self._tryIndent()
         self._write(']>')
      else:
         self._write('>')
      self._inText = 0
      return

   def _visitEntity(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!ENTITY %s' % node.nodeName)
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      node.notationName and self._write(' NDATA %s' % node.notationName)
      self._write('>')
      return

   def _visitNotation(self, node):
      """Visited from a NamedNodeMap in DocumentType"""
      self._tryIndent()
      self._write('<!NOTATION %s' % node.nodeName)
      node.publicId and self._write(' PUBLIC %s' % node.publicId)
      node.systemId and self._write(' SYSTEM %s' % node.systemId)
      self._write('>')
      return

   def _visitCDATASection(self, node):
      self._tryIndent()
      self._write('<![CDATA[%s]]>' % (node.data))
      self._inText = 0
      return

   def _visitComment(self, node):
      self._tryIndent()
      self._write('<!--%s-->' % (node.data))
      self._inText = 0
      return

   def _visitEntityReference(self, node):
      self._write('&%s;' % node.nodeName)
      self._inText = 1
      return

   def _visitProcessingInstruction(self, node):
      self._tryIndent()
      self._write('<?%s %s?>' % (node.target, node.data))
      self._inText = 0
      return

def _encodeText(text, encoding):
   """
   @copyright: This code, prior to customization, was part of the PyXML codebase,
   and before that was part of the 4DOM suite developed by Fourthought, Inc.
   In its original form, it was attributed to Martin v. Löwis and was
   Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.
   """
   encoder = codecs.lookup(encoding)[0]  # encode,decode,reader,writer
   if not isinstance(text, UnicodeType):
      text = unicode(text, "utf-8")
   return encoder(text)[0]  # result,size

def _translateCDATAAttr(characters):
   """
   Handles normalization and some intelligence about quoting.

   @copyright: This code, prior to customization, was part of the PyXML codebase,
   and before that was part of the 4DOM suite developed by Fourthought, Inc.
   In its original form, it was Copyright (c) 2000 Fourthought Inc, USA;
   All Rights Reserved.
   """
   if not characters:
      return '', "'"
   if "'" in characters:
      delimiter = '"'
      new_chars = re.sub('"', '&quot;', characters)
   else:
      delimiter = "'"
      new_chars = re.sub("'", '&apos;', characters)
   #FIXME: There's more to normalization
   #Convert attribute new-lines to character entity
   # characters is possibly shorter than new_chars (no entities)
   if "\n" in characters:
      new_chars = re.sub('\n', '&#10;', new_chars)
   return new_chars, delimiter

#Note: Unicode object only for now
def _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0):
   """
   @copyright: This code, prior to customization, was part of the PyXML codebase,
   and before that was part of the 4DOM suite developed by Fourthought, Inc.
   In its original form, it was Copyright (c) 2000 Fourthought Inc, USA;
   All Rights Reserved.
   """
   CDATA_CHAR_PATTERN = re.compile('[&<]|]]>')
   CHAR_TO_ENTITY = { '&': '&amp;', '<': '&lt;', ']]>': ']]&gt;', }
   ILLEGAL_LOW_CHARS = '[\x01-\x08\x0B-\x0C\x0E-\x1F]'
   ILLEGAL_HIGH_CHARS = '\xEF\xBF[\xBE\xBF]'
   XML_ILLEGAL_CHAR_PATTERN = re.compile('%s|%s'%(ILLEGAL_LOW_CHARS, ILLEGAL_HIGH_CHARS))
   if not characters:
      return ''
   if not markupSafe:
      if CDATA_CHAR_PATTERN.search(characters):
         new_string = CDATA_CHAR_PATTERN.subn(lambda m, d=CHAR_TO_ENTITY: d[m.group()], characters)[0]
      else:
         new_string = characters
      if prev_chars[-2:] == ']]' and characters[0] == '>':
         new_string = '&gt;' + new_string[1:]
   else:
      new_string = characters
   #Note: use decimal char entity rep because some browsers are broken
   #FIXME: This will bomb for high characters.  Should, for instance, detect
   #The UTF-8 for 0xFFFE and put out &#xFFFE;
   if XML_ILLEGAL_CHAR_PATTERN.search(new_string):
      new_string = XML_ILLEGAL_CHAR_PATTERN.subn(lambda m: '&#%i;' % ord(m.group()), new_string)[0]
   new_string = _encodeText(new_string, encoding)
   return new_string

CedarBackup2-2.26.5/CedarBackup2/extend/0002775000175000017500000000000012642035650021300 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/CedarBackup2/extend/mbox.py0000664000175000017500000015335712560016766022631 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation.
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up mbox email files. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up mbox email files. Backing up email ================ Email folders (often stored as mbox flatfiles) are not well-suited being backed up with an incremental backup like the one offered by Cedar Backup. This is because mbox files often change on a daily basis, forcing the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large folders. (Note that the alternative maildir format does not share this problem, since it typically uses one file per message.) One solution to this problem is to design a smarter incremental backup process, which backs up baseline content on the first day of the week, and then backs up only new messages added to that folder on every other day of the week. This way, the backup for any single day is only as large as the messages placed into the folder on that day. The backup isn't as "perfect" as the incremental backup process, because it doesn't preserve information about messages deleted from the backed-up folder. 
However, it should be much more space-efficient, and in a recovery situation, it seems better to restore too much data rather than too little. What is this extension? ======================= This is a Cedar Backup extension used to back up mbox email files via the Cedar Backup command line. Individual mbox files or directories containing mbox files can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. It implements the "smart" incremental backup process discussed above, using functionality provided by the C{grepmail} utility. This extension requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The mbox action is conceptually similar to the standard collect action, except that mbox directories are not collected recursively. This implies some configuration changes (i.e. there's no need for global exclusions or an ignore file). If you back up a directory, all of the mbox files in that directory are backed up into a single tar file using the indicated compression method. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import datetime import pickle import tempfile from bz2 import BZ2File from gzip import GzipFile # Cedar Backup modules from CedarBackup2.filesystem import FilesystemList, BackupFileList from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES from CedarBackup2.util import isStartOfWeek, buildNormalizedPath from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, UnorderedList, RegexList, encodePath, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.mbox") GREPMAIL_COMMAND = [ "grepmail", ] REVISION_PATH_EXTENSION = "mboxlast" ######################################################################## # MboxFile class definition ######################################################################## class MboxFile(object): """ Class representing mbox file configuration.. The following restrictions exist on data in this class: - The absolute path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, compressMode """ def __init__(self, absolutePath=None, collectMode=None, compressMode=None): """ Constructor for the C{MboxFile} class. You should never directly instantiate this class. 
@param absolutePath: Absolute path to an mbox file on disk. @param collectMode: Overridden collect mode for this mbox file. @param compressMode: Overridden compression mode for this mbox file. """ self._absolutePath = None self._collectMode = None self._compressMode = None self.absolutePath = absolutePath self.collectMode = collectMode self.compressMode = compressMode def __repr__(self): """ Official string representation for class instance. """ return "MboxFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Absolute path must be, er, an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. 
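The field-by-field C{__cmp__} ladder above returns the comparison of the first field that differs. The same ordering can be expressed by walking the fields as a tuple; a minimal sketch, using a small helper in place of Python 2's built-in C{cmp}:

```python
def cmp_values(a, b):
    # Stand-in for Python 2's built-in cmp(): returns -1, 0, or 1.
    return (a > b) - (a < b)

def ladder_cmp(self_fields, other_fields):
    """Compare (absolutePath, collectMode, compressMode) tuples the way
    MboxFile.__cmp__ does: the first differing field decides the result."""
    for mine, theirs in zip(self_fields, other_fields):
        if mine != theirs:
            return cmp_values(mine, theirs)
    return 0

equal = ladder_cmp(("/a", "daily", "gzip"), ("/a", "daily", "gzip"))
less = ladder_cmp(("/a", "daily", "bzip2"), ("/a", "daily", "gzip"))
greater = ladder_cmp(("/b", "daily", "gzip"), ("/a", "weekly", "gzip"))
```

Note how the third call returns 1 purely from the first field: later fields are never consulted once a difference is found, exactly as in the ladder.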
""" if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox file.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox file.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox file.") ######################################################################## # MboxDir class definition ######################################################################## class MboxDir(object): """ Class representing mbox directory configuration.. The following restrictions exist on data in this class: - The absolute path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. Unlike collect directory configuration, this is the only place exclusions are allowed (no global exclusions at the configuration level). Also, we only allow relative exclusions and there is no configured ignore file. This is because mbox directory backups are not recursive. 
@sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, compressMode, relativeExcludePaths, excludePatterns """ def __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None): """ Constructor for the C{MboxDir} class. You should never directly instantiate this class. @param absolutePath: Absolute path to an mbox directory on disk. @param collectMode: Overridden collect mode for this directory. @param compressMode: Overridden compression mode for this directory. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude. """ self._absolutePath = None self._collectMode = None self._compressMode = None self._relativeExcludePaths = None self._excludePatterns = None self.absolutePath = absolutePath self.collectMode = collectMode self.compressMode = compressMode self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "MboxDir(%s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode, self.relativeExcludePaths, self.excludePatterns) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Absolute path must be, er, an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception, e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. """ return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. 
""" return self._excludePatterns absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox directory.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox directory.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox directory.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # MboxConfig class definition ######################################################################## class MboxConfig(object): """ Class representing mbox configuration. Mbox configuration is used for backing up mbox email files. The following restrictions exist on data in this class: - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The C{mboxFiles} list must be a list of C{MboxFile} objects - The C{mboxDirs} list must be a list of C{MboxDir} objects For the C{mboxFiles} and C{mboxDirs} lists, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element is of the proper type. Unlike collect configuration, no global exclusions are allowed on this level. We only allow relative exclusions at the mbox directory level. Also, there is no configured ignore file. This is because mbox directory backups are not recursive. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, mboxFiles, mboxDirs """ def __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None): """ Constructor for the C{MboxConfig} class. @param collectMode: Default collect mode. @param compressMode: Default compress mode. @param mboxFiles: List of mbox files to back up @param mboxDirs: List of mbox directories to back up @raise ValueError: If one of the values is invalid. """ self._collectMode = None self._compressMode = None self._mboxFiles = None self._mboxDirs = None self.collectMode = collectMode self.compressMode = compressMode self.mboxFiles = mboxFiles self.mboxDirs = mboxDirs def __repr__(self): """ Official string representation for class instance. """ return "MboxConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.mboxFiles, self.mboxDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.mboxFiles != other.mboxFiles: if self.mboxFiles < other.mboxFiles: return -1 else: return 1 if self.mboxDirs != other.mboxDirs: if self.mboxDirs < other.mboxDirs: return -1 else: return 1 return 0 def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setMboxFiles(self, value): """ Property target used to set the mboxFiles list. Either the value must be C{None} or each element must be an C{MboxFile}. @raise ValueError: If the value is not an C{MboxFile} """ if value is None: self._mboxFiles = None else: try: saved = self._mboxFiles self._mboxFiles = ObjectTypeList(MboxFile, "MboxFile") self._mboxFiles.extend(value) except Exception, e: self._mboxFiles = saved raise e def _getMboxFiles(self): """ Property target used to get the mboxFiles list. """ return self._mboxFiles def _setMboxDirs(self, value): """ Property target used to set the mboxDirs list. Either the value must be C{None} or each element must be an C{MboxDir}. @raise ValueError: If the value is not an C{MboxDir} """ if value is None: self._mboxDirs = None else: try: saved = self._mboxDirs self._mboxDirs = ObjectTypeList(MboxDir, "MboxDir") self._mboxDirs.extend(value) except Exception, e: self._mboxDirs = saved raise e def _getMboxDirs(self): """ Property target used to get the mboxDirs list. 
""" return self._mboxDirs collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") mboxFiles = property(_getMboxFiles, _setMboxFiles, None, doc="List of mbox files to back up.") mboxDirs = property(_getMboxDirs, _setMboxDirs, None, doc="List of mbox directories to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Mbox-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, mbox, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. 
@note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._mbox = None self.mbox = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.mbox) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.mbox != other.mbox: if self.mbox < other.mbox: return -1 else: return 1 return 0 def _setMbox(self, value): """ Property target used to set the mbox configuration value. If not C{None}, the value must be a C{MboxConfig} object. 
@raise ValueError: If the value is not a C{MboxConfig} """ if value is None: self._mbox = None else: if not isinstance(value, MboxConfig): raise ValueError("Value must be a C{MboxConfig} object.") self._mbox = value def _getMbox(self): """ Property target used to get the mbox configuration value. """ return self._mbox mbox = property(_getMbox, _setMbox, None, "Mbox configuration in terms of a C{MboxConfig} object.") def validate(self): """ Validates configuration represented by the object. Mbox configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of mbox files and directories must contain at least one entry. Each configured file or directory must contain an absolute path, and must either take collect mode and compress mode configuration from the parent C{MboxConfig} object, or set each value on its own. @raise ValueError: If one of the validations fails. """ if self.mbox is None: raise ValueError("Mbox section is required.") if (self.mbox.mboxFiles is None or len(self.mbox.mboxFiles) < 1) and \ (self.mbox.mboxDirs is None or len(self.mbox.mboxDirs) < 1): raise ValueError("At least one mbox file or directory must be configured.") if self.mbox.mboxFiles is not None: for mboxFile in self.mbox.mboxFiles: if mboxFile.absolutePath is None: raise ValueError("Each mbox file must set an absolute path.") if self.mbox.collectMode is None and mboxFile.collectMode is None: raise ValueError("Collect mode must either be set in parent mbox section or individual mbox file.") if self.mbox.compressMode is None and mboxFile.compressMode is None: raise ValueError("Compress mode must either be set in parent mbox section or individual mbox file.") if self.mbox.mboxDirs is not None: for mboxDir in self.mbox.mboxDirs: if mboxDir.absolutePath is None: raise ValueError("Each mbox directory must set an absolute path.") if self.mbox.collectMode is None and mboxDir.collectMode is None: raise ValueError("Collect mode must either 
be set in parent mbox section or individual mbox directory.") if self.mbox.compressMode is None and mboxDir.compressMode is None: raise ValueError("Compress mode must either be set in parent mbox section or individual mbox directory.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: collectMode //cb_config/mbox/collect_mode compressMode //cb_config/mbox/compress_mode We also add groups of the following items, one list element per item:: mboxFiles //cb_config/mbox/file mboxDirs //cb_config/mbox/dir The mbox files and mbox directories are added by L{_addMboxFile} and L{_addMboxDir}. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.mbox is not None: sectionNode = addContainerNode(xmlDom, parentNode, "mbox") addStringNode(xmlDom, sectionNode, "collect_mode", self.mbox.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", self.mbox.compressMode) if self.mbox.mboxFiles is not None: for mboxFile in self.mbox.mboxFiles: LocalConfig._addMboxFile(xmlDom, sectionNode, mboxFile) if self.mbox.mboxDirs is not None: for mboxDir in self.mbox.mboxDirs: LocalConfig._addMboxDir(xmlDom, sectionNode, mboxDir) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the mbox configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._mbox = LocalConfig._parseMbox(parentNode) @staticmethod def _parseMbox(parent): """ Parses an mbox configuration section. 
We read the following individual fields:: collectMode //cb_config/mbox/collect_mode compressMode //cb_config/mbox/compress_mode We also read groups of the following items, one list element per item:: mboxFiles //cb_config/mbox/file mboxDirs //cb_config/mbox/dir The mbox files are parsed by L{_parseMboxFiles} and the mbox directories are parsed by L{_parseMboxDirs}. @param parent: Parent node to search beneath. @return: C{MboxConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ mbox = None section = readFirstChild(parent, "mbox") if section is not None: mbox = MboxConfig() mbox.collectMode = readString(section, "collect_mode") mbox.compressMode = readString(section, "compress_mode") mbox.mboxFiles = LocalConfig._parseMboxFiles(section) mbox.mboxDirs = LocalConfig._parseMboxDirs(section) return mbox @staticmethod def _parseMboxFiles(parent): """ Reads a list of C{MboxFile} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode collect_mode compressMode compress_mode @param parent: Parent node to search beneath. @return: List of C{MboxFile} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "file"): if isElement(entry): mboxFile = MboxFile() mboxFile.absolutePath = readString(entry, "abs_path") mboxFile.collectMode = readString(entry, "collect_mode") mboxFile.compressMode = readString(entry, "compress_mode") lst.append(mboxFile) if lst == []: lst = None return lst @staticmethod def _parseMboxDirs(parent): """ Reads a list of C{MboxDir} objects from immediately beneath the parent. 
We read the following individual fields:: absolutePath abs_path collectMode collect_mode compressMode compress_mode We also read groups of the following items, one list element per item:: relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. @param parent: Parent node to search beneath. @return: List of C{MboxDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "dir"): if isElement(entry): mboxDir = MboxDir() mboxDir.absolutePath = readString(entry, "abs_path") mboxDir.collectMode = readString(entry, "collect_mode") mboxDir.compressMode = readString(entry, "compress_mode") (mboxDir.relativeExcludePaths, mboxDir.excludePatterns) = LocalConfig._parseExclusions(entry) lst.append(mboxDir) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: relative exclude/rel_path patterns exclude/pattern If there are none of some pattern (i.e. no relative path items) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (relative, patterns) exclusions. """ section = readFirstChild(parentNode, "exclude") if section is None: return (None, None) else: relative = readStringList(section, "rel_path") patterns = readStringList(section, "pattern") return (relative, patterns) @staticmethod def _addMboxFile(xmlDom, parentNode, mboxFile): """ Adds an mbox file container as the next child of a parent. We add the following fields to the document:: absolutePath file/abs_path collectMode file/collect_mode compressMode file/compress_mode The node itself is created as the next child of the parent node. This method only adds one mbox file node. The parent must loop for each mbox file in the C{MboxConfig} object. 
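Collecting the element names read by C{_parseMbox}, C{_parseMboxFiles}, C{_parseMboxDirs} and C{_parseExclusions}, a configuration section presumably takes the shape below. The sketch inspects it with plain C{xml.dom.minidom} rather than the C{xmlutil} helpers, and the paths are made up for illustration:

```python
import xml.dom.minidom

# Hypothetical <mbox> section assembled from the node names used by the
# parsing methods; the abs_path values are invented examples.
MBOX_XML = """<cb_config>
  <mbox>
    <collect_mode>incr</collect_mode>
    <compress_mode>gzip</compress_mode>
    <file>
      <abs_path>/home/user/mail/inbox</abs_path>
    </file>
    <dir>
      <abs_path>/home/user/mail/folders</abs_path>
      <exclude>
        <rel_path>spam</rel_path>
        <pattern>.*lock</pattern>
      </exclude>
    </dir>
  </mbox>
</cb_config>"""

dom = xml.dom.minidom.parseString(MBOX_XML)
section = dom.getElementsByTagName("mbox")[0]
collect_mode = section.getElementsByTagName("collect_mode")[0].firstChild.data
abs_paths = [n.firstChild.data for n in dom.getElementsByTagName("abs_path")]
```

This mirrors the parser's layout only; the real extension reads the same elements through C{readFirstChild}, C{readString} and C{readStringList}.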
If C{mboxFile} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param mboxFile: MboxFile to be added to the document. """ if mboxFile is not None: sectionNode = addContainerNode(xmlDom, parentNode, "file") addStringNode(xmlDom, sectionNode, "abs_path", mboxFile.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", mboxFile.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", mboxFile.compressMode) @staticmethod def _addMboxDir(xmlDom, parentNode, mboxDir): """ Adds an mbox directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode compressMode dir/compress_mode We also add groups of the following items, one list element per item:: relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one mbox directory node. The parent must loop for each mbox directory in the C{MboxConfig} object. If C{mboxDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param mboxDir: MboxDir to be added to the document. 
""" if mboxDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", mboxDir.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", mboxDir.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", mboxDir.compressMode) if ((mboxDir.relativeExcludePaths is not None and mboxDir.relativeExcludePaths != []) or (mboxDir.excludePatterns is not None and mboxDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if mboxDir.relativeExcludePaths is not None: for relativePath in mboxDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if mboxDir.excludePatterns is not None: for pattern in mboxDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the mbox backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing mbox extended action.") newRevision = datetime.datetime.today() # mark here so all actions are after this date/time if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) todayIsStart = isStartOfWeek(config.options.startingDay) fullBackup = options.full or todayIsStart logger.debug("Full backup flag is [%s]", fullBackup) if local.mbox.mboxFiles is not None: for mboxFile in local.mbox.mboxFiles: logger.debug("Working with mbox file [%s]", mboxFile.absolutePath) collectMode = _getCollectMode(local, mboxFile) compressMode = _getCompressMode(local, mboxFile) lastRevision = _loadLastRevision(config, mboxFile, fullBackup, collectMode) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Mbox file meets criteria to be backed up today.") _backupMboxFile(config, mboxFile.absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision) else: logger.debug("Mbox file will not be backed up, per collect mode.") if collectMode == 'incr': _writeNewRevision(config, mboxFile, newRevision) if local.mbox.mboxDirs is not None: for mboxDir in local.mbox.mboxDirs: logger.debug("Working with mbox directory [%s]", mboxDir.absolutePath) collectMode = _getCollectMode(local, mboxDir) compressMode = _getCompressMode(local, mboxDir) lastRevision = _loadLastRevision(config, mboxDir, fullBackup, collectMode) (excludePaths, excludePatterns) = _getExclusions(mboxDir) if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart): logger.debug("Mbox directory meets criteria to be backed up today.") _backupMboxDir(config, mboxDir.absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns) else: logger.debug("Mbox directory will not be backed up, per collect mode.") if collectMode == 'incr': 
_writeNewRevision(config, mboxDir, newRevision) logger.info("Executed the mbox extended action successfully.") def _getCollectMode(local, item): """ Gets the collect mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section. @param local: LocalConfig object. @param item: Mbox file or directory @return: Collect mode to use. """ if item.collectMode is None: collectMode = local.mbox.collectMode else: collectMode = item.collectMode logger.debug("Collect mode is [%s]", collectMode) return collectMode def _getCompressMode(local, item): """ Gets the compress mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section. @param local: LocalConfig object. @param item: Mbox file or directory @return: Compress mode to use. """ if item.compressMode is None: compressMode = local.mbox.compressMode else: compressMode = item.compressMode logger.debug("Compress mode is [%s]", compressMode) return compressMode def _getRevisionPath(config, item): """ Gets the path to the revision file associated with a repository. @param config: Cedar Backup configuration. @param item: Mbox file or directory @return: Absolute path to the revision file associated with the repository. """ normalized = buildNormalizedPath(item.absolutePath) filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) revisionPath = os.path.join(config.options.workingDir, filename) logger.debug("Revision file path is [%s]", revisionPath) return revisionPath def _loadLastRevision(config, item, fullBackup, collectMode): """ Loads the last revision date for this item from disk and returns it. If this is a full backup, or if the revision file cannot be loaded for some reason, then C{None} is returned. This indicates that there is no previous revision, so the entire mail file or directory should be backed up. 
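The pickle-based revision round trip described above can be sketched in isolation. This is a minimal, self-contained illustration (written in modern Python, unlike the Python 2 source here); the helper names `write_revision`/`load_revision` and the `.last` filename are hypothetical stand-ins for `_writeNewRevision()`/`_loadLastRevision()`, which add logging and ownership handling:

```python
import datetime
import os
import pickle
import tempfile

def write_revision(path, revision):
    # Persist the revision marker via pickle, as _writeNewRevision() does.
    with open(path, "wb") as handle:
        pickle.dump(revision, handle)

def load_revision(path):
    # Return the pickled revision, or None when no revision file exists;
    # None tells the caller to back up the entire mail file or directory.
    if not os.path.isfile(path):
        return None
    with open(path, "rb") as handle:
        return pickle.load(handle)

# Round trip: the datetime comes back exactly as written, so callers never
# need to worry about precision or string formats.
revision_path = os.path.join(tempfile.mkdtemp(), "mailbox.last")
write_revision(revision_path, datetime.datetime(2016, 1, 2, 12, 30))
```

Because the whole `datetime` object is pickled, no date parsing or formatting is needed on reload, which is exactly the point made in the `@note` below.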
   @note: We write the actual revision object to disk via pickle, so we don't
   deal with the datetime precision or format at all.  Whatever's in the
   object is what we write.

   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory
   @param fullBackup: Indicates whether this is a full backup
   @param collectMode: Indicates the collect mode for this item

   @return: Revision date as a datetime.datetime object or C{None}.
   """
   revisionPath = _getRevisionPath(config, item)
   if fullBackup:
      revisionDate = None
      logger.debug("Revision file ignored because this is a full backup.")
   elif collectMode in ['weekly', 'daily']:
      revisionDate = None
      logger.debug("No revision file based on collect mode [%s].", collectMode)
   else:
      logger.debug("Revision file will be used for non-full incremental backup.")
      if not os.path.isfile(revisionPath):
         revisionDate = None
         logger.debug("Revision file [%s] does not exist on disk.", revisionPath)
      else:
         try:
            revisionDate = pickle.load(open(revisionPath, "r"))
            logger.debug("Loaded revision file [%s] from disk: [%s]", revisionPath, revisionDate)
         except Exception:
            revisionDate = None
            logger.error("Failed loading revision file [%s] from disk.", revisionPath)
   return revisionDate

def _writeNewRevision(config, item, newRevision):
   """
   Writes new revision information to disk.

   If we can't write the revision file successfully for any reason, we'll log
   the condition but won't throw an exception.

   @note: We write the actual revision object to disk via pickle, so we don't
   deal with the datetime precision or format at all.  Whatever's in the
   object is what we write.

   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory
   @param newRevision: Revision date as a datetime.datetime object.
""" revisionPath = _getRevisionPath(config, item) try: pickle.dump(newRevision, open(revisionPath, "w")) changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new revision file [%s] to disk: [%s]", revisionPath, newRevision) except: logger.error("Failed to write revision file [%s] to disk.", revisionPath) def _getExclusions(mboxDir): """ Gets exclusions (file and patterns) associated with an mbox directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the mbox directory's relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the mbox directory's list of patterns. @param mboxDir: Mbox directory object. @return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if mboxDir.relativeExcludePaths is not None: for relativePath in mboxDir.relativeExcludePaths: paths.append(os.path.join(mboxDir.absolutePath, relativePath)) patterns = [] if mboxDir.excludePatterns is not None: patterns.extend(mboxDir.excludePatterns) logger.debug("Exclude paths: %s", paths) logger.debug("Exclude patterns: %s", patterns) return(paths, patterns) def _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None): """ Gets the backup file path (including correct extension) associated with an mbox path. We assume that if the target directory is passed in, that we're backing up a directory. Under these circumstances, we'll just use the basename of the individual path as the output file. @note: The backup path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object. @param config: Cedar Backup configuration. 
@param mboxPath: Path to the indicated mbox file or directory @param compressMode: Compress mode to use for this mbox path @param newRevision: Revision this backup path represents @param targetDir: Target directory in which the path should exist @return: Absolute path to the backup file associated with the repository. """ if targetDir is None: normalizedPath = buildNormalizedPath(mboxPath) revisionDate = newRevision.strftime("%Y%m%d") filename = "mbox-%s-%s" % (revisionDate, normalizedPath) else: filename = os.path.basename(mboxPath) if compressMode == 'gzip': filename = "%s.gz" % filename elif compressMode == 'bzip2': filename = "%s.bz2" % filename if targetDir is None: backupPath = os.path.join(config.collect.targetDir, filename) else: backupPath = os.path.join(targetDir, filename) logger.debug("Backup file path is [%s]", backupPath) return backupPath def _getTarfilePath(config, mboxPath, compressMode, newRevision): """ Gets the tarfile backup file path (including correct extension) associated with an mbox path. Along with the path, the tar archive mode is returned in a form that can be used with L{BackupFileList.generateTarfile}. @note: The tarfile path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object. @param config: Cedar Backup configuration. 
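The date-stamped naming scheme used by `_getBackupPath()` and `_getTarfilePath()` can be sketched as follows. This is an illustrative modern-Python sketch: `build_normalized_path` is a hypothetical stand-in whose exact behavior is assumed (the real `CedarBackup2.util.buildNormalizedPath` may flatten paths differently), and `backup_filename` is not part of the module:

```python
import datetime

def build_normalized_path(absolute_path):
    # Stand-in for buildNormalizedPath (exact behavior assumed): flatten an
    # absolute path into a single safe filename component.
    return absolute_path.strip("/").replace("/", "-")

def backup_filename(mbox_path, compress_mode, revision):
    # Mirrors the naming scheme described above: mbox-YYYYMMDD-<normalized>,
    # plus a compression extension when gzip or bzip2 is configured.
    name = "mbox-%s-%s" % (revision.strftime("%Y%m%d"),
                           build_normalized_path(mbox_path))
    if compress_mode == "gzip":
        name += ".gz"
    elif compress_mode == "bzip2":
        name += ".bz2"
    return name

# For example, backing up /var/mail/ken with gzip on 2016-01-02 yields
# "mbox-20160102-var-mail-ken.gz" under this stand-in normalization.
```

Only the YYYYMMDD date appears in the name; the precise revision `datetime` lives in the pickled revision file, so the filename never needs sub-day resolution.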
   @param mboxPath: Path to the indicated mbox file or directory
   @param compressMode: Compress mode to use for this mbox path
   @param newRevision: Revision this backup path represents

   @return: Tuple of (absolute path to tarfile, tar archive mode)
   """
   normalizedPath = buildNormalizedPath(mboxPath)
   revisionDate = newRevision.strftime("%Y%m%d")
   filename = "mbox-%s-%s.tar" % (revisionDate, normalizedPath)
   if compressMode == 'gzip':
      filename = "%s.gz" % filename
      archiveMode = "targz"
   elif compressMode == 'bzip2':
      filename = "%s.bz2" % filename
      archiveMode = "tarbz2"
   else:
      archiveMode = "tar"
   tarfilePath = os.path.join(config.collect.targetDir, filename)
   logger.debug("Tarfile path is [%s]", tarfilePath)
   return (tarfilePath, archiveMode)

def _getOutputFile(backupPath, compressMode):
   """
   Opens the output file used for saving backup information.

   If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
   compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
   return an object from the normal C{open()} method.

   @param backupPath: Path to file to open.
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

   @return: Output file object.
   """
   if compressMode == "gzip":
      return GzipFile(backupPath, "w")
   elif compressMode == "bzip2":
      return BZ2File(backupPath, "w")
   else:
      return open(backupPath, "w")

def _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode,
                    lastRevision, newRevision, targetDir=None):
   """
   Backs up an individual mbox file.

   @param config: Cedar Backup configuration.
   @param absolutePath: Path to mbox file to back up.
   @param fullBackup: Indicates whether this should be a full backup.
   @param collectMode: Indicates the collect mode for this item
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
   @param lastRevision: Date of last backup as datetime.datetime
   @param newRevision: Date of new (current) backup as datetime.datetime
   @param targetDir: Target directory to write the backed-up file into

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox file.
   """
   backupPath = _getBackupPath(config, absolutePath, compressMode, newRevision, targetDir=targetDir)
   outputFile = _getOutputFile(backupPath, compressMode)
   try:
      if fullBackup or collectMode != "incr" or lastRevision is None:
         args = [ "-a", "-u", absolutePath, ]  # remove duplicates but fetch entire mailbox
      else:
         revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S")  # ISO-8601 format; grepmail calls Date::Parse::str2time()
         args = [ "-a", "-u", "-d", "since %s" % revisionDate, absolutePath, ]
      command = resolveCommand(GREPMAIL_COMMAND)
      result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0]
      if result != 0:
         raise IOError("Error [%d] executing grepmail on [%s]." % (result, absolutePath))
   finally:
      outputFile.close()  # make sure compressed output is flushed to disk
   logger.debug("Completed backing up mailbox [%s].", absolutePath)
   return backupPath

def _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode,
                   lastRevision, newRevision, excludePaths, excludePatterns):
   """
   Backs up a directory containing mbox files.

   @param config: Cedar Backup configuration.
   @param absolutePath: Path to mbox directory to back up.
   @param fullBackup: Indicates whether this should be a full backup.
   @param collectMode: Indicates the collect mode for this item
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
   @param lastRevision: Date of last backup as datetime.datetime
   @param newRevision: Date of new (current) backup as datetime.datetime
   @param excludePaths: List of absolute paths to exclude.
   @param excludePatterns: List of patterns to exclude.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox file.
   """
   tmpdir = None
   tarList = None
   try:
      tmpdir = tempfile.mkdtemp(dir=config.options.workingDir)
      mboxList = FilesystemList()
      mboxList.excludeDirs = True
      mboxList.excludePaths = excludePaths
      mboxList.excludePatterns = excludePatterns
      mboxList.addDirContents(absolutePath, recursive=False)
      tarList = BackupFileList()
      for item in mboxList:
         backupPath = _backupMboxFile(config, item, fullBackup, collectMode,
                                      "none",  # no need to compress inside compressed tar
                                      lastRevision, newRevision, targetDir=tmpdir)
         tarList.addFile(backupPath)
      (tarfilePath, archiveMode) = _getTarfilePath(config, absolutePath, compressMode, newRevision)
      tarList.generateTarfile(tarfilePath, archiveMode, ignore=True, flat=True)
      changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Completed backing up directory [%s].", absolutePath)
   finally:
      if tarList is not None:  # guard against failure before the list was built
         for item in tarList:
            if os.path.exists(item):
               try:
                  os.remove(item)
               except Exception:
                  pass
      if tmpdir is not None:  # guard against failure creating the temporary directory
         try:
            os.rmdir(tmpdir)
         except Exception:
            pass

CedarBackup2-2.26.5/CedarBackup2/extend/sysinfo.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2005,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to save off important system recovery information. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to save off important system recovery information. This is a simple Cedar Backup extension used to save off important system recovery information. It saves off three types of information: - Currently-installed Debian packages via C{dpkg --get-selections} - Disk partition information via C{fdisk -l} - System-wide mounted filesystem contents, via C{ls -laR} The saved-off information is placed into the collect directory and is compressed using C{bzip2} to save space. This extension relies on the options and collect configurations in the standard Cedar Backup configuration file, but requires no new configuration of its own. No public functions other than the action are exposed since all of this is pretty simple. @note: If the C{dpkg} or C{fdisk} commands cannot be found in their normal locations or executed by the current user, those steps will be skipped and a note will be logged at the INFO level. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from bz2 import BZ2File # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.sysinfo") DPKG_PATH = "/usr/bin/dpkg" FDISK_PATH = "/sbin/fdisk" DPKG_COMMAND = [ DPKG_PATH, "--get-selections", ] FDISK_COMMAND = [ FDISK_PATH, "-l", ] LS_COMMAND = [ "ls", "-laR", "/", ] ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the sysinfo backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If the backup process fails for some reason. 
""" logger.debug("Executing sysinfo extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") _dumpDebianPackages(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) _dumpPartitionTable(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) _dumpFilesystemContents(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) logger.info("Executed the sysinfo extended action successfully.") def _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True): """ Dumps a list of currently installed Debian packages via C{dpkg}. @param targetDir: Directory to write output file into. @param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ if not os.path.exists(DPKG_PATH): logger.info("Not executing Debian package dump since %s doesn't seem to exist.", DPKG_PATH) elif not os.access(DPKG_PATH, os.X_OK): logger.info("Not executing Debian package dump since %s cannot be executed.", DPKG_PATH) else: (outputFile, filename) = _getOutputFile(targetDir, "dpkg-selections", compress) try: command = resolveCommand(DPKG_COMMAND) result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing Debian package dump." % result) finally: outputFile.close() if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after Debian package dump finished." % filename) changeOwnership(filename, backupUser, backupGroup) def _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True): """ Dumps information about the partition table via C{fdisk}. @param targetDir: Directory to write output file into. 
@param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ if not os.path.exists(FDISK_PATH): logger.info("Not executing partition table dump since %s doesn't seem to exist.", FDISK_PATH) elif not os.access(FDISK_PATH, os.X_OK): logger.info("Not executing partition table dump since %s cannot be executed.", FDISK_PATH) else: (outputFile, filename) = _getOutputFile(targetDir, "fdisk-l", compress) try: command = resolveCommand(FDISK_COMMAND) result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, outputFile=outputFile)[0] if result != 0: raise IOError("Error [%d] executing partition table dump." % result) finally: outputFile.close() if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after partition table dump finished." % filename) changeOwnership(filename, backupUser, backupGroup) def _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True): """ Dumps complete listing of filesystem contents via C{ls -laR}. @param targetDir: Directory to write output file into. @param backupUser: User which should own the resulting file. @param backupGroup: Group which should own the resulting file. @param compress: Indicates whether to compress the output file. @raise IOError: If the dump fails for some reason. """ (outputFile, filename) = _getOutputFile(targetDir, "ls-laR", compress) try: # Note: can't count on return status from 'ls', so we don't check it. command = resolveCommand(LS_COMMAND) executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile) finally: outputFile.close() if not os.path.exists(filename): raise IOError("File [%s] does not seem to exist after filesystem contents dump finished." 
% filename) changeOwnership(filename, backupUser, backupGroup) def _getOutputFile(targetDir, name, compress=True): """ Opens the output file used for saving a dump to the filesystem. The filename will be C{name.txt} (or C{name.txt.bz2} if C{compress} is C{True}), written in the target directory. @param targetDir: Target directory to write file in. @param name: Name of the file to create. @param compress: Indicates whether to write compressed output. @return: Tuple of (Output file object, filename) """ filename = os.path.join(targetDir, "%s.txt" % name) if compress: filename = "%s.bz2" % filename logger.debug("Dump file will be [%s].", filename) if compress: outputFile = BZ2File(filename, "w") else: outputFile = open(filename, "w") return (outputFile, filename) CedarBackup2-2.26.5/CedarBackup2/extend/subversion.py0000664000175000017500000016231712560016766024067 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005,2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up Subversion repositories. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up Subversion repositories. This is a Cedar Backup extension used to back up Subversion repositories via the Cedar Backup command line. Each Subversion repository can be backed using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. This extension requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). Although the repository type can be specified in configuration, that information is just kept around for reference. It doesn't affect the backup. Both kinds of repositories are backed up in the same way, using C{svnadmin dump} in an incremental mode. It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do that, then use the normal collect action. This is probably simpler, although it carries its own advantages and disadvantages (plus you will have to be careful to exclude the working directories Subversion uses when building an update to commit). Check the Subversion documentation for more information. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging import pickle from bz2 import BZ2File from gzip import GzipFile # Cedar Backup modules from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES from CedarBackup2.filesystem import FilesystemList from CedarBackup2.util import UnorderedList, RegexList from CedarBackup2.util import isStartOfWeek, buildNormalizedPath from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, encodePath, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.subversion") SVNLOOK_COMMAND = [ "svnlook", ] SVNADMIN_COMMAND = [ "svnadmin", ] REVISION_PATH_EXTENSION = "svnlast" ######################################################################## # RepositoryDir class definition ######################################################################## class RepositoryDir(object): """ Class representing Subversion repository directory. A repository directory is a directory that contains one or more Subversion repositories. The following restrictions exist on data in this class: - The directory path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. The repository type value is kept around just for reference. It doesn't affect the behavior of the backup. Relative exclusions are allowed here. 
However, there is no configured ignore file, because repository dir backups are not recursive. @sort: __init__, __repr__, __str__, __cmp__, directoryPath, collectMode, compressMode """ def __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None): """ Constructor for the C{RepositoryDir} class. @param repositoryType: Type of repository, for reference @param directoryPath: Absolute path of the Subversion parent directory @param collectMode: Overridden collect mode for this directory. @param compressMode: Overridden compression mode for this directory. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude """ self._repositoryType = None self._directoryPath = None self._collectMode = None self._compressMode = None self._relativeExcludePaths = None self._excludePatterns = None self.repositoryType = repositoryType self.directoryPath = directoryPath self.collectMode = collectMode self.compressMode = compressMode self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "RepositoryDir(%s, %s, %s, %s, %s, %s)" % (self.repositoryType, self.directoryPath, self.collectMode, self.compressMode, self.relativeExcludePaths, self.excludePatterns) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.repositoryType != other.repositoryType: if self.repositoryType < other.repositoryType: return -1 else: return 1 if self.directoryPath != other.directoryPath: if self.directoryPath < other.directoryPath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setRepositoryType(self, value): """ Property target used to set the repository type. There is no validation; this value is kept around just for reference. """ self._repositoryType = value def _getRepositoryType(self): """ Property target used to get the repository type. """ return self._repositoryType def _setDirectoryPath(self, value): """ Property target used to set the directory path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Repository path must be an absolute path.") self._directoryPath = encodePath(value) def _getDirectoryPath(self): """ Property target used to get the repository path. """ return self._directoryPath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." 
% VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception, e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. """ return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. 
""" return self._excludePatterns repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") directoryPath = property(_getDirectoryPath, _setDirectoryPath, None, doc="Absolute path of the Subversion parent directory.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # Repository class definition ######################################################################## class Repository(object): """ Class representing generic Subversion repository configuration.. The following restrictions exist on data in this class: - The respository path must be absolute. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. The repository type value is kept around just for reference. It doesn't affect the behavior of the backup. @sort: __init__, __repr__, __str__, __cmp__, repositoryPath, collectMode, compressMode """ def __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{Repository} class. @param repositoryType: Type of repository, for reference @param repositoryPath: Absolute path to a Subversion repository on disk. @param collectMode: Overridden collect mode for this directory. @param compressMode: Overridden compression mode for this directory. 
""" self._repositoryType = None self._repositoryPath = None self._collectMode = None self._compressMode = None self.repositoryType = repositoryType self.repositoryPath = repositoryPath self.collectMode = collectMode self.compressMode = compressMode def __repr__(self): """ Official string representation for class instance. """ return "Repository(%s, %s, %s, %s)" % (self.repositoryType, self.repositoryPath, self.collectMode, self.compressMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.repositoryType != other.repositoryType: if self.repositoryType < other.repositoryType: return -1 else: return 1 if self.repositoryPath != other.repositoryPath: if self.repositoryPath < other.repositoryPath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 return 0 def _setRepositoryType(self, value): """ Property target used to set the repository type. There is no validation; this value is kept around just for reference. """ self._repositoryType = value def _getRepositoryType(self): """ Property target used to get the repository type. """ return self._repositoryType def _setRepositoryPath(self, value): """ Property target used to set the repository path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. 
""" if value is not None: if not os.path.isabs(value): raise ValueError("Repository path must be an absolute path.") self._repositoryPath = encodePath(value) def _getRepositoryPath(self): """ Property target used to get the repository path. """ return self._repositoryPath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") repositoryPath = property(_getRepositoryPath, _setRepositoryPath, None, doc="Path to the repository to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.") ######################################################################## # SubversionConfig class definition ######################################################################## class SubversionConfig(object): """ Class representing Subversion configuration. 
Subversion configuration is used for backing up Subversion repositories. The following restrictions exist on data in this class: - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The repositories list must be a list of C{Repository} objects. - The repositoryDirs list must be a list of C{RepositoryDir} objects. For the two lists, validation is accomplished through the L{util.ObjectTypeList} list implementation that overrides common list methods and transparently ensures that each element has the correct type. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, repositories, repositoryDirs """ def __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None): """ Constructor for the C{SubversionConfig} class. @param collectMode: Default collect mode. @param compressMode: Default compress mode. @param repositories: List of Subversion repositories to back up. @param repositoryDirs: List of Subversion parent directories to back up. @raise ValueError: If one of the values is invalid. """ self._collectMode = None self._compressMode = None self._repositories = None self._repositoryDirs = None self.collectMode = collectMode self.compressMode = compressMode self.repositories = repositories self.repositoryDirs = repositoryDirs def __repr__(self): """ Official string representation for class instance. """ return "SubversionConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.repositories, self.repositoryDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
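For illustration only, the field-by-field comparison pattern used by the C{__cmp__} methods in this module can be sketched as follows (C{fieldCmp} is a hypothetical helper, not part of this module):

```python
# Hedged sketch of the __cmp__ pattern used throughout this module:
# compare attributes in order, returning on the first difference
# (reproducing Python 2 cmp() semantics by hand).
def fieldCmp(selfFields, otherFields):
    for x, y in zip(selfFields, otherFields):
        if x != y:
            return -1 if x < y else 1
    return 0
```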
""" if other is None: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.repositories != other.repositories: if self.repositories < other.repositories: return -1 else: return 1 if self.repositoryDirs != other.repositoryDirs: if self.repositoryDirs < other.repositoryDirs: return -1 else: return 1 return 0 def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setRepositories(self, value): """ Property target used to set the repositories list. Either the value must be C{None} or each element must be a C{Repository}. @raise ValueError: If the value is not a C{Repository} """ if value is None: self._repositories = None else: try: saved = self._repositories self._repositories = ObjectTypeList(Repository, "Repository") self._repositories.extend(value) except Exception, e: self._repositories = saved raise e def _getRepositories(self): """ Property target used to get the repositories list. 
""" return self._repositories def _setRepositoryDirs(self, value): """ Property target used to set the repositoryDirs list. Either the value must be C{None} or each element must be a C{Repository}. @raise ValueError: If the value is not a C{Repository} """ if value is None: self._repositoryDirs = None else: try: saved = self._repositoryDirs self._repositoryDirs = ObjectTypeList(RepositoryDir, "RepositoryDir") self._repositoryDirs.extend(value) except Exception, e: self._repositoryDirs = saved raise e def _getRepositoryDirs(self): """ Property target used to get the repositoryDirs list. """ return self._repositoryDirs collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") repositories = property(_getRepositories, _setRepositories, None, doc="List of Subversion repositories to back up.") repositoryDirs = property(_getRepositoryDirs, _setRepositoryDirs, None, doc="List of Subversion parent directories to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Subversion-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, subversion, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. 
If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._subversion = None self.subversion = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.subversion) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. 
Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.subversion != other.subversion: if self.subversion < other.subversion: return -1 else: return 1 return 0 def _setSubversion(self, value): """ Property target used to set the subversion configuration value. If not C{None}, the value must be a C{SubversionConfig} object. @raise ValueError: If the value is not a C{SubversionConfig} """ if value is None: self._subversion = None else: if not isinstance(value, SubversionConfig): raise ValueError("Value must be a C{SubversionConfig} object.") self._subversion = value def _getSubversion(self): """ Property target used to get the subversion configuration value. """ return self._subversion subversion = property(_getSubversion, _setSubversion, None, "Subversion configuration in terms of a C{SubversionConfig} object.") def validate(self): """ Validates configuration represented by the object. Subversion configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry. Each repository must contain a repository path, and then must be either able to take collect mode and compress mode configuration from the parent C{SubversionConfig} object, or must set each value on its own. @raise ValueError: If one of the validations fails. 
""" if self.subversion is None: raise ValueError("Subversion section is required.") if ((self.subversion.repositories is None or len(self.subversion.repositories) < 1) and (self.subversion.repositoryDirs is None or len(self.subversion.repositoryDirs) <1)): raise ValueError("At least one Subversion repository must be configured.") if self.subversion.repositories is not None: for repository in self.subversion.repositories: if repository.repositoryPath is None: raise ValueError("Each repository must set a repository path.") if self.subversion.collectMode is None and repository.collectMode is None: raise ValueError("Collect mode must either be set in parent section or individual repository.") if self.subversion.compressMode is None and repository.compressMode is None: raise ValueError("Compress mode must either be set in parent section or individual repository.") if self.subversion.repositoryDirs is not None: for repositoryDir in self.subversion.repositoryDirs: if repositoryDir.directoryPath is None: raise ValueError("Each repository directory must set a directory path.") if self.subversion.collectMode is None and repositoryDir.collectMode is None: raise ValueError("Collect mode must either be set in parent section or repository directory.") if self.subversion.compressMode is None and repositoryDir.compressMode is None: raise ValueError("Compress mode must either be set in parent section or repository directory.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: collectMode //cb_config/subversion/collectMode compressMode //cb_config/subversion/compressMode We also add groups of the following items, one list element per item:: repository //cb_config/subversion/repository repository_dir //cb_config/subversion/repository_dir @param xmlDom: DOM tree as from C{impl.createDocument()}. 
@param parentNode: Parent that the section should be appended to. """ if self.subversion is not None: sectionNode = addContainerNode(xmlDom, parentNode, "subversion") addStringNode(xmlDom, sectionNode, "collect_mode", self.subversion.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", self.subversion.compressMode) if self.subversion.repositories is not None: for repository in self.subversion.repositories: LocalConfig._addRepository(xmlDom, sectionNode, repository) if self.subversion.repositoryDirs is not None: for repositoryDir in self.subversion.repositoryDirs: LocalConfig._addRepositoryDir(xmlDom, sectionNode, repositoryDir) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the subversion configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._subversion = LocalConfig._parseSubversion(parentNode) @staticmethod def _parseSubversion(parent): """ Parses a subversion configuration section. We read the following individual fields:: collectMode //cb_config/subversion/collect_mode compressMode //cb_config/subversion/compress_mode We also read groups of the following item, one list element per item:: repositories //cb_config/subversion/repository repository_dirs //cb_config/subversion/repository_dir The repositories are parsed by L{_parseRepositories}, and the repository dirs are parsed by L{_parseRepositoryDirs}. @param parent: Parent node to search beneath. @return: C{SubversionConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" subversion = None section = readFirstChild(parent, "subversion") if section is not None: subversion = SubversionConfig() subversion.collectMode = readString(section, "collect_mode") subversion.compressMode = readString(section, "compress_mode") subversion.repositories = LocalConfig._parseRepositories(section) subversion.repositoryDirs = LocalConfig._parseRepositoryDirs(section) return subversion @staticmethod def _parseRepositories(parent): """ Reads a list of C{Repository} objects from immediately beneath the parent. We read the following individual fields:: repositoryType type repositoryPath abs_path collectMode collect_mode compressMode compess_mode The type field is optional, and its value is kept around only for reference. @param parent: Parent node to search beneath. @return: List of C{Repository} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "repository"): if isElement(entry): repository = Repository() repository.repositoryType = readString(entry, "type") repository.repositoryPath = readString(entry, "abs_path") repository.collectMode = readString(entry, "collect_mode") repository.compressMode = readString(entry, "compress_mode") lst.append(repository) if lst == []: lst = None return lst @staticmethod def _addRepository(xmlDom, parentNode, repository): """ Adds a repository container as the next child of a parent. We add the following fields to the document:: repositoryType repository/type repositoryPath repository/abs_path collectMode repository/collect_mode compressMode repository/compress_mode The node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository in the C{SubversionConfig} object. If C{repository} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. 
@param repository: Repository to be added to the document. """ if repository is not None: sectionNode = addContainerNode(xmlDom, parentNode, "repository") addStringNode(xmlDom, sectionNode, "type", repository.repositoryType) addStringNode(xmlDom, sectionNode, "abs_path", repository.repositoryPath) addStringNode(xmlDom, sectionNode, "collect_mode", repository.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", repository.compressMode) @staticmethod def _parseRepositoryDirs(parent): """ Reads a list of C{RepositoryDir} objects from immediately beneath the parent. We read the following individual fields:: repositoryType type directoryPath abs_path collectMode collect_mode compressMode compress_mode We also read groups of the following items, one list element per item:: relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. The type field is optional, and its value is kept around only for reference. @param parent: Parent node to search beneath. @return: List of C{RepositoryDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parent, "repository_dir"): if isElement(entry): repositoryDir = RepositoryDir() repositoryDir.repositoryType = readString(entry, "type") repositoryDir.directoryPath = readString(entry, "abs_path") repositoryDir.collectMode = readString(entry, "collect_mode") repositoryDir.compressMode = readString(entry, "compress_mode") (repositoryDir.relativeExcludePaths, repositoryDir.excludePatterns) = LocalConfig._parseExclusions(entry) lst.append(repositoryDir) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: relative exclude/rel_path patterns exclude/pattern If there are no items of a given type (i.e.
no relative path items) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (relative, patterns) exclusions. """ section = readFirstChild(parentNode, "exclude") if section is None: return (None, None) else: relative = readStringList(section, "rel_path") patterns = readStringList(section, "pattern") return (relative, patterns) @staticmethod def _addRepositoryDir(xmlDom, parentNode, repositoryDir): """ Adds a repository dir container as the next child of a parent. We add the following fields to the document:: repositoryType repository_dir/type directoryPath repository_dir/abs_path collectMode repository_dir/collect_mode compressMode repository_dir/compress_mode We also add groups of the following items, one list element per item:: relativeExcludePaths repository_dir/exclude/rel_path excludePatterns repository_dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one repository dir node. The parent must loop for each repository dir in the C{SubversionConfig} object. If C{repositoryDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. @param repositoryDir: Repository dir to be added to the document.
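For illustration only, the XML shape this method produces for one repository dir entry can be sketched with plain C{xml.dom.minidom} rather than the project's C{addContainerNode()}/C{addStringNode()} helpers (the element values below are made-up examples):

```python
# Hypothetical sketch of the emitted repository_dir structure; the
# values ("BDB", "/opt/svn", etc.) are illustrative only.
from xml.dom.minidom import getDOMImplementation

dom = getDOMImplementation().createDocument(None, "cb_config", None)
section = dom.createElement("repository_dir")
dom.documentElement.appendChild(section)
for tag, text in [("type", "BDB"), ("abs_path", "/opt/svn"),
                  ("collect_mode", "incr"), ("compress_mode", "gzip")]:
    node = dom.createElement(tag)
    node.appendChild(dom.createTextNode(text))
    section.appendChild(node)
xml = dom.documentElement.toxml()
```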
""" if repositoryDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "repository_dir") addStringNode(xmlDom, sectionNode, "type", repositoryDir.repositoryType) addStringNode(xmlDom, sectionNode, "abs_path", repositoryDir.directoryPath) addStringNode(xmlDom, sectionNode, "collect_mode", repositoryDir.collectMode) addStringNode(xmlDom, sectionNode, "compress_mode", repositoryDir.compressMode) if ((repositoryDir.relativeExcludePaths is not None and repositoryDir.relativeExcludePaths != []) or (repositoryDir.excludePatterns is not None and repositoryDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if repositoryDir.relativeExcludePaths is not None: for relativePath in repositoryDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if repositoryDir.excludePatterns is not None: for pattern in repositoryDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the Subversion backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing Subversion extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) todayIsStart = isStartOfWeek(config.options.startingDay) fullBackup = options.full or todayIsStart logger.debug("Full backup flag is [%s]", fullBackup) if local.subversion.repositories is not None: for repository in local.subversion.repositories: _backupRepository(config, local, todayIsStart, fullBackup, repository) if local.subversion.repositoryDirs is not None: for repositoryDir in local.subversion.repositoryDirs: logger.debug("Working with repository directory [%s].", repositoryDir.directoryPath) for repositoryPath in _getRepositoryPaths(repositoryDir): repository = Repository(repositoryDir.repositoryType, repositoryPath, repositoryDir.collectMode, repositoryDir.compressMode) _backupRepository(config, local, todayIsStart, fullBackup, repository) logger.info("Completed backing up Subversion repository directory [%s].", repositoryDir.directoryPath) logger.info("Executed the Subversion extended action successfully.") def _getCollectMode(local, repository): """ Gets the collect mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section. @param repository: Repository object. @return: Collect mode to use. """ if repository.collectMode is None: collectMode = local.subversion.collectMode else: collectMode = repository.collectMode logger.debug("Collect mode is [%s]", collectMode) return collectMode def _getCompressMode(local, repository): """ Gets the compress mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section. @param local: LocalConfig object. @param repository: Repository object. @return: Compress mode to use. 
""" if repository.compressMode is None: compressMode = local.subversion.compressMode else: compressMode = repository.compressMode logger.debug("Compress mode is [%s]", compressMode) return compressMode def _getRevisionPath(config, repository): """ Gets the path to the revision file associated with a repository. @param config: Config object. @param repository: Repository object. @return: Absolute path to the revision file associated with the repository. """ normalized = buildNormalizedPath(repository.repositoryPath) filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) revisionPath = os.path.join(config.options.workingDir, filename) logger.debug("Revision file path is [%s]", revisionPath) return revisionPath def _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision): """ Gets the backup file path (including correct extension) associated with a repository. @param config: Config object. @param repositoryPath: Path to the indicated repository @param compressMode: Compress mode to use for this repository. @param startRevision: Starting repository revision. @param endRevision: Ending repository revision. @return: Absolute path to the backup file associated with the repository. """ normalizedPath = buildNormalizedPath(repositoryPath) filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath) if compressMode == 'gzip': filename = "%s.gz" % filename elif compressMode == 'bzip2': filename = "%s.bz2" % filename backupPath = os.path.join(config.collect.targetDir, filename) logger.debug("Backup file path is [%s]", backupPath) return backupPath def _getRepositoryPaths(repositoryDir): """ Gets a list of child repository paths within a repository directory. 
@param repositoryDir: RepositoryDir object. @return: List of child repository paths. """ (excludePaths, excludePatterns) = _getExclusions(repositoryDir) fsList = FilesystemList() fsList.excludeFiles = True fsList.excludeLinks = True fsList.excludePaths = excludePaths fsList.excludePatterns = excludePatterns fsList.addDirContents(path=repositoryDir.directoryPath, recursive=False, addSelf=False) return fsList def _getExclusions(repositoryDir): """ Gets exclusions (files and patterns) associated with a repository directory. The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the repository directory's relative exclude paths. The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the repository directory's list of patterns. @param repositoryDir: Repository directory object. @return: Tuple (files, patterns) indicating what to exclude. """ paths = [] if repositoryDir.relativeExcludePaths is not None: for relativePath in repositoryDir.relativeExcludePaths: paths.append(os.path.join(repositoryDir.directoryPath, relativePath)) patterns = [] if repositoryDir.excludePatterns is not None: patterns.extend(repositoryDir.excludePatterns) logger.debug("Exclude paths: %s", paths) logger.debug("Exclude patterns: %s", patterns) return (paths, patterns) def _backupRepository(config, local, todayIsStart, fullBackup, repository): """ Backs up an individual Subversion repository. This internal method wraps the public methods and adds some functionality to work better with the extended action itself. @param config: Cedar Backup configuration. @param local: Local configuration @param todayIsStart: Indicates whether today is start of week @param fullBackup: Full backup flag @param repository: Repository to operate on @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the Subversion dump.
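For illustration only, the way this method chooses the dump revision range can be sketched as follows (C{selectRange} is a hypothetical helper; a C{lastRevision} of -1 models the "no revision file" result of C{_loadLastRevision}):

```python
# Hedged sketch of the revision-range selection: full backups dump
# from revision zero, incremental backups resume one revision past
# the last recorded backup.
def selectRange(fullBackup, collectMode, lastRevision, youngest):
    if collectMode != "incr" or fullBackup:
        return (0, youngest)             # full dump from revision zero
    return (lastRevision + 1, youngest)  # resume past last backup
```

When the computed start exceeds the youngest revision, there is nothing new to dump and the repository is skipped.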
""" logger.debug("Working with repository [%s]", repository.repositoryPath) logger.debug("Repository type is [%s]", repository.repositoryType) collectMode = _getCollectMode(local, repository) compressMode = _getCompressMode(local, repository) revisionPath = _getRevisionPath(config, repository) if not (fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart)): logger.debug("Repository will not be backed up, per collect mode.") return logger.debug("Repository meets criteria to be backed up today.") if collectMode != "incr" or fullBackup: startRevision = 0 endRevision = getYoungestRevision(repository.repositoryPath) logger.debug("Using full backup, revision: (%d, %d).", startRevision, endRevision) else: if fullBackup: startRevision = 0 endRevision = getYoungestRevision(repository.repositoryPath) else: startRevision = _loadLastRevision(revisionPath) + 1 endRevision = getYoungestRevision(repository.repositoryPath) if startRevision > endRevision: logger.info("No need to back up repository [%s]; no new revisions.", repository.repositoryPath) return logger.debug("Using incremental backup, revision: (%d, %d).", startRevision, endRevision) backupPath = _getBackupPath(config, repository.repositoryPath, compressMode, startRevision, endRevision) outputFile = _getOutputFile(backupPath, compressMode) try: backupRepository(repository.repositoryPath, outputFile, startRevision, endRevision) finally: outputFile.close() if not os.path.exists(backupPath): raise IOError("Dump file [%s] does not seem to exist after backup completed." % backupPath) changeOwnership(backupPath, config.options.backupUser, config.options.backupGroup) if collectMode == "incr": _writeLastRevision(config, revisionPath, endRevision) logger.info("Completed backing up Subversion repository [%s].", repository.repositoryPath) def _getOutputFile(backupPath, compressMode): """ Opens the output file used for saving the Subversion dump. 
If the compress mode is "gzip", we'll open a C{GzipFile}, and if the compress mode is "bzip2", we'll open a C{BZ2File}. Otherwise, we'll just return an object from the normal C{open()} method. @param backupPath: Path to file to open. @param compressMode: Compress mode of file ("none", "gzip", "bzip"). @return: Output file object. """ if compressMode == "gzip": return GzipFile(backupPath, "w") elif compressMode == "bzip2": return BZ2File(backupPath, "w") else: return open(backupPath, "w") def _loadLastRevision(revisionPath): """ Loads the indicated revision file from disk into an integer. If we can't load the revision file successfully (either because it doesn't exist or for some other reason), then a revision of -1 will be returned - but the condition will be logged. This way, we err on the side of backing up too much, because anyone using this will presumably be adding 1 to the revision, so they don't duplicate any backups. @param revisionPath: Path to the revision file on disk. @return: Integer representing last backed-up revision, -1 on error or if none can be read. """ if not os.path.isfile(revisionPath): startRevision = -1 logger.debug("Revision file [%s] does not exist on disk.", revisionPath) else: try: startRevision = pickle.load(open(revisionPath, "r")) logger.debug("Loaded revision file [%s] from disk: %d.", revisionPath, startRevision) except: startRevision = -1 logger.error("Failed loading revision file [%s] from disk.", revisionPath) return startRevision def _writeLastRevision(config, revisionPath, endRevision): """ Writes the end revision to the indicated revision file on disk. If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception. @param config: Config object. @param revisionPath: Path to the revision file on disk. @param endRevision: Last revision backed up on this run. 
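For illustration only, the revision round-trip performed by C{_writeLastRevision()} and C{_loadLastRevision()} can be sketched as follows (the path and revision number are made-up; binary file modes are used here for portability, whereas the module itself uses text modes):

```python
# Hypothetical sketch: the ending revision is pickled to disk, then
# reloaded and incremented to find the next incremental start point.
import os
import pickle
import tempfile

revisionPath = os.path.join(tempfile.mkdtemp(), "repo.svnlast")
with open(revisionPath, "wb") as fp:
    pickle.dump(42, fp)                  # endRevision from this run
with open(revisionPath, "rb") as fp:
    startRevision = pickle.load(fp) + 1  # next incremental start
```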
""" try: pickle.dump(endRevision, open(revisionPath, "w")) changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup) logger.debug("Wrote new revision file [%s] to disk: %d.", revisionPath, endRevision) except: logger.error("Failed to write revision file [%s] to disk.", revisionPath) ############################## # backupRepository() function ############################## def backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion repository. The starting and ending revision values control an incremental backup. If the starting revision is not passed in, then revision zero (the start of the repository) is assumed. If the ending revision is not passed in, then the youngest revision in the database will be used as the endpoint. The backup data will be written into the passed-in back file. Normally, this would be an object as returned from C{open}, but it is possible to use something like a C{GzipFile} to write compressed output. The caller is responsible for closing the passed-in backup file. @note: This function should either be run as root or as the owner of the Subversion repository. @note: It is apparently I{not} a good idea to interrupt this function. Sometimes, this leaves the repository in a "wedged" state, which requires recovery using C{svnadmin recover}. @param repositoryPath: Path to Subversion repository to back up @type repositoryPath: String path representing Subversion repository on disk. @param backupFile: Python file object to use for writing backup. @type backupFile: Python file object as from C{open()} or C{file()}. @param startRevision: Starting repository revision to back up (for incremental backups) @type startRevision: Integer value >= 0. @param endRevision: Ending repository revision to back up (for incremental backups) @type endRevision: Integer value >= 0. @raise ValueError: If some value is missing or invalid. 
@raise IOError: If there is a problem executing the Subversion dump. """ if startRevision is None: startRevision = 0 if endRevision is None: endRevision = getYoungestRevision(repositoryPath) if int(startRevision) < 0: raise ValueError("Start revision must be >= 0.") if int(endRevision) < 0: raise ValueError("End revision must be >= 0.") if startRevision > endRevision: raise ValueError("Start revision must be <= end revision.") args = [ "dump", "--quiet", "-r%s:%s" % (startRevision, endRevision), "--incremental", repositoryPath, ] command = resolveCommand(SVNADMIN_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: raise IOError("Error [%d] executing Subversion dump for repository [%s]." % (result, repositoryPath)) logger.debug("Completed dumping subversion repository [%s].", repositoryPath) ################################# # getYoungestRevision() function ################################# def getYoungestRevision(repositoryPath): """ Gets the youngest (newest) revision in a Subversion repository using C{svnlook}. @note: This function should either be run as root or as the owner of the Subversion repository. @param repositoryPath: Path to Subversion repository to look in. @type repositoryPath: String path representing Subversion repository on disk. @return: Youngest revision as an integer. @raise ValueError: If there is a problem parsing the C{svnlook} output. @raise IOError: If there is a problem executing the C{svnlook} command. """ args = [ 'youngest', repositoryPath, ] command = resolveCommand(SVNLOOK_COMMAND) (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) if result != 0: raise IOError("Error [%d] executing 'svnlook youngest' for repository [%s]." 
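The revision validation and `svnadmin dump` argument construction above can be factored into one testable helper. This is a sketch, not the real API; `resolveCommand`/`executeCommand` are deliberately left out:

```python
def buildDumpArgs(repositoryPath, startRevision, endRevision):
   """Validate a revision range and build the svnadmin argument list used
   by backupRepository() above."""
   if int(startRevision) < 0 or int(endRevision) < 0:
      raise ValueError("Revisions must be >= 0.")
   if startRevision > endRevision:
      raise ValueError("Start revision must be <= end revision.")
   # --incremental makes the dump contain only the changes in this range,
   # which is what allows daily incremental dumps to be concatenated later.
   return ["dump", "--quiet", "-r%s:%s" % (startRevision, endRevision),
           "--incremental", repositoryPath]
```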
% (result, repositoryPath)) if len(output) != 1: raise ValueError("Unable to parse 'svnlook youngest' output.") return int(output[0]) ######################################################################## # Deprecated functionality ######################################################################## class BDBRepository(Repository): """ Class representing Subversion BDB (Berkeley Database) repository configuration. This object is deprecated. Use a simple L{Repository} instead. """ def __init__(self, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{BDBRepository} class. """ super(BDBRepository, self).__init__("BDB", repositoryPath, collectMode, compressMode) def __repr__(self): """ Official string representation for class instance. """ return "BDBRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode) class FSFSRepository(Repository): """ Class representing Subversion FSFS repository configuration. This object is deprecated. Use a simple L{Repository} instead. """ def __init__(self, repositoryPath=None, collectMode=None, compressMode=None): """ Constructor for the C{FSFSRepository} class. """ super(FSFSRepository, self).__init__("FSFS", repositoryPath, collectMode, compressMode) def __repr__(self): """ Official string representation for class instance. """ return "FSFSRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode) def backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion BDB repository. This function is deprecated. Use L{backupRepository} instead. """ return backupRepository(repositoryPath, backupFile, startRevision, endRevision) def backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None): """ Backs up an individual Subversion FSFS repository. This function is deprecated. Use L{backupRepository} instead. 
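Parsing the `svnlook youngest` output is the fragile step in `getYoungestRevision()` above: the command emits exactly one line containing the newest revision number. A minimal sketch of that parse (helper name is illustrative):

```python
def parseYoungestOutput(outputLines):
   """Parse the captured output of 'svnlook youngest'.

   Expects a single line holding the revision number; int() tolerates the
   trailing newline that command output normally carries."""
   if len(outputLines) != 1:
      raise ValueError("Unable to parse 'svnlook youngest' output.")
   return int(outputLines[0])
```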
""" return backupRepository(repositoryPath, backupFile, startRevision, endRevision) CedarBackup2-2.26.5/CedarBackup2/extend/encrypt.py0000664000175000017500000004652712560016766023360 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to encrypt staging directories. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to encrypt staging directories. When this extension is executed, all backed-up files in the configured Cedar Backup staging directory will be encrypted using gpg. Any directory which has already been encrypted (as indicated by the C{cback.encrypt} file) will be ignored. 
This extension requires a new configuration section and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging # Cedar Backup modules from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import readFirstChild, readString from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.encrypt") GPG_COMMAND = [ "gpg", ] VALID_ENCRYPT_MODES = [ "gpg", ] ENCRYPT_INDICATOR = "cback.encrypt" ######################################################################## # EncryptConfig class definition ######################################################################## class EncryptConfig(object): """ Class representing encrypt configuration. Encrypt configuration is used for encrypting staging directories. The following restrictions exist on data in this class: - The encrypt mode must be one of the values in L{VALID_ENCRYPT_MODES} - The encrypt target value must be a non-empty string @sort: __init__, __repr__, __str__, __cmp__, encryptMode, encryptTarget """ def __init__(self, encryptMode=None, encryptTarget=None): """ Constructor for the C{EncryptConfig} class. 
@param encryptMode: Encryption mode @param encryptTarget: Encryption target (for instance, GPG recipient) @raise ValueError: If one of the values is invalid. """ self._encryptMode = None self._encryptTarget = None self.encryptMode = encryptMode self.encryptTarget = encryptTarget def __repr__(self): """ Official string representation for class instance. """ return "EncryptConfig(%s, %s)" % (self.encryptMode, self.encryptTarget) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.encryptMode != other.encryptMode: if self.encryptMode < other.encryptMode: return -1 else: return 1 if self.encryptTarget != other.encryptTarget: if self.encryptTarget < other.encryptTarget: return -1 else: return 1 return 0 def _setEncryptMode(self, value): """ Property target used to set the encrypt mode. If not C{None}, the mode must be one of the values in L{VALID_ENCRYPT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ENCRYPT_MODES: raise ValueError("Encrypt mode must be one of %s." % VALID_ENCRYPT_MODES) self._encryptMode = value def _getEncryptMode(self): """ Property target used to get the encrypt mode. """ return self._encryptMode def _setEncryptTarget(self, value): """ Property target used to set the encrypt target. """ if value is not None: if len(value) < 1: raise ValueError("Encrypt target must be non-empty string.") self._encryptTarget = value def _getEncryptTarget(self): """ Property target used to get the encrypt target. 
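The `__cmp__` implementations in these config classes all repeat the same field-by-field three-way pattern. As a sketch of that pattern (this generic helper is not in Cedar Backup; each class inlines the comparisons):

```python
def cmpFields(left, right, fields):
   """Field-by-field three-way comparison with Python 2 __cmp__ semantics:
   returns -1/0/1, and any object sorts after None."""
   if right is None:
      return 1
   for name in fields:
      a, b = getattr(left, name), getattr(right, name)
      if a != b:
         return -1 if a < b else 1
   return 0
```

Under Python 3, classes like `EncryptConfig` would instead define `__eq__` and `__lt__` (or use `functools.total_ordering`), since `__cmp__` is no longer honored.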
""" return self._encryptTarget encryptMode = property(_getEncryptMode, _setEncryptMode, None, doc="Encrypt mode.") encryptTarget = property(_getEncryptTarget, _setEncryptTarget, None, doc="Encrypt target (i.e. GPG recipient).") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit encrypt-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, encrypt, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. 
@type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._encrypt = None self.encrypt = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.encrypt) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.encrypt != other.encrypt: if self.encrypt < other.encrypt: return -1 else: return 1 return 0 def _setEncrypt(self, value): """ Property target used to set the encrypt configuration value. If not C{None}, the value must be a C{EncryptConfig} object. @raise ValueError: If the value is not a C{EncryptConfig} """ if value is None: self._encrypt = None else: if not isinstance(value, EncryptConfig): raise ValueError("Value must be a C{EncryptConfig} object.") self._encrypt = value def _getEncrypt(self): """ Property target used to get the encrypt configuration value. 
""" return self._encrypt encrypt = property(_getEncrypt, _setEncrypt, None, "Encrypt configuration in terms of a C{EncryptConfig} object.") def validate(self): """ Validates configuration represented by the object. Encrypt configuration must be filled in. Within that, both the encrypt mode and encrypt target must be filled in. @raise ValueError: If one of the validations fails. """ if self.encrypt is None: raise ValueError("Encrypt section is required.") if self.encrypt.encryptMode is None: raise ValueError("Encrypt mode must be set.") if self.encrypt.encryptTarget is None: raise ValueError("Encrypt target must be set.") def addConfig(self, xmlDom, parentNode): """ Adds an configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: encryptMode //cb_config/encrypt/encrypt_mode encryptTarget //cb_config/encrypt/encrypt_target @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.encrypt is not None: sectionNode = addContainerNode(xmlDom, parentNode, "encrypt") addStringNode(xmlDom, sectionNode, "encrypt_mode", self.encrypt.encryptMode) addStringNode(xmlDom, sectionNode, "encrypt_target", self.encrypt.encryptTarget) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the encrypt configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._encrypt = LocalConfig._parseEncrypt(parentNode) @staticmethod def _parseEncrypt(parent): """ Parses an encrypt configuration section. 
We read the following individual fields:: encryptMode //cb_config/encrypt/encrypt_mode encryptTarget //cb_config/encrypt/encrypt_target @param parent: Parent node to search beneath. @return: C{EncryptConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ encrypt = None section = readFirstChild(parent, "encrypt") if section is not None: encrypt = EncryptConfig() encrypt.encryptMode = readString(section, "encrypt_mode") encrypt.encryptTarget = readString(section, "encrypt_target") return encrypt ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the encrypt backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. 
@raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing encrypt extended action.") if config.options is None or config.stage is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.encrypt.encryptMode not in ["gpg", ]: raise ValueError("Unknown encrypt mode [%s]" % local.encrypt.encryptMode) if local.encrypt.encryptMode == "gpg": _confirmGpgRecipient(local.encrypt.encryptTarget) dailyDirs = findDailyDirs(config.stage.targetDir, ENCRYPT_INDICATOR) for dailyDir in dailyDirs: _encryptDailyDir(dailyDir, local.encrypt.encryptMode, local.encrypt.encryptTarget, config.options.backupUser, config.options.backupGroup) writeIndicatorFile(dailyDir, ENCRYPT_INDICATOR, config.options.backupUser, config.options.backupGroup) logger.info("Executed the encrypt extended action successfully.") ############################## # _encryptDailyDir() function ############################## def _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup): """ Encrypts the contents of a daily staging directory. Indicator files are ignored. All other files are encrypted. The only valid encrypt mode is C{"gpg"}. @param dailyDir: Daily directory to encrypt @param encryptMode: Encryption mode (only "gpg" is allowed) @param encryptTarget: Encryption target (GPG recipient for "gpg" mode) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @raise ValueError: If the encrypt mode is not supported. @raise ValueError: If the daily staging directory does not exist. 
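The per-file encryption step builds a gpg command line and derives the output name by appending `.gpg` to the source path (see `_encryptFileWithGpg()` below). A sketch of just that argument construction, with an illustrative helper name:

```python
def buildGpgEncryptArgs(sourcePath, recipient):
   """Return (encryptedPath, args) for a batch-mode gpg public-key
   encryption of sourcePath to the given recipient, written as binary
   output to sourcePath + '.gpg'."""
   encryptedPath = "%s.gpg" % sourcePath
   args = ["--batch", "--yes",           # never prompt; overwrite existing output
           "-e", "-r", recipient,        # encrypt to this recipient's public key
           "-o", encryptedPath, sourcePath]
   return encryptedPath, args
```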
""" logger.debug("Begin encrypting contents of [%s].", dailyDir) fileList = getBackupFiles(dailyDir) # ignores indicator files for path in fileList: _encryptFile(path, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=True) logger.debug("Completed encrypting contents of [%s].", dailyDir) ########################## # _encryptFile() function ########################## def _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False): """ Encrypts the source file using the indicated mode. The encrypted file will be owned by the indicated backup user and group. If C{removeSource} is C{True}, then the source file will be removed after it is successfully encrypted. Currently, only the C{"gpg"} encrypt mode is supported. @param sourcePath: Absolute path of the source file to encrypt @param encryptMode: Encryption mode (only "gpg" is allowed) @param encryptTarget: Encryption target (GPG recipient) @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @param removeSource: Indicates whether to remove the source file @return: Path to the newly-created encrypted file. @raise ValueError: If an invalid encrypt mode is passed in. @raise IOError: If there is a problem accessing, encrypting or removing the source file. """ if not os.path.exists(sourcePath): raise ValueError("Source path [%s] does not exist." % sourcePath) if encryptMode == 'gpg': encryptedPath = _encryptFileWithGpg(sourcePath, recipient=encryptTarget) else: raise ValueError("Unknown encrypt mode [%s]" % encryptMode) changeOwnership(encryptedPath, backupUser, backupGroup) if removeSource: if os.path.exists(sourcePath): try: os.remove(sourcePath) logger.debug("Completed removing old file [%s].", sourcePath) except: raise IOError("Failed to remove file [%s] after encrypting it." 
% (sourcePath))
   return encryptedPath

#################################
# _encryptFileWithGpg() function
#################################

def _encryptFileWithGpg(sourcePath, recipient):
   """
   Encrypts the indicated source file using GPG.

   The encrypted file will be in GPG's binary output format and will have the
   same name as the source file plus a C{".gpg"} extension.  The source file
   will not be modified or removed by this function call.

   @param sourcePath: Absolute path of file to be encrypted.
   @param recipient: Recipient name to be passed to GPG's C{"-r"} option

   @return: Path to the newly-created encrypted file.

   @raise IOError: If there is a problem encrypting the file.
   """
   encryptedPath = "%s.gpg" % sourcePath
   command = resolveCommand(GPG_COMMAND)
   args = [ "--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath, ]
   result = executeCommand(command, args)[0]
   if result != 0:
      raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, sourcePath))
   if not os.path.exists(encryptedPath):
      raise IOError("After call to [%s], encrypted file [%s] does not exist." % (command, encryptedPath))
   logger.debug("Completed encrypting file [%s] to [%s].", sourcePath, encryptedPath)
   return encryptedPath

##################################
# _confirmGpgRecipient() function
##################################

def _confirmGpgRecipient(recipient):
   """
   Confirms that a recipient's public key is known to GPG.

   Throws an exception if there is a problem, or returns normally otherwise.

   @param recipient: Recipient name

   @raise IOError: If the recipient's public key is not known to GPG.
   """
   command = resolveCommand(GPG_COMMAND)
   args = [ "--batch", "-k", recipient, ]  # should use --with-colons if the output will be parsed
   result = executeCommand(command, args)[0]
   if result != 0:
      raise IOError("GPG unable to find public key for [%s]."
% recipient) CedarBackup2-2.26.5/CedarBackup2/extend/postgresql.py0000664000175000017500000005605712642024010024054 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010 Kenneth J. Pronovici. # Copyright (c) 2006 Antoine Beaupre. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Antoine Beaupre # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up PostgreSQL databases. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # This file was created with a width of 132 characters, and NO tabs. ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up PostgreSQL databases. This is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It requires a new configurations section and is intended to be run either immediately before or immediately after the standard collect action. 
Aside from its own configuration, it requires the options and collect
configuration sections in the standard Cedar Backup configuration file.

The backup is done via the C{pg_dump} or C{pg_dumpall} commands included
with the PostgreSQL product.  Output can be compressed using C{gzip} or
C{bzip2}.  Administrators can configure the extension either to back up all
databases or to back up only specific databases.

The extension assumes that the current user has passwordless access to the
database, since there is no easy way to pass a password to the C{pg_dump}
client.  This can be accomplished using appropriate configuration in the
C{pg_hba.conf} file.

Note that this code always produces a full backup.  There is currently no
facility for making incremental backups.

You should always make C{/etc/cback.conf} unreadable to non-root users once
you place PostgreSQL configuration into it, since that configuration will
contain information about available PostgreSQL databases and usernames.

Use of this extension I{may} expose usernames in the process listing (via
C{ps}) when the backup is running, if the username is specified in the
configuration.

@author: Kenneth J.
Pronovici @author: Antoine Beaupre """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from gzip import GzipFile from bz2 import BZ2File # Cedar Backup modules from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean from CedarBackup2.config import VALID_COMPRESS_MODES from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.postgresql") POSTGRESQLDUMP_COMMAND = [ "pg_dump", ] POSTGRESQLDUMPALL_COMMAND = [ "pg_dumpall", ] ######################################################################## # PostgresqlConfig class definition ######################################################################## class PostgresqlConfig(object): """ Class representing PostgreSQL configuration. The PostgreSQL configuration information is used for backing up PostgreSQL databases. The following restrictions exist on data in this class: - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The 'all' flag must be 'Y' if no databases are defined. - The 'all' flag must be 'N' if any databases are defined. - Any values in the databases list must be strings. @sort: __init__, __repr__, __str__, __cmp__, user, all, databases """ def __init__(self, user=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622 """ Constructor for the C{PostgresqlConfig} class. @param user: User to execute backup as. @param compressMode: Compress mode for backed-up files. 
@param all: Indicates whether to back up all databases. @param databases: List of databases to back up. """ self._user = None self._compressMode = None self._all = None self._databases = None self.user = user self.compressMode = compressMode self.all = all self.databases = databases def __repr__(self): """ Official string representation for class instance. """ return "PostgresqlConfig(%s, %s, %s)" % (self.user, self.all, self.databases) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.user != other.user: if self.user < other.user: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.all != other.all: if self.all < other.all: return -1 else: return 1 if self.databases != other.databases: if self.databases < other.databases: return -1 else: return 1 return 0 def _setUser(self, value): """ Property target used to set the user value. """ if value is not None: if len(value) < 1: raise ValueError("User must be non-empty string.") self._user = value def _getUser(self): """ Property target used to get the user value. """ return self._user def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setAll(self, value): """ Property target used to set the 'all' flag. 
No validations, but we normalize the value to C{True} or C{False}. """ if value: self._all = True else: self._all = False def _getAll(self): """ Property target used to get the 'all' flag. """ return self._all def _setDatabases(self, value): """ Property target used to set the databases list. Either the value must be C{None} or each element must be a string. @raise ValueError: If the value is not a string. """ if value is None: self._databases = None else: for database in value: if len(database) < 1: raise ValueError("Each database must be a non-empty string.") try: saved = self._databases self._databases = ObjectTypeList(basestring, "string") self._databases.extend(value) except Exception, e: self._databases = saved raise e def _getDatabases(self): """ Property target used to get the databases list. """ return self._databases user = property(_getUser, _setUser, None, "User to execute backup as.") compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit PostgreSQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. 
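The two property setters just shown normalize the 'all' flag to a strict boolean and reject empty database names. A standalone sketch of those rules (names illustrative; the real code stores databases in an `ObjectTypeList` keyed on Python 2's `basestring`, where this sketch uses `str`):

```python
def normalizeAll(value):
   """Normalize the 'all' flag to True/False, as _setAll() does."""
   return bool(value)

def validateDatabases(databases):
   """Require every configured database name to be a non-empty string,
   mirroring _setDatabases(); None passes through unchanged."""
   if databases is None:
      return None
   for database in databases:
      if not isinstance(database, str) or len(database) < 1:
         raise ValueError("Each database must be a non-empty string.")
   return list(databases)
```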
@sort: __init__, __repr__, __str__, __cmp__, postgresql, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._postgresql = None self.postgresql = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. 
""" return "LocalConfig(%s)" % (self.postgresql) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.postgresql != other.postgresql: if self.postgresql < other.postgresql: return -1 else: return 1 return 0 def _setPostgresql(self, value): """ Property target used to set the postgresql configuration value. If not C{None}, the value must be a C{PostgresqlConfig} object. @raise ValueError: If the value is not a C{PostgresqlConfig} """ if value is None: self._postgresql = None else: if not isinstance(value, PostgresqlConfig): raise ValueError("Value must be a C{PostgresqlConfig} object.") self._postgresql = value def _getPostgresql(self): """ Property target used to get the postgresql configuration value. """ return self._postgresql postgresql = property(_getPostgresql, _setPostgresql, None, "Postgresql configuration in terms of a C{PostgresqlConfig} object.") def validate(self): """ Validates configuration represented by the object. The compress mode must be filled in. Then, if the 'all' flag I{is} set, no databases are allowed, and if the 'all' flag is I{not} set, at least one database is required. @raise ValueError: If one of the validations fails. 
""" if self.postgresql is None: raise ValueError("PostgreSQL section is required.") if self.postgresql.compressMode is None: raise ValueError("Compress mode value is required.") if self.postgresql.all: if self.postgresql.databases is not None and self.postgresql.databases != []: raise ValueError("Databases cannot be specified if 'all' flag is set.") else: if self.postgresql.databases is None or len(self.postgresql.databases) < 1: raise ValueError("At least one PostgreSQL database must be indicated if 'all' flag is not set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: user //cb_config/postgresql/user compressMode //cb_config/postgresql/compress_mode all //cb_config/postgresql/all We also add groups of the following items, one list element per item:: database //cb_config/postgresql/database @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.postgresql is not None: sectionNode = addContainerNode(xmlDom, parentNode, "postgresql") addStringNode(xmlDom, sectionNode, "user", self.postgresql.user) addStringNode(xmlDom, sectionNode, "compress_mode", self.postgresql.compressMode) addBooleanNode(xmlDom, sectionNode, "all", self.postgresql.all) if self.postgresql.databases is not None: for database in self.postgresql.databases: addStringNode(xmlDom, sectionNode, "database", database) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the postgresql configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. 
""" (xmlDom, parentNode) = createInputDom(xmlData) self._postgresql = LocalConfig._parsePostgresql(parentNode) @staticmethod def _parsePostgresql(parent): """ Parses a postgresql configuration section. We read the following fields:: user //cb_config/postgresql/user compressMode //cb_config/postgresql/compress_mode all //cb_config/postgresql/all We also read groups of the following item, one list element per item:: databases //cb_config/postgresql/database @param parent: Parent node to search beneath. @return: C{PostgresqlConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ postgresql = None section = readFirstChild(parent, "postgresql") if section is not None: postgresql = PostgresqlConfig() postgresql.user = readString(section, "user") postgresql.compressMode = readString(section, "compress_mode") postgresql.all = readBoolean(section, "all") postgresql.databases = readStringList(section, "database") return postgresql ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the PostgreSQL backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing PostgreSQL extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.postgresql.all: logger.info("Backing up all databases.") _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, config.options.backupUser, config.options.backupGroup, None) if local.postgresql.databases is not None and local.postgresql.databases != []: logger.debug("Backing up %d individual databases.", len(local.postgresql.databases)) for database in local.postgresql.databases: logger.info("Backing up database [%s].", database) _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, config.options.backupUser, config.options.backupGroup, database) logger.info("Executed the PostgreSQL extended action successfully.") def _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None): """ Backs up an individual PostgreSQL database, or all databases. This internal method wraps the public method and adds some functionality, like figuring out a filename, etc. @param targetDir: Directory into which backups should be written. @param compressMode: Compress mode to be used for backed-up files. @param user: User to use for connecting to the database. @param backupUser: User to own resulting file. @param backupGroup: Group to own resulting file. @param database: Name of database, or C{None} for all databases. @return: Name of the generated backup file. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the PostgreSQL dump. """ (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) try: backupDatabase(user, outputFile, database) finally: outputFile.close() if not os.path.exists(filename): raise IOError("Dump file [%s] does not seem to exist after backup completed." 
% filename) changeOwnership(filename, backupUser, backupGroup) # pylint: disable=R0204 def _getOutputFile(targetDir, database, compressMode): """ Opens the output file used for saving the PostgreSQL dump. The filename is either C{"postgresqldump.txt"} or C{"postgresqldump-.txt"}. The C{".gz"} or C{".bz2"} extension is added if C{compress} is C{True}. @param targetDir: Target directory to write file in. @param database: Name of the database (if any) @param compressMode: Compress mode to be used for backed-up files. @return: Tuple of (Output file object, filename) """ if database is None: filename = os.path.join(targetDir, "postgresqldump.txt") else: filename = os.path.join(targetDir, "postgresqldump-%s.txt" % database) if compressMode == "gzip": filename = "%s.gz" % filename outputFile = GzipFile(filename, "w") elif compressMode == "bzip2": filename = "%s.bz2" % filename outputFile = BZ2File(filename, "w") else: outputFile = open(filename, "w") logger.debug("PostgreSQL dump file will be [%s].", filename) return (outputFile, filename) ############################ # backupDatabase() function ############################ def backupDatabase(user, backupFile, database=None): """ Backs up an individual PostgreSQL database, or all databases. This function backs up either a named local PostgreSQL database or all local PostgreSQL databases, using the passed in user for connectivity. This is I{always} a full backup. There is no facility for incremental backups. The backup data will be written into the passed-in back file. Normally, this would be an object as returned from C{open()}, but it is possible to use something like a C{GzipFile} to write compressed output. The caller is responsible for closing the passed-in backup file. @note: Typically, you would use the C{root} user to back up all databases. @param user: User to use for connecting to the database. @type user: String representing PostgreSQL username. @param backupFile: File use for writing backup. 
@type backupFile: Python file object as from C{open()} or C{file()}. @param database: Name of the database to be backed up. @type database: String representing database name, or C{None} for all databases. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the PostgreSQL dump. """ args = [] if user is not None: args.append('-U') args.append(user) if database is None: command = resolveCommand(POSTGRESQLDUMPALL_COMMAND) else: command = resolveCommand(POSTGRESQLDUMP_COMMAND) args.append(database) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: if database is None: raise IOError("Error [%d] executing PostgreSQL database dump for all databases." % result) else: raise IOError("Error [%d] executing PostgreSQL database dump for database [%s]." % (result, database)) CedarBackup2-2.26.5/CedarBackup2/extend/__init__.py0000664000175000017500000000263012560016766023416 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Official Cedar Backup Extensions This package provides official Cedar Backup extensions. These are Cedar Backup actions that are not part of the "standard" set of Cedar Backup actions, but are officially supported along with Cedar Backup. @author: Kenneth J. 
Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2.extend import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'amazons3', 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ] CedarBackup2-2.26.5/CedarBackup2/extend/mysql.py0000664000175000017500000006317612642024111023020 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : Provides an extension to back up MySQL databases. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to back up MySQL databases. 
This is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It requires a new configuration section and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file. The backup is done via the C{mysqldump} command included with the MySQL product. Output can be compressed using C{gzip} or C{bzip2}. Administrators can configure the extension either to back up all databases or to back up only specific databases. Note that this code always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I'll update this extension or provide another. The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably the best choice. The extension accepts a username and password in configuration. However, you probably do not want to provide those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to C{mysqldump} via the command-line C{--user} and C{--password} switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in C{/root/.my.cnf}:: [mysqldump] user = root password = <password> Regardless of whether you are using C{~/.my.cnf} or C{/etc/cback.conf} to store database login and password information, you should be careful about who is allowed to view that information.
Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode C{0600}). @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import logging from gzip import GzipFile from bz2 import BZ2File # Cedar Backup modules from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean from CedarBackup2.config import VALID_COMPRESS_MODES from CedarBackup2.util import resolveCommand, executeCommand from CedarBackup2.util import ObjectTypeList, changeOwnership ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.mysql") MYSQLDUMP_COMMAND = [ "mysqldump", ] ######################################################################## # MysqlConfig class definition ######################################################################## class MysqlConfig(object): """ Class representing MySQL configuration. The MySQL configuration information is used for backing up MySQL databases. The following restrictions exist on data in this class: - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. - The 'all' flag must be 'Y' if no databases are defined. - The 'all' flag must be 'N' if any databases are defined. - Any values in the databases list must be strings. @sort: __init__, __repr__, __str__, __cmp__, user, password, all, databases """ def __init__(self, user=None, password=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622 """ Constructor for the C{MysqlConfig} class. @param user: User to execute backup as. 
@param password: Password associated with user. @param compressMode: Compress mode for backed-up files. @param all: Indicates whether to back up all databases. @param databases: List of databases to back up. """ self._user = None self._password = None self._compressMode = None self._all = None self._databases = None self.user = user self.password = password self.compressMode = compressMode self.all = all self.databases = databases def __repr__(self): """ Official string representation for class instance. """ return "MysqlConfig(%s, %s, %s, %s)" % (self.user, self.password, self.all, self.databases) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.user != other.user: if self.user < other.user: return -1 else: return 1 if self.password != other.password: if self.password < other.password: return -1 else: return 1 if self.compressMode != other.compressMode: if self.compressMode < other.compressMode: return -1 else: return 1 if self.all != other.all: if self.all < other.all: return -1 else: return 1 if self.databases != other.databases: if self.databases < other.databases: return -1 else: return 1 return 0 def _setUser(self, value): """ Property target used to set the user value. """ if value is not None: if len(value) < 1: raise ValueError("User must be non-empty string.") self._user = value def _getUser(self): """ Property target used to get the user value. """ return self._user def _setPassword(self, value): """ Property target used to set the password value. """ if value is not None: if len(value) < 1: raise ValueError("Password must be non-empty string.") self._password = value def _getPassword(self): """ Property target used to get the password value. 
""" return self._password def _setCompressMode(self, value): """ Property target used to set the compress mode. If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COMPRESS_MODES: raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) self._compressMode = value def _getCompressMode(self): """ Property target used to get the compress mode. """ return self._compressMode def _setAll(self, value): """ Property target used to set the 'all' flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._all = True else: self._all = False def _getAll(self): """ Property target used to get the 'all' flag. """ return self._all def _setDatabases(self, value): """ Property target used to set the databases list. Either the value must be C{None} or each element must be a string. @raise ValueError: If the value is not a string. """ if value is None: self._databases = None else: for database in value: if len(database) < 1: raise ValueError("Each database must be a non-empty string.") try: saved = self._databases self._databases = ObjectTypeList(basestring, "string") self._databases.extend(value) except Exception, e: self._databases = saved raise e def _getDatabases(self): """ Property target used to get the databases list. 
""" return self._databases user = property(_getUser, _setUser, None, "User to execute backup as.") password = property(_getPassword, _setPassword, None, "Password associated with user.") compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit MySQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, mysql, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. 
Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._mysql = None self.mysql = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.mysql) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.mysql != other.mysql: if self.mysql < other.mysql: return -1 else: return 1 return 0 def _setMysql(self, value): """ Property target used to set the mysql configuration value. If not C{None}, the value must be a C{MysqlConfig} object. 
@raise ValueError: If the value is not a C{MysqlConfig} """ if value is None: self._mysql = None else: if not isinstance(value, MysqlConfig): raise ValueError("Value must be a C{MysqlConfig} object.") self._mysql = value def _getMysql(self): """ Property target used to get the mysql configuration value. """ return self._mysql mysql = property(_getMysql, _setMysql, None, "Mysql configuration in terms of a C{MysqlConfig} object.") def validate(self): """ Validates configuration represented by the object. The compress mode must be filled in. Then, if the 'all' flag I{is} set, no databases are allowed, and if the 'all' flag is I{not} set, at least one database is required. @raise ValueError: If one of the validations fails. """ if self.mysql is None: raise ValueError("Mysql section is required.") if self.mysql.compressMode is None: raise ValueError("Compress mode value is required.") if self.mysql.all: if self.mysql.databases is not None and self.mysql.databases != []: raise ValueError("Databases cannot be specified if 'all' flag is set.") else: if self.mysql.databases is None or len(self.mysql.databases) < 1: raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: user //cb_config/mysql/user password //cb_config/mysql/password compressMode //cb_config/mysql/compress_mode all //cb_config/mysql/all We also add groups of the following items, one list element per item:: database //cb_config/mysql/database @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. 
""" if self.mysql is not None: sectionNode = addContainerNode(xmlDom, parentNode, "mysql") addStringNode(xmlDom, sectionNode, "user", self.mysql.user) addStringNode(xmlDom, sectionNode, "password", self.mysql.password) addStringNode(xmlDom, sectionNode, "compress_mode", self.mysql.compressMode) addBooleanNode(xmlDom, sectionNode, "all", self.mysql.all) if self.mysql.databases is not None: for database in self.mysql.databases: addStringNode(xmlDom, sectionNode, "database", database) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the mysql configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._mysql = LocalConfig._parseMysql(parentNode) @staticmethod def _parseMysql(parentNode): """ Parses a mysql configuration section. We read the following fields:: user //cb_config/mysql/user password //cb_config/mysql/password compressMode //cb_config/mysql/compress_mode all //cb_config/mysql/all We also read groups of the following item, one list element per item:: databases //cb_config/mysql/database @param parentNode: Parent node to search beneath. @return: C{MysqlConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" mysql = None section = readFirstChild(parentNode, "mysql") if section is not None: mysql = MysqlConfig() mysql.user = readString(section, "user") mysql.password = readString(section, "password") mysql.compressMode = readString(section, "compress_mode") mysql.all = readBoolean(section, "all") mysql.databases = readStringList(section, "database") return mysql ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the MySQL backup action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If a backup could not be written for some reason. 
""" logger.debug("Executing MySQL extended action.") if config.options is None or config.collect is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if local.mysql.all: logger.info("Backing up all databases.") _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, config.options.backupUser, config.options.backupGroup, None) else: logger.debug("Backing up %d individual databases.", len(local.mysql.databases)) for database in local.mysql.databases: logger.info("Backing up database [%s].", database) _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, config.options.backupUser, config.options.backupGroup, database) logger.info("Executed the MySQL extended action successfully.") def _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None): """ Backs up an individual MySQL database, or all databases. This internal method wraps the public method and adds some functionality, like figuring out a filename, etc. @param targetDir: Directory into which backups should be written. @param compressMode: Compress mode to be used for backed-up files. @param user: User to use for connecting to the database (if any). @param password: Password associated with user (if any). @param backupUser: User to own resulting file. @param backupGroup: Group to own resulting file. @param database: Name of database, or C{None} for all databases. @return: Name of the generated backup file. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the MySQL dump. """ (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) try: backupDatabase(user, password, outputFile, database) finally: outputFile.close() if not os.path.exists(filename): raise IOError("Dump file [%s] does not seem to exist after backup completed." 
                    % filename)
   changeOwnership(filename, backupUser, backupGroup)

# pylint: disable=R0204
def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the MySQL dump.
   The filename is either C{"mysqldump.txt"} or C{"mysqldump-%s.txt"} (filled
   in with the database name).  A C{".gz"} or C{".bz2"} extension is added if
   C{compressMode} is C{"gzip"} or C{"bzip2"}, respectively.
   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.
   @return: Tuple of (Output file object, filename)
   """
   if database is None:
      filename = os.path.join(targetDir, "mysqldump.txt")
   else:
      filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "w")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "w")
   else:
      outputFile = open(filename, "w")
   logger.debug("MySQL dump file will be [%s].", filename)
   return (outputFile, filename)


############################
# backupDatabase() function
############################

def backupDatabase(user, password, backupFile, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This function backs up either a named local MySQL database or all local
   MySQL databases, using the passed-in user and password (if provided) for
   connectivity.  This function call I{always} results in a full backup.
   There is no facility for incremental backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller
   is responsible for closing the passed-in backup file.

   Often, the "root" database user will be used when backing up all
   databases.  An alternative is to create a separate MySQL "backup" user
   and grant that user rights to read (but not write) all of the databases
   that will be backed up.
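The filename selection above can be reproduced in isolation.  This is an illustrative stand-alone helper (not part of the extension) that mirrors how C{_getOutputFile} maps a database name and compress mode to a dump filename:

```python
import os

# Illustrative sketch, mirroring _getOutputFile(): choose the dump filename
# based on the database name and the configured compress mode.
def dumpFilename(targetDir, database, compressMode):
    """Return the path that a MySQL dump would be written to."""
    if database is None:
        filename = os.path.join(targetDir, "mysqldump.txt")
    else:
        filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
    if compressMode == "gzip":
        filename = "%s.gz" % filename
    elif compressMode == "bzip2":
        filename = "%s.bz2" % filename
    return filename

print(dumpFilename("/tmp", None, "none"))       # /tmp/mysqldump.txt
print(dumpFilename("/tmp", "orders", "bzip2"))  # /tmp/mysqldump-orders.txt.bz2
```

Note that the extension falls through to an uncompressed plain file for any compress mode other than "gzip" or "bzip2".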
This function accepts a username and password. However, you probably do not want to pass those values in. This is because they will be provided to C{mysqldump} via the command-line C{--user} and C{--password} switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, this would be done by putting a stanza like this in C{/root/.my.cnf}, to provide C{mysqldump} with the root database username and its password:: [mysqldump] user = root password = If you are executing this function as some system user other than root, then the C{.my.cnf} file would be placed in the home directory of that user. In either case, make sure to set restrictive permissions (typically, mode C{0600}) on C{.my.cnf} to make sure that other users cannot read the file. @param user: User to use for connecting to the database (if any) @type user: String representing MySQL username, or C{None} @param password: Password associated with user (if any) @type password: String representing MySQL password, or C{None} @param backupFile: File use for writing backup. @type backupFile: Python file object as from C{open()} or C{file()}. @param database: Name of the database to be backed up. @type database: String representing database name, or C{None} for all databases. @raise ValueError: If some value is missing or invalid. @raise IOError: If there is a problem executing the MySQL dump. 
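The advice above about restrictive permissions on C{.my.cnf} can be followed programmatically.  A minimal sketch, assuming a POSIX system; the path and credentials below are placeholders, and this helper is not part of Cedar Backup:

```python
import os
import stat
import tempfile

# Illustrative sketch: write a .my.cnf credentials stanza with restrictive
# permissions (mode 0600) so mysqldump can pick up credentials without them
# appearing on the command line.  Credentials here are dummy values.
def writeMyCnf(path, user, password):
    # Create the file with mode 0600 from the start, rather than chmod-ing
    # after the fact, so it is never momentarily world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write("[mysqldump]\nuser     = %s\npassword = %s\n" % (user, password))

path = os.path.join(tempfile.mkdtemp(), ".my.cnf")
writeMyCnf(path, "root", "secret")
print(stat.S_IMODE(os.stat(path).st_mode))  # owner read/write only
```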
""" args = [ "-all", "--flush-logs", "--opt", ] if user is not None: logger.warn("Warning: MySQL username will be visible in process listing (consider using ~/.my.cnf).") args.append("--user=%s" % user) if password is not None: logger.warn("Warning: MySQL password will be visible in process listing (consider using ~/.my.cnf).") args.append("--password=%s" % password) if database is None: args.insert(0, "--all-databases") else: args.insert(0, "--databases") args.append(database) command = resolveCommand(MYSQLDUMP_COMMAND) result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0] if result != 0: if database is None: raise IOError("Error [%d] executing MySQL database dump for all databases." % result) else: raise IOError("Error [%d] executing MySQL database dump for database [%s]." % (result, database)) CedarBackup2-2.26.5/CedarBackup2/extend/capacity.py0000664000175000017500000004637412560176173023470 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides an extension to check remaining media capacity. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides an extension to check remaining media capacity. Some users have asked for advance warning that their media is beginning to fill up. This is an extension that checks the current capacity of the media in the writer, and prints a warning if the media is more than X% full, or has fewer than X bytes of capacity remaining. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging # Cedar Backup modules from CedarBackup2.util import displayBytes from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode from CedarBackup2.xmlutil import readFirstChild, readString from CedarBackup2.actions.util import createWriter, checkMediaState ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.capacity") ######################################################################## # Percentage class definition ######################################################################## class PercentageQuantity(object): """ Class representing a percentage quantity. The percentage is maintained internally as a string so that issues of precision can be avoided. 
   It really isn't possible to store a floating point number here while
   being able to losslessly translate back and forth between XML and object
   representations.  (Perhaps the Python 2.4 Decimal class would have been
   an option, but I originally wanted to stay compatible with Python 2.3.)

   Even though the quantity is maintained as a string, the string must
   represent a valid positive floating point number.  Technically, any
   floating point string format supported by Python is allowable.  However,
   it does not make sense to have a negative percentage in this context.

   @sort: __init__, __repr__, __str__, __cmp__, quantity
   """

   def __init__(self, quantity=None):
      """
      Constructor for the C{PercentageQuantity} class.
      @param quantity: Percentage quantity, as a string (i.e. "99.9" or "12")
      @raise ValueError: If the quantity value is invalid.
      """
      self._quantity = None
      self.quantity = quantity

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PercentageQuantity(%s)" % (self.quantity)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.quantity != other.quantity:
         if self.quantity < other.quantity:
            return -1
         else:
            return 1
      return 0

   def _setQuantity(self, value):
      """
      Property target used to set the quantity.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
@raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Percentage must be a non-empty string.") floatValue = float(value) if floatValue < 0.0 or floatValue > 100.0: raise ValueError("Percentage must be a positive value from 0.0 to 100.0") self._quantity = value # keep around string def _getQuantity(self): """ Property target used to get the quantity. """ return self._quantity def _getPercentage(self): """ Property target used to get the quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned. """ if self.quantity is not None: return float(self.quantity) return 0.0 quantity = property(_getQuantity, _setQuantity, None, doc="Percentage value, as a string") percentage = property(_getPercentage, None, None, "Percentage value, as a floating point number.") ######################################################################## # CapacityConfig class definition ######################################################################## class CapacityConfig(object): """ Class representing capacity configuration. The following restrictions exist on data in this class: - The maximum percentage utilized must be a PercentageQuantity - The minimum bytes remaining must be a ByteQuantity @sort: __init__, __repr__, __str__, __cmp__, maxPercentage, minBytes """ def __init__(self, maxPercentage=None, minBytes=None): """ Constructor for the C{CapacityConfig} class. @param maxPercentage: Maximum percentage of the media that may be utilized @param minBytes: Minimum number of free bytes that must be available """ self._maxPercentage = None self._minBytes = None self.maxPercentage = maxPercentage self.minBytes = minBytes def __repr__(self): """ Official string representation for class instance. 
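The validation rule described above (a non-empty string that parses as a float between 0.0 and 100.0, stored as a string to avoid precision loss) can be sketched stand-alone.  C{validatePercentage} is a hypothetical helper for illustration, not part of the class:

```python
# Illustrative sketch of the checks performed by
# PercentageQuantity._setQuantity() on a candidate percentage string.
def validatePercentage(value):
    if len(value) < 1:
        raise ValueError("Percentage must be a non-empty string.")
    floatValue = float(value)  # raises ValueError for a non-numeric string
    if floatValue < 0.0 or floatValue > 100.0:
        raise ValueError("Percentage must be a positive value from 0.0 to 100.0")
    return value  # the string itself is kept, not the float

print(validatePercentage("99.9"))  # 99.9
```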
""" return "CapacityConfig(%s, %s)" % (self.maxPercentage, self.minBytes) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.maxPercentage != other.maxPercentage: if self.maxPercentage < other.maxPercentage: return -1 else: return 1 if self.minBytes != other.minBytes: if self.minBytes < other.minBytes: return -1 else: return 1 return 0 def _setMaxPercentage(self, value): """ Property target used to set the maxPercentage value. If not C{None}, the value must be a C{PercentageQuantity} object. @raise ValueError: If the value is not a C{PercentageQuantity} """ if value is None: self._maxPercentage = None else: if not isinstance(value, PercentageQuantity): raise ValueError("Value must be a C{PercentageQuantity} object.") self._maxPercentage = value def _getMaxPercentage(self): """ Property target used to get the maxPercentage value """ return self._maxPercentage def _setMinBytes(self, value): """ Property target used to set the bytes utilized value. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._minBytes = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._minBytes = value def _getMinBytes(self): """ Property target used to get the bytes remaining value. 
""" return self._minBytes maxPercentage = property(_getMaxPercentage, _setMaxPercentage, None, "Maximum percentage of the media that may be utilized.") minBytes = property(_getMinBytes, _setMinBytes, None, "Minimum number of free bytes that must be available.") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit specific configuration values to this extension. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, capacity, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. 
@param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. """ self._capacity = None self.capacity = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.capacity) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.capacity != other.capacity: if self.capacity < other.capacity: return -1 else: return 1 return 0 def _setCapacity(self, value): """ Property target used to set the capacity configuration value. If not C{None}, the value must be a C{CapacityConfig} object. @raise ValueError: If the value is not a C{CapacityConfig} """ if value is None: self._capacity = None else: if not isinstance(value, CapacityConfig): raise ValueError("Value must be a C{CapacityConfig} object.") self._capacity = value def _getCapacity(self): """ Property target used to get the capacity configuration value. 
""" return self._capacity capacity = property(_getCapacity, _setCapacity, None, "Capacity configuration in terms of a C{CapacityConfig} object.") def validate(self): """ Validates configuration represented by the object. THere must be either a percentage, or a byte capacity, but not both. @raise ValueError: If one of the validations fails. """ if self.capacity is None: raise ValueError("Capacity section is required.") if self.capacity.maxPercentage is None and self.capacity.minBytes is None: raise ValueError("Must provide either max percentage or min bytes.") if self.capacity.maxPercentage is not None and self.capacity.minBytes is not None: raise ValueError("Must provide either max percentage or min bytes, but not both.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: maxPercentage //cb_config/capacity/max_percentage minBytes //cb_config/capacity/min_bytes @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.capacity is not None: sectionNode = addContainerNode(xmlDom, parentNode, "capacity") LocalConfig._addPercentageQuantity(xmlDom, sectionNode, "max_percentage", self.capacity.maxPercentage) if self.capacity.minBytes is not None: # because utility function fills in empty section on None addByteQuantityNode(xmlDom, sectionNode, "min_bytes", self.capacity.minBytes) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the capacity configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. 
""" (xmlDom, parentNode) = createInputDom(xmlData) self._capacity = LocalConfig._parseCapacity(parentNode) @staticmethod def _parseCapacity(parentNode): """ Parses a capacity configuration section. We read the following fields:: maxPercentage //cb_config/capacity/max_percentage minBytes //cb_config/capacity/min_bytes @param parentNode: Parent node to search beneath. @return: C{CapacityConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ capacity = None section = readFirstChild(parentNode, "capacity") if section is not None: capacity = CapacityConfig() capacity.maxPercentage = LocalConfig._readPercentageQuantity(section, "max_percentage") capacity.minBytes = readByteQuantity(section, "min_bytes") return capacity @staticmethod def _readPercentageQuantity(parent, name): """ Read a percentage quantity value from an XML document. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: Percentage quantity parsed from XML document """ quantity = readString(parent, name) if quantity is None: return None return PercentageQuantity(quantity) @staticmethod def _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity): """ Adds a text node as the next child of a parent, to contain a percentage quantity. If the C{percentageQuantity} is None, then no node will be created. @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param percentageQuantity: PercentageQuantity object to put into the XML document @return: Reference to the newly-created node. 
""" if percentageQuantity is not None: addStringNode(xmlDom, parentNode, nodeName, percentageQuantity.quantity) ######################################################################## # Public functions ######################################################################## ########################### # executeAction() function ########################### def executeAction(configPath, options, config): """ Executes the capacity action. @param configPath: Path to configuration file on disk. @type configPath: String representing a path on disk. @param options: Program command-line options. @type options: Options object. @param config: Program configuration. @type config: Config object. @raise ValueError: Under many generic error conditions @raise IOError: If there are I/O problems reading or writing files """ logger.debug("Executing capacity extended action.") if config.options is None or config.store is None: raise ValueError("Cedar Backup configuration is not properly filled in.") local = LocalConfig(xmlPath=configPath) if config.store.checkMedia: checkMediaState(config.store) # raises exception if media is not initialized capacity = createWriter(config).retrieveCapacity() logger.debug("Media capacity: %s", capacity) if local.capacity.maxPercentage is not None: if capacity.utilized > local.capacity.maxPercentage.percentage: logger.error("Media has reached capacity limit of %s%%: %.2f%% utilized", local.capacity.maxPercentage.quantity, capacity.utilized) else: if capacity.bytesAvailable < local.capacity.minBytes: logger.error("Media has reached capacity limit of %s: only %s available", local.capacity.minBytes, displayBytes(capacity.bytesAvailable)) logger.info("Executed the capacity extended action successfully.") CedarBackup2-2.26.5/CedarBackup2/extend/amazons3.py0000664000175000017500000010177512642020136023407 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # 
# # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2014-2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Extensions # Purpose : "Store" type extension that writes data to Amazon S3. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Store-type extension that writes data to Amazon S3. This extension requires a new configuration section and is intended to be run immediately after the standard stage action, replacing the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. Since it is intended to replace the store action, it does not rely on any store configuration. The underlying functionality relies on the U{AWS CLI interface }. Before you use this extension, you need to set up your Amazon S3 account and configure the AWS CLI connection per Amazon's documentation. 
The extension assumes that the backup is being executed as root, and switches over to the configured backup user to communicate with AWS. So, make sure you configure AWS CLI as the backup user and not root. You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the C{${input}} and C{${output}} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user. For instance, you can use something like this with GPG:: /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input} The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:: dd if=/dev/urandom count=20 bs=1 | xxd -ps (See U{StackExchange } for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user. This extension was written for and tested on Linux. It will throw an exception if run on Windows. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import sys import os import logging import tempfile import datetime import json import shutil # Cedar Backup modules from CedarBackup2.filesystem import FilesystemList, BackupFileList from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot, changeOwnership, isStartOfWeek from CedarBackup2.util import displayBytes, UNIT_BYTES from CedarBackup2.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode from CedarBackup2.xmlutil import readFirstChild, readString, readBoolean from CedarBackup2.actions.util import writeIndicatorFile from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.extend.amazons3") SU_COMMAND = [ "su" ] AWS_COMMAND = [ "aws" ] STORE_INDICATOR = "cback.amazons3" ######################################################################## # AmazonS3Config class definition ######################################################################## class AmazonS3Config(object): """ Class representing Amazon S3 configuration. Amazon S3 configuration is used for storing backup data in Amazon's S3 cloud storage using the C{s3cmd} tool. 
The following restrictions exist on data in this class: - The s3Bucket value must be a non-empty string - The encryptCommand value, if set, must be a non-empty string - The full backup size limit, if set, must be a ByteQuantity >= 0 - The incremental backup size limit, if set, must be a ByteQuantity >= 0 @sort: __init__, __repr__, __str__, __cmp__, warnMidnite, s3Bucket """ def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, fullBackupSizeLimit=None, incrementalBackupSizeLimit=None): """ Constructor for the C{AmazonS3Config} class. @param warnMidnite: Whether to generate warnings for crossing midnite. @param s3Bucket: Name of the Amazon S3 bucket in which to store the data @param encryptCommand: Command used to encrypt backup data before upload to S3 @param fullBackupSizeLimit: Maximum size of a full backup, a ByteQuantity @param incrementalBackupSizeLimit: Maximum size of an incremental backup, a ByteQuantity @raise ValueError: If one of the values is invalid. """ self._warnMidnite = None self._s3Bucket = None self._encryptCommand = None self._fullBackupSizeLimit = None self._incrementalBackupSizeLimit = None self.warnMidnite = warnMidnite self.s3Bucket = s3Bucket self.encryptCommand = encryptCommand self.fullBackupSizeLimit = fullBackupSizeLimit self.incrementalBackupSizeLimit = incrementalBackupSizeLimit def __repr__(self): """ Official string representation for class instance. """ return "AmazonS3Config(%s, %s, %s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand, self.fullBackupSizeLimit, self.incrementalBackupSizeLimit) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.warnMidnite != other.warnMidnite: if self.warnMidnite < other.warnMidnite: return -1 else: return 1 if self.s3Bucket != other.s3Bucket: if self.s3Bucket < other.s3Bucket: return -1 else: return 1 if self.encryptCommand != other.encryptCommand: if self.encryptCommand < other.encryptCommand: return -1 else: return 1 if self.fullBackupSizeLimit != other.fullBackupSizeLimit: if self.fullBackupSizeLimit < other.fullBackupSizeLimit: return -1 else: return 1 if self.incrementalBackupSizeLimit != other.incrementalBackupSizeLimit: if self.incrementalBackupSizeLimit < other.incrementalBackupSizeLimit: return -1 else: return 1 return 0 def _setWarnMidnite(self, value): """ Property target used to set the midnite warning flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._warnMidnite = True else: self._warnMidnite = False def _getWarnMidnite(self): """ Property target used to get the midnite warning flag. """ return self._warnMidnite def _setS3Bucket(self, value): """ Property target used to set the S3 bucket. """ if value is not None: if len(value) < 1: raise ValueError("S3 bucket must be non-empty string.") self._s3Bucket = value def _getS3Bucket(self): """ Property target used to get the S3 bucket. """ return self._s3Bucket def _setEncryptCommand(self, value): """ Property target used to set the encrypt command. """ if value is not None: if len(value) < 1: raise ValueError("Encrypt command must be non-empty string.") self._encryptCommand = value def _getEncryptCommand(self): """ Property target used to get the encrypt command. """ return self._encryptCommand def _setFullBackupSizeLimit(self, value): """ Property target used to set the full backup size limit. The value must be an integer >= 0. @raise ValueError: If the value is not valid. 
""" if value is None: self._fullBackupSizeLimit = None else: if isinstance(value, ByteQuantity): self._fullBackupSizeLimit = value else: self._fullBackupSizeLimit = ByteQuantity(value, UNIT_BYTES) def _getFullBackupSizeLimit(self): """ Property target used to get the full backup size limit. """ return self._fullBackupSizeLimit def _setIncrementalBackupSizeLimit(self, value): """ Property target used to set the incremental backup size limit. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._incrementalBackupSizeLimit = None else: if isinstance(value, ByteQuantity): self._incrementalBackupSizeLimit = value else: self._incrementalBackupSizeLimit = ByteQuantity(value, UNIT_BYTES) def _getIncrementalBackupSizeLimit(self): """ Property target used to get the incremental backup size limit. """ return self._incrementalBackupSizeLimit warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket in which to store data") encryptCommand = property(_getEncryptCommand, _setEncryptCommand, None, doc="Command used to encrypt data before upload to S3") fullBackupSizeLimit = property(_getFullBackupSizeLimit, _setFullBackupSizeLimit, None, doc="Maximum size of a full backup, as a ByteQuantity") incrementalBackupSizeLimit = property(_getIncrementalBackupSizeLimit, _setIncrementalBackupSizeLimit, None, doc="Maximum size of an incremental backup, as a ByteQuantity") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. 
   Instead, it just knows how to parse and emit amazons3-specific
   configuration values.  Third parties who need to read and write
   configuration related to this extension should access it through the
   constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, amazons3, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath} then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not) this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be
      set to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
""" self._amazons3 = None self.amazons3 = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() def __repr__(self): """ Official string representation for class instance. """ return "LocalConfig(%s)" % (self.amazons3) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.amazons3 != other.amazons3: if self.amazons3 < other.amazons3: return -1 else: return 1 return 0 def _setAmazonS3(self, value): """ Property target used to set the amazons3 configuration value. If not C{None}, the value must be a C{AmazonS3Config} object. @raise ValueError: If the value is not a C{AmazonS3Config} """ if value is None: self._amazons3 = None else: if not isinstance(value, AmazonS3Config): raise ValueError("Value must be a C{AmazonS3Config} object.") self._amazons3 = value def _getAmazonS3(self): """ Property target used to get the amazons3 configuration value. """ return self._amazons3 amazons3 = property(_getAmazonS3, _setAmazonS3, None, "AmazonS3 configuration in terms of a C{AmazonS3Config} object.") def validate(self): """ Validates configuration represented by the object. AmazonS3 configuration must be filled in. Within that, the s3Bucket target must be filled in @raise ValueError: If one of the validations fails. 
""" if self.amazons3 is None: raise ValueError("AmazonS3 section is required.") if self.amazons3.s3Bucket is None: raise ValueError("AmazonS3 s3Bucket must be set.") def addConfig(self, xmlDom, parentNode): """ Adds an configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: warnMidnite //cb_config/amazons3/warn_midnite s3Bucket //cb_config/amazons3/s3_bucket encryptCommand //cb_config/amazons3/encrypt fullBackupSizeLimit //cb_config/amazons3/full_size_limit incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.amazons3 is not None: sectionNode = addContainerNode(xmlDom, parentNode, "amazons3") addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite) addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket) addStringNode(xmlDom, sectionNode, "encrypt", self.amazons3.encryptCommand) addByteQuantityNode(xmlDom, sectionNode, "full_size_limit", self.amazons3.fullBackupSizeLimit) addByteQuantityNode(xmlDom, sectionNode, "incr_size_limit", self.amazons3.incrementalBackupSizeLimit) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the amazons3 configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._amazons3 = LocalConfig._parseAmazonS3(parentNode) @staticmethod def _parseAmazonS3(parent): """ Parses an amazons3 configuration section. 
      We read the following individual fields::

         warnMidnite                //cb_config/amazons3/warn_midnite
         s3Bucket                   //cb_config/amazons3/s3_bucket
         encryptCommand             //cb_config/amazons3/encrypt
         fullBackupSizeLimit        //cb_config/amazons3/full_size_limit
         incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit

      @param parent: Parent node to search beneath.

      @return: C{AmazonS3Config} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      amazons3 = None
      section = readFirstChild(parent, "amazons3")
      if section is not None:
         amazons3 = AmazonS3Config()
         amazons3.warnMidnite = readBoolean(section, "warn_midnite")
         amazons3.s3Bucket = readString(section, "s3_bucket")
         amazons3.encryptCommand = readString(section, "encrypt")
         amazons3.fullBackupSizeLimit = readByteQuantity(section, "full_size_limit")
         amazons3.incrementalBackupSizeLimit = readByteQuantity(section, "incr_size_limit")
      return amazons3


########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the amazons3 backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.
   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are I/O problems reading or writing files
   """
   logger.debug("Executing amazons3 extended action.")
   if not isRunningAsRoot():
      logger.error("Error: the amazons3 extended action must be run as root.")
      raise ValueError("The amazons3 extended action must be run as root.")
   if sys.platform == "win32":
      logger.error("Error: the amazons3 extended action is not supported on Windows.")
      raise ValueError("The amazons3 extended action is not supported on Windows.")
   if config.options is None or config.stage is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   stagingDirs = _findCorrectDailyDir(options, config, local)
   _applySizeLimits(options, config, local, stagingDirs)
   _writeToAmazonS3(config, local, stagingDirs)
   _writeStoreIndicator(config, stagingDirs)
   logger.info("Executed the amazons3 extended action successfully.")


########################################################################
# Private utility functions
########################################################################

#########################
# _findCorrectDailyDir()
#########################

def _findCorrectDailyDir(options, config, local):
   """
   Finds the correct daily staging directory to be written to Amazon S3.

   This is substantially similar to the same function in store.py.  The main
   difference is that it doesn't rely on store configuration at all.

   @param options: Options object.
   @param config: Config object.
   @param local: Local config object.

   @return: Correct staging dir, as a dict mapping directory to date suffix.
   @raise IOError: If the staging directory cannot be found.
""" oneDay = datetime.timedelta(days=1) today = datetime.date.today() yesterday = today - oneDay tomorrow = today + oneDay todayDate = today.strftime(DIR_TIME_FORMAT) yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT) tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT) todayPath = os.path.join(config.stage.targetDir, todayDate) yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate) tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate) todayStageInd = os.path.join(todayPath, STAGE_INDICATOR) yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR) tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR) todayStoreInd = os.path.join(todayPath, STORE_INDICATOR) yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR) tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR) if options.full: if os.path.isdir(todayPath) and os.path.exists(todayStageInd): logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath) return { todayPath:todayDate } raise IOError("Unable to find staging directory to process (only tried today due to full option).") else: if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd): logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath) return { todayPath:todayDate } elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd): logger.info("Amazon S3 process will use previous day's staging directory [%s]", yesterdayPath) if local.amazons3.warnMidnite: logger.warn("Warning: Amazon S3 process crossed midnite boundary to find data.") return { yesterdayPath:yesterdayDate } elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd): logger.info("Amazon S3 process will use next day's staging directory [%s]", tomorrowPath) if local.amazons3.warnMidnite: logger.warn("Warning: Amazon S3 process 
crossed midnite boundary to find data.") return { tomorrowPath:tomorrowDate } raise IOError("Unable to find unused staging directory to process (tried today, yesterday, tomorrow).") ############################## # _applySizeLimits() function ############################## def _applySizeLimits(options, config, local, stagingDirs): """ Apply size limits, throwing an exception if any limits are exceeded. Size limits are optional. If a limit is set to None, it does not apply. The full size limit applies if the full option is set or if today is the start of the week. The incremental size limit applies otherwise. Limits are applied to the total size of all the relevant staging directories. @param options: Options object. @param config: Config object. @param local: Local config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: Under many generic error conditions @raise ValueError: If a size limit has been exceeded """ if options.full or isStartOfWeek(config.options.startingDay): logger.debug("Using Amazon S3 size limit for full backups.") limit = local.amazons3.fullBackupSizeLimit else: logger.debug("Using Amazon S3 size limit for incremental backups.") limit = local.amazons3.incrementalBackupSizeLimit if limit is None: logger.debug("No Amazon S3 size limit will be applied.") else: logger.debug("Amazon S3 size limit is: %s", limit) contents = BackupFileList() for stagingDir in stagingDirs: contents.addDirContents(stagingDir) total = contents.totalSize() logger.debug("Amazon S3 backup size is: %s", displayBytes(total)) if total > limit.bytes: logger.error("Amazon S3 size limit exceeded: %s > %s", displayBytes(total), limit) raise ValueError("Amazon S3 size limit exceeded: %s > %s" % (displayBytes(total), limit)) else: logger.info("Total size does not exceed Amazon S3 size limit, so backup can continue.") ############################## # _writeToAmazonS3() function ############################## def _writeToAmazonS3(config, 
local, stagingDirs): """ Writes the indicated staging directories to an Amazon S3 bucket. Each of the staging directories listed in C{stagingDirs} will be written to the configured Amazon S3 bucket from local configuration. The directories will be placed into the image at the root by date, so staging directory C{/opt/stage/2005/02/10} will be placed into the S3 bucket at C{/2005/02/10}. If an encrypt commmand is provided, the files will be encrypted first. @param config: Config object. @param local: Local config object. @param stagingDirs: Dictionary mapping directory path to date suffix. @raise ValueError: Under many generic error conditions @raise IOError: If there is a problem writing to Amazon S3 """ for stagingDir in stagingDirs.keys(): logger.debug("Storing stage directory to Amazon S3 [%s].", stagingDir) dateSuffix = stagingDirs[stagingDir] s3BucketUrl = "s3://%s/%s" % (local.amazons3.s3Bucket, dateSuffix) logger.debug("S3 bucket URL is [%s]", s3BucketUrl) _clearExistingBackup(config, s3BucketUrl) if local.amazons3.encryptCommand is None: logger.debug("Encryption is disabled; files will be uploaded in cleartext.") _uploadStagingDir(config, stagingDir, s3BucketUrl) _verifyUpload(config, stagingDir, s3BucketUrl) else: logger.debug("Encryption is enabled; files will be uploaded after being encrypted.") encryptedDir = tempfile.mkdtemp(dir=config.options.workingDir) changeOwnership(encryptedDir, config.options.backupUser, config.options.backupGroup) try: _encryptStagingDir(config, local, stagingDir, encryptedDir) _uploadStagingDir(config, encryptedDir, s3BucketUrl) _verifyUpload(config, encryptedDir, s3BucketUrl) finally: if os.path.exists(encryptedDir): shutil.rmtree(encryptedDir) ################################## # _writeStoreIndicator() function ################################## def _writeStoreIndicator(config, stagingDirs): """ Writes a store indicator file into staging directories. @param config: Config object. 
   @param stagingDirs: Dictionary mapping directory path to date suffix.
   """
   for stagingDir in stagingDirs.keys():
      writeIndicatorFile(stagingDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup)


##################################
# _clearExistingBackup() function
##################################

def _clearExistingBackup(config, s3BucketUrl):
   """
   Clear any existing backup files for an S3 bucket URL.

   @param config: Config object.
   @param s3BucketUrl: S3 bucket URL associated with the staging directory
   """
   suCommand = resolveCommand(SU_COMMAND)
   awsCommand = resolveCommand(AWS_COMMAND)
   actualCommand = "%s s3 rm --recursive %s/" % (awsCommand[0], s3BucketUrl)
   result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI to clear existing backup for [%s]." % (result, s3BucketUrl))
   logger.debug("Completed clearing any existing backup in S3 for [%s]", s3BucketUrl)


###############################
# _uploadStagingDir() function
###############################

def _uploadStagingDir(config, stagingDir, s3BucketUrl):
   """
   Upload the contents of a staging directory out to the Amazon S3 cloud.

   @param config: Config object.
   @param stagingDir: Staging directory to upload
   @param s3BucketUrl: S3 bucket URL associated with the staging directory
   """
   suCommand = resolveCommand(SU_COMMAND)
   awsCommand = resolveCommand(AWS_COMMAND)
   actualCommand = "%s s3 cp --recursive %s/ %s/" % (awsCommand[0], stagingDir, s3BucketUrl)
   result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI to upload staging directory to [%s]." % (result, s3BucketUrl))
   logger.debug("Completed uploading staging dir [%s] to [%s]", stagingDir, s3BucketUrl)


###########################
# _verifyUpload() function
###########################

def _verifyUpload(config, stagingDir, s3BucketUrl):
   """
   Verify that a staging directory was properly uploaded to the Amazon S3 cloud.

   @param config: Config object.
   @param stagingDir: Staging directory to verify
   @param s3BucketUrl: S3 bucket URL associated with the staging directory
   """
   (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1)
   suCommand = resolveCommand(SU_COMMAND)
   awsCommand = resolveCommand(AWS_COMMAND)
   query = "Contents[].{Key: Key, Size: Size}"
   actualCommand = "%s s3api list-objects --bucket %s --prefix %s --query '%s'" % (awsCommand[0], bucket, prefix, query)
   (result, data) = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand], returnOutput=True)
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI to verify upload to [%s]." % (result, s3BucketUrl))
   contents = { }
   for entry in json.loads("".join(data)):
      key = entry["Key"].replace(prefix, "")
      size = long(entry["Size"])
      contents[key] = size
   files = FilesystemList()
   files.addDirContents(stagingDir)
   for entry in files:
      if os.path.isfile(entry):
         key = entry.replace(stagingDir, "")
         size = long(os.stat(entry).st_size)
         if key not in contents:
            raise IOError("File was apparently not uploaded: [%s]" % entry)
         else:
            if size != contents[key]:
               raise IOError("File size differs [%s], expected %s bytes but got %s bytes" % (entry, size, contents[key]))
   logger.debug("Completed verifying upload from [%s] to [%s].", stagingDir, s3BucketUrl)


################################
# _encryptStagingDir() function
################################

def _encryptStagingDir(config, local, stagingDir, encryptedDir):
   """
   Encrypt a staging directory, creating a new directory in the process.

   @param config: Config object.
   @param local: Local config object.
   @param stagingDir: Staging directory to use as source
   @param encryptedDir: Target directory into which encrypted files should be written
   """
   suCommand = resolveCommand(SU_COMMAND)
   files = FilesystemList()
   files.addDirContents(stagingDir)
   for cleartext in files:
      if os.path.isfile(cleartext):
         encrypted = "%s%s" % (encryptedDir, cleartext.replace(stagingDir, ""))
         if long(os.stat(cleartext).st_size) == 0:
            open(encrypted, 'a').close()  # don't bother encrypting empty files
         else:
            actualCommand = local.amazons3.encryptCommand.replace("${input}", cleartext).replace("${output}", encrypted)
            subdir = os.path.dirname(encrypted)
            if not os.path.isdir(subdir):
               os.makedirs(subdir)
               changeOwnership(subdir, config.options.backupUser, config.options.backupGroup)
            result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
            if result != 0:
               raise IOError("Error [%d] encrypting [%s]." % (result, cleartext))
   logger.debug("Completed encrypting staging directory [%s] into [%s]", stagingDir, encryptedDir)

CedarBackup2-2.26.5/CedarBackup2/extend/split.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010,2013 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Official Cedar Backup Extensions
# Purpose  : Provides an extension to split up large files in staging directories.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides an extension to split up large files in staging directories.

When this extension is executed, it will look through the configured Cedar
Backup staging directory for files exceeding a specified size limit, and
split them down into smaller files using the 'split' utility.  Any directory
which has already been split (as indicated by the C{cback.split} file) will
be ignored.

This extension requires a new configuration section and is intended to be run
immediately after the standard stage action or immediately before the
standard store action.  Aside from its own configuration, it requires the
options and staging configuration sections in the standard Cedar Backup
configuration file.

@author: Kenneth J. Pronovici
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import logging

# Cedar Backup modules
from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership
from CedarBackup2.xmlutil import createInputDom, addContainerNode
from CedarBackup2.xmlutil import readFirstChild
from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles
from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode

########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.extend.split")
SPLIT_COMMAND = [ "split", ]
SPLIT_INDICATOR = "cback.split"

########################################################################
# SplitConfig class definition
########################################################################

class SplitConfig(object):
   """
   Class representing split configuration.

   Split configuration is used for splitting staging directories.

   The following restrictions exist on data in this class:
      - The size limit must be a ByteQuantity
      - The split size must be a ByteQuantity

   @sort: __init__, __repr__, __str__, __cmp__, sizeLimit, splitSize
   """

   def __init__(self, sizeLimit=None, splitSize=None):
      """
      Constructor for the C{SplitConfig} class.

      @param sizeLimit: Size limit of the files, in bytes
      @param splitSize: Size that files exceeding the limit will be split into, in bytes

      @raise ValueError: If one of the values is invalid.
      """
      self._sizeLimit = None
      self._splitSize = None
      self.sizeLimit = sizeLimit
      self.splitSize = splitSize

   def __repr__(self):
      """
      Official string representation for class instance.
""" return "SplitConfig(%s, %s)" % (self.sizeLimit, self.splitSize) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.sizeLimit != other.sizeLimit: if self.sizeLimit < other.sizeLimit: return -1 else: return 1 if self.splitSize != other.splitSize: if self.splitSize < other.splitSize: return -1 else: return 1 return 0 def _setSizeLimit(self, value): """ Property target used to set the size limit. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._sizeLimit = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._sizeLimit = value def _getSizeLimit(self): """ Property target used to get the size limit. """ return self._sizeLimit def _setSplitSize(self, value): """ Property target used to set the split size. If not C{None}, the value must be a C{ByteQuantity} object. @raise ValueError: If the value is not a C{ByteQuantity} """ if value is None: self._splitSize = None else: if not isinstance(value, ByteQuantity): raise ValueError("Value must be a C{ByteQuantity} object.") self._splitSize = value def _getSplitSize(self): """ Property target used to get the split size. 
""" return self._splitSize sizeLimit = property(_getSizeLimit, _setSizeLimit, None, doc="Size limit, as a ByteQuantity") splitSize = property(_getSplitSize, _setSplitSize, None, doc="Split size, as a ByteQuantity") ######################################################################## # LocalConfig class definition ######################################################################## class LocalConfig(object): """ Class representing this extension's configuration document. This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit split-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, C{validate} and C{addConfig} methods. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, split, validate, addConfig """ def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath} then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. 
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._split = None
      self.split = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.split)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.split != other.split:
         if self.split < other.split:
            return -1
         else:
            return 1
      return 0

   def _setSplit(self, value):
      """
      Property target used to set the split configuration value.
      If not C{None}, the value must be a C{SplitConfig} object.
      @raise ValueError: If the value is not a C{SplitConfig}
      """
      if value is None:
         self._split = None
      else:
         if not isinstance(value, SplitConfig):
            raise ValueError("Value must be a C{SplitConfig} object.")
         self._split = value

   def _getSplit(self):
      """
      Property target used to get the split configuration value.
""" return self._split split = property(_getSplit, _setSplit, None, "Split configuration in terms of a C{SplitConfig} object.") def validate(self): """ Validates configuration represented by the object. Split configuration must be filled in. Within that, both the size limit and split size must be filled in. @raise ValueError: If one of the validations fails. """ if self.split is None: raise ValueError("Split section is required.") if self.split.sizeLimit is None: raise ValueError("Size limit must be set.") if self.split.splitSize is None: raise ValueError("Split size must be set.") def addConfig(self, xmlDom, parentNode): """ Adds a configuration section as the next child of a parent. Third parties should use this function to write configuration related to this extension. We add the following fields to the document:: sizeLimit //cb_config/split/size_limit splitSize //cb_config/split/split_size @param xmlDom: DOM tree as from C{impl.createDocument()}. @param parentNode: Parent that the section should be appended to. """ if self.split is not None: sectionNode = addContainerNode(xmlDom, parentNode, "split") addByteQuantityNode(xmlDom, sectionNode, "size_limit", self.split.sizeLimit) addByteQuantityNode(xmlDom, sectionNode, "split_size", self.split.splitSize) def _parseXmlData(self, xmlData): """ Internal method to parse an XML string into the object. This method parses the XML document into a DOM tree (C{xmlDom}) and then calls a static method to parse the split configuration section. @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._split = LocalConfig._parseSplit(parentNode) @staticmethod def _parseSplit(parent): """ Parses an split configuration section. We read the following individual fields:: sizeLimit //cb_config/split/size_limit splitSize //cb_config/split/split_size @param parent: Parent node to search beneath. 
      @return: C{SplitConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      split = None
      section = readFirstChild(parent, "split")
      if section is not None:
         split = SplitConfig()
         split.sizeLimit = readByteQuantity(section, "size_limit")
         split.splitSize = readByteQuantity(section, "split_size")
      return split


########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the split backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are I/O problems reading or writing files
   """
   logger.debug("Executing split extended action.")
   if config.options is None or config.stage is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   dailyDirs = findDailyDirs(config.stage.targetDir, SPLIT_INDICATOR)
   for dailyDir in dailyDirs:
      _splitDailyDir(dailyDir, local.split.sizeLimit, local.split.splitSize,
                     config.options.backupUser, config.options.backupGroup)
      writeIndicatorFile(dailyDir, SPLIT_INDICATOR, config.options.backupUser, config.options.backupGroup)
   logger.info("Executed the split extended action successfully.")


##############################
# _splitDailyDir() function
##############################

def _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup):
   """
   Splits large files in a daily staging directory.

   Files that match INDICATOR_PATTERNS (i.e. C{"cback.store"}, C{"cback.stage"}, etc.)
are assumed to be indicator files and are ignored. All other files are split. @param dailyDir: Daily directory to split @param sizeLimit: Size limit, in bytes @param splitSize: Split size, in bytes @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @raise ValueError: If the daily staging directory does not exist. """ logger.debug("Begin splitting contents of [%s].", dailyDir) fileList = getBackupFiles(dailyDir) # ignores indicator files for path in fileList: size = float(os.stat(path).st_size) if size > sizeLimit: _splitFile(path, splitSize, backupUser, backupGroup, removeSource=True) logger.debug("Completed splitting contents of [%s].", dailyDir) ######################## # _splitFile() function ######################## def _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False): """ Splits the source file into chunks of the indicated size. The split files will be owned by the indicated backup user and group. If C{removeSource} is C{True}, then the source file will be removed after it is successfully split. @param sourcePath: Absolute path of the source file to split @param splitSize: Split size, as a C{ByteQuantity} @param backupUser: User that target files should be owned by @param backupGroup: Group that target files should be owned by @param removeSource: Indicates whether to remove the source file @raise IOError: If there is a problem accessing, splitting or removing the source file. """ cwd = os.getcwd() try: if not os.path.exists(sourcePath): raise ValueError("Source path [%s] does not exist." 
% sourcePath) dirname = os.path.dirname(sourcePath) filename = os.path.basename(sourcePath) prefix = "%s_" % filename bytes = int(splitSize.bytes) # pylint: disable=W0622 os.chdir(dirname) # need to operate from directory that we want files written to command = resolveCommand(SPLIT_COMMAND) args = [ "--verbose", "--numeric-suffixes", "--suffix-length=5", "--bytes=%d" % bytes, filename, prefix, ] (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False) if result != 0: raise IOError("Error [%d] calling split for [%s]." % (result, sourcePath)) pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix) match = pattern.search(output[-1:][0]) if match is None: raise IOError("Unable to parse output from split command.") value = int(match.group(3).strip()) for index in range(0, value + 1): # suffixes run from 0 through value, inclusive path = "%s%05d" % (prefix, index) if not os.path.exists(path): raise IOError("After call to split, expected file [%s] does not exist." % path) changeOwnership(path, backupUser, backupGroup) if removeSource: if os.path.exists(sourcePath): try: os.remove(sourcePath) logger.debug("Completed removing old file [%s].", sourcePath) except Exception: raise IOError("Failed to remove file [%s] after splitting it." % (sourcePath)) finally: os.chdir(cwd) CedarBackup2-2.26.5/CedarBackup2/action.py0000664000175000017500000000321412560016766021644 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides implementation of various backup-related actions. 
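The output parsing in C{_splitFile()} above depends on the exact "creating file" message that GNU C{split --verbose} prints for each chunk: older coreutils quote the name with a backtick/apostrophe pair, newer releases with two apostrophes (the v2.21.1 changelog entry covers this). A minimal standalone sketch of that parsing, using a hypothetical C{backup.tar_} prefix:

```python
import re

def lastChunkIndex(outputLines, prefix):
   """Returns the numeric suffix of the last chunk reported by split --verbose."""
   # The character class [`'] accepts either quoting convention; group 3
   # captures everything between the prefix and the closing apostrophe.
   pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix)
   match = pattern.search(outputLines[-1])
   if match is None:
      raise IOError("Unable to parse output from split command.")
   return int(match.group(3).strip())
```

Since numeric suffixes start at zero, a last index of C{N} implies C{N+1} chunk files on disk.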
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code for the standard actions. The code formerly in action.py was split into various other files in the CedarBackup2.actions package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # pylint: disable=W0611 from CedarBackup2.actions.collect import executeCollect from CedarBackup2.actions.stage import executeStage from CedarBackup2.actions.store import executeStore from CedarBackup2.actions.purge import executePurge from CedarBackup2.actions.rebuild import executeRebuild from CedarBackup2.actions.validate import executeValidate CedarBackup2-2.26.5/CedarBackup2/writer.py0000664000175000017500000000301412560016766021701 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides interface backwards compatibility. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. 
In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # pylint: disable=W0611 from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed from CedarBackup2.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter from CedarBackup2.writers.cdwriter import MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 CedarBackup2-2.26.5/CedarBackup2/__init__.py0000664000175000017500000000404412560016766022130 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements local and remote backups to CD or DVD media. Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. 
Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2 import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'actions', 'cli', 'config', 'extend', 'filesystem', 'knapsack', 'peer', 'release', 'tools', 'util', 'writers', ] CedarBackup2-2.26.5/CedarBackup2/filesystem.py0000664000175000017500000017225112562376524022566 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides filesystem-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides filesystem-related objects. @sort: FilesystemList, BackupFileList, PurgeItemList @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import math import logging import tarfile # Cedar Backup modules from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit from CedarBackup2.util import AbsolutePathList, UnorderedList, RegexList from CedarBackup2.util import removeKeys, displayBytes, calculateFileAge, encodePath, dereferenceLink ######################################################################## # Module-wide variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.filesystem") ######################################################################## # FilesystemList class definition ######################################################################## class FilesystemList(list): ###################### # Class documentation ###################### """ Represents a list of filesystem items. This is a generic class that represents a list of filesystem items. Callers can add individual files or directories to the list, or can recursively add the contents of a directory. 
The class also allows for up-front exclusions in several forms (all files, all directories, all items matching a pattern, all items whose basename matches a pattern, or all directories containing a specific "ignore file"). Symbolic links are typically backed up non-recursively, i.e. the link to a directory is backed up, but not the contents of that link (we don't want to deal with recursive loops, etc.). The custom methods such as L{addFile} will only add items if they exist on the filesystem and do not match any exclusions that are already in place. However, since a FilesystemList is a subclass of Python's standard list class, callers can also add items to the list in the usual way, using methods like C{append()} or C{insert()}. No validations apply to items added to the list in this way; however, many list-manipulation methods deal "gracefully" with items that don't exist in the filesystem, often by ignoring them. Once a list has been created, callers can remove individual items from the list using standard methods like C{pop()} or C{remove()} or they can use custom methods to remove specific types of entries or entries which match a particular pattern. @note: Regular expression patterns that apply to paths are assumed to be bounded at front and back by the beginning and end of the string, i.e. they are treated as if they begin with C{^} and end with C{$}. This is true whether we are matching a complete path or a basename. @note: Some platforms, like Windows, do not support soft links. On those platforms, the ignore-soft-links flag can be set, but it won't do any good because the operating system never reports a file as a soft link. 
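The anchored pattern matching described in the note above can be illustrated with a small stdlib-only sketch (the paths and pattern here are hypothetical, not taken from this library):

```python
import re

def isExcluded(path, excludePatterns):
   """Mimics the anchored matching used by the exclude lists."""
   for pattern in excludePatterns:
      # Patterns are treated as if they begin with ^ and end with $, so a
      # pattern must match the complete path, not just a substring of it.
      if re.compile(r"^%s$" % pattern).match(path):
         return True
   return False
```

So a pattern like C{/home/.*/tmp} excludes C{/home/user/tmp} itself, but does not exclude C{/home/user/tmp/file}, even though an unanchored search would find the pattern inside that longer path.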
@sort: __init__, addFile, addDir, addDirContents, removeFiles, removeDirs, removeLinks, removeMatch, removeInvalid, normalize, excludeFiles, excludeDirs, excludeLinks, excludePaths, excludePatterns, excludeBasenamePatterns, ignoreFile """ ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" list.__init__(self) self._excludeFiles = False self._excludeDirs = False self._excludeLinks = False self._excludePaths = None self._excludePatterns = None self._excludeBasenamePatterns = None self._ignoreFile = None self.excludeFiles = False self.excludeLinks = False self.excludeDirs = False self.excludePaths = [] self.excludePatterns = RegexList() self.excludeBasenamePatterns = RegexList() self.ignoreFile = None ############# # Properties ############# def _setExcludeFiles(self, value): """ Property target used to set the exclude files flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._excludeFiles = True else: self._excludeFiles = False def _getExcludeFiles(self): """ Property target used to get the exclude files flag. """ return self._excludeFiles def _setExcludeDirs(self, value): """ Property target used to set the exclude directories flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._excludeDirs = True else: self._excludeDirs = False def _getExcludeDirs(self): """ Property target used to get the exclude directories flag. """ return self._excludeDirs def _setExcludeLinks(self, value): """ Property target used to set the exclude soft links flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._excludeLinks = True else: self._excludeLinks = False def _getExcludeLinks(self): """ Property target used to get the exclude soft links flag. """ return self._excludeLinks def _setExcludePaths(self, value): """ Property target used to set the exclude paths list. 
A C{None} value is converted to an empty list. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If any list element is not an absolute path. """ self._excludePaths = AbsolutePathList() if value is not None: self._excludePaths.extend(value) def _getExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._excludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. A C{None} value is converted to an empty list. """ self._excludePatterns = RegexList() if value is not None: self._excludePatterns.extend(value) def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns def _setExcludeBasenamePatterns(self, value): """ Property target used to set the exclude basename patterns list. A C{None} value is converted to an empty list. """ self._excludeBasenamePatterns = RegexList() if value is not None: self._excludeBasenamePatterns.extend(value) def _getExcludeBasenamePatterns(self): """ Property target used to get the exclude basename patterns list. """ return self._excludeBasenamePatterns def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = value def _getIgnoreFile(self): """ Property target used to get the ignore file. 
""" return self._ignoreFile excludeFiles = property(_getExcludeFiles, _setExcludeFiles, None, "Boolean indicating whether files should be excluded.") excludeDirs = property(_getExcludeDirs, _setExcludeDirs, None, "Boolean indicating whether directories should be excluded.") excludeLinks = property(_getExcludeLinks, _setExcludeLinks, None, "Boolean indicating whether soft links should be excluded.") excludePaths = property(_getExcludePaths, _setExcludePaths, None, "List of absolute paths to be excluded.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns (matching complete path) to be excluded.") excludeBasenamePatterns = property(_getExcludeBasenamePatterns, _setExcludeBasenamePatterns, None, "List of regular expression patterns (matching basename) to be excluded.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Name of file which will cause directory contents to be ignored.") ############## # Add methods ############## def addFile(self, path): """ Adds a file to the list. The path must exist and must be a file or a link to an existing file. It will be added to the list subject to any exclusions that are in place. @param path: File path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a file or does not exist. @raise ValueError: If the path could not be encoded properly. 
""" path = encodePath(path) if not os.path.exists(path) or not os.path.isfile(path): logger.debug("Path [%s] is not a file or does not exist on disk.", path) raise ValueError("Path is not a file or does not exist on disk.") if self.excludeLinks and os.path.islink(path): logger.debug("Path [%s] is excluded based on excludeLinks.", path) return 0 if self.excludeFiles: logger.debug("Path [%s] is excluded based on excludeFiles.", path) return 0 if path in self.excludePaths: logger.debug("Path [%s] is excluded based on excludePaths.", path) return 0 for pattern in self.excludePatterns: pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(path): # safe to assume all are valid due to RegexList logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) return 0 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) return 0 self.append(path) logger.debug("Added file to list: [%s]", path) return 1 def addDir(self, path): """ Adds a directory to the list. The path must exist and must be a directory or a link to an existing directory. It will be added to the list subject to any exclusions that are in place. The L{ignoreFile} does not apply to this method, only to L{addDirContents}. @param path: Directory path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. 
""" path = encodePath(path) path = normalizeDir(path) if not os.path.exists(path) or not os.path.isdir(path): logger.debug("Path [%s] is not a directory or does not exist on disk.", path) raise ValueError("Path is not a directory or does not exist on disk.") if self.excludeLinks and os.path.islink(path): logger.debug("Path [%s] is excluded based on excludeLinks.", path) return 0 if self.excludeDirs: logger.debug("Path [%s] is excluded based on excludeDirs.", path) return 0 if path in self.excludePaths: logger.debug("Path [%s] is excluded based on excludePaths.", path) return 0 for pattern in self.excludePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(path): logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) return 0 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) return 0 self.append(path) logger.debug("Added directory to list: [%s]", path) return 1 def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False): """ Adds the contents of a directory to the list. The path must exist and must be a directory or a link to a directory. The contents of the directory (as well as the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory and its immediate contents to be added, then pass in C{recursive=False}. @note: If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list. 
@note: If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links I{within} the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc. @note: Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored. @note: The L{excludeDirs} flag only controls whether any given directory path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. @note: If you call this method I{on a link to a directory}, that link will never be dereferenced (it may, however, be followed). @param path: Directory path whose contents should be added to the list @type path: String representing a path on disk @param recursive: Indicates whether directory contents should be added recursively. @type recursive: Boolean value @param addSelf: Indicates whether the directory itself should be added to the list. @type addSelf: Boolean value @param linkDepth: Maximum depth of the tree at which soft links should be followed @type linkDepth: Integer value, where zero means not to follow any soft links @param dereference: Indicates whether soft links, if followed, should be dereferenced @type dereference: Boolean value @return: Number of items recursively added to the list @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. """ path = encodePath(path) path = normalizeDir(path) return self._addDirContentsInternal(path, addSelf, recursive, linkDepth, dereference) def _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False): """ Internal implementation of C{addDirContents}. 
This internal implementation exists due to some refactoring. Basically, some subclasses have a need to add the contents of a directory, but not the directory itself. This is different from the standard C{FilesystemList} behavior and actually ends up making a special case out of the first call in the recursive chain. Since I don't want to expose the modified interface, C{addDirContents} ends up being wholly implemented in terms of this method. The linkDepth parameter controls whether soft links are followed when we are adding the contents recursively. Each recursive call reduces the value by one. If the value is zero or less, then soft links will just be added as directories, but will not be followed. This means that links are followed to a I{constant depth} starting from the top-most directory. There is one difference between soft links and directories: soft links that are added recursively are not placed into the list explicitly. This is because if we do add the links recursively, the resulting tar file gets a little confused (it has a link and a directory with the same name). @note: If you call this method I{on a link to a directory}, that link will never be dereferenced (it may, however, be followed). @param path: Directory path whose contents should be added to the list. @param includePath: Indicates whether to include the path as well as contents. @param recursive: Indicates whether directory contents should be added recursively. @param linkDepth: Depth of soft links that should be followed @param dereference: Indicates whether soft links, if followed, should be dereferenced @return: Number of items recursively added to the list @raise ValueError: If path is not a directory or does not exist. 
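The constant-depth bookkeeping described above can be modeled with a small stdlib-only sketch. It only models the depth accounting, not the filesystem walk itself; directories are nested dicts and the string C{"link"} marks a soft link, with all names hypothetical:

```python
def followedLinks(tree, linkDepth):
   """Returns the set of link names that would be followed at this depth.

   Each level of recursion reduces linkDepth by one; a link is followed
   only while the depth is still greater than zero, mirroring the way
   _addDirContentsInternal() decrements the value on each recursive call.
   """
   followed = set()
   for name, entry in tree.items():
      if entry == "link":
         if linkDepth > 0:
            followed.add(name)       # link followed at this level
      elif isinstance(entry, dict):
         followed |= followedLinks(entry, linkDepth - 1)
   return followed
```

With a depth of 1, only links directly inside the passed-in directory are followed; a depth of 2 also follows links one level down, and a depth of 0 follows none.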
""" added = 0 if not os.path.exists(path) or not os.path.isdir(path): logger.debug("Path [%s] is not a directory or does not exist on disk.", path) raise ValueError("Path is not a directory or does not exist on disk.") if path in self.excludePaths: logger.debug("Path [%s] is excluded based on excludePaths.", path) return added for pattern in self.excludePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(path): logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) return added for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList pattern = encodePath(pattern) # use same encoding as filenames if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) return added if self.ignoreFile is not None and os.path.exists(os.path.join(path, self.ignoreFile)): logger.debug("Path [%s] is excluded based on ignore file.", path) return added if includePath: added += self.addDir(path) # could actually be excluded by addDir, yet for entry in os.listdir(path): entrypath = os.path.join(path, entry) if os.path.isfile(entrypath): if linkDepth > 0 and dereference: derefpath = dereferenceLink(entrypath) if derefpath != entrypath: added += self.addFile(derefpath) added += self.addFile(entrypath) elif os.path.isdir(entrypath): if os.path.islink(entrypath): if recursive: if linkDepth > 0: newDepth = linkDepth - 1 if dereference: derefpath = dereferenceLink(entrypath) if derefpath != entrypath: added += self._addDirContentsInternal(derefpath, True, recursive, newDepth, dereference) added += self.addDir(entrypath) else: added += self._addDirContentsInternal(entrypath, False, recursive, newDepth, dereference) else: added += self.addDir(entrypath) else: added += self.addDir(entrypath) else: if recursive: newDepth = linkDepth - 1 added += 
self._addDirContentsInternal(entrypath, True, recursive, newDepth, dereference) else: added += self.addDir(entrypath) return added ################# # Remove methods ################# def removeFiles(self, pattern=None): """ Removes file entries from the list. If C{pattern} is not passed in or is C{None}, then all file entries will be removed from the list. Otherwise, only those file entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use L{removeInvalid} to purge those entries). This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all files, then you will be better off setting L{excludeFiles} to C{True} before adding items to the list. @param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed @raise ValueError: If the passed-in pattern is not a valid regular expression. """ removed = 0 if pattern is None: for entry in self[:]: if os.path.exists(entry) and os.path.isfile(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 else: try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") for entry in self[:]: if os.path.exists(entry) and os.path.isfile(entry): if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeDirs(self, pattern=None): """ Removes directory entries from the list. If C{pattern} is not passed in or is C{None}, then all directory entries will be removed from the list. Otherwise, only those directory entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use L{removeInvalid} to purge those entries). 
This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all directories, then you will be better off setting L{excludeDirs} to C{True} before adding items to the list (note that this will not prevent you from recursively adding the I{contents} of directories). @param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed @raise ValueError: If the passed-in pattern is not a valid regular expression. """ removed = 0 if pattern is None: for entry in self[:]: if os.path.exists(entry) and os.path.isdir(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 else: try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") for entry in self[:]: if os.path.exists(entry) and os.path.isdir(entry): if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeLinks(self, pattern=None): """ Removes soft link entries from the list. If C{pattern} is not passed in or is C{None}, then all soft link entries will be removed from the list. Otherwise, only those soft link entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use L{removeInvalid} to purge those entries). This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all soft links, then you will be better off setting L{excludeLinks} to C{True} before adding items to the list. 
@param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed @raise ValueError: If the passed-in pattern is not a valid regular expression. """ removed = 0 if pattern is None: for entry in self[:]: if os.path.exists(entry) and os.path.islink(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 else: try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") for entry in self[:]: if os.path.exists(entry) and os.path.islink(entry): if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeMatch(self, pattern): """ Removes from the list all entries matching a pattern. This method removes from the list all entries which match the passed in C{pattern}. Since there is no need to check the type of each entry, it is faster to call this method than to call the L{removeFiles}, L{removeDirs} or L{removeLinks} methods individually. If you know which patterns you will want to remove ahead of time, you may be better off setting L{excludePatterns} or L{excludeBasenamePatterns} before adding items to the list. @note: Unlike when using the exclude lists, the pattern here is I{not} bounded at the front and the back of the string. You can use any pattern you want. @param pattern: Regular expression pattern representing entries to remove @return: Number of entries removed. @raise ValueError: If the passed-in pattern is not a valid regular expression. 
""" try: pattern = encodePath(pattern) # use same encoding as filenames compiled = re.compile(pattern) except re.error: raise ValueError("Pattern is not a valid regular expression.") removed = 0 for entry in self[:]: if compiled.match(entry): self.remove(entry) logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed def removeInvalid(self): """ Removes from the list all entries that do not exist on disk. This method removes from the list all entries which do not currently exist on disk in some form. No attention is paid to whether the entries are files or directories. @return: Number of entries removed. """ removed = 0 for entry in self[:]: if not os.path.exists(entry): self.remove(entry) logger.debug("Removed path [%s] from list.", entry) removed += 1 logger.debug("Removed a total of %d entries.", removed) return removed ################## # Utility methods ################## def normalize(self): """Normalizes the list, ensuring that each entry is unique.""" orig = len(self) self.sort() dups = filter(lambda x, self=self: self[x] == self[x+1], range(0, len(self) - 1)) # pylint: disable=W0110 items = map(lambda x, self=self: self[x], dups) # pylint: disable=W0110 map(self.remove, items) new = len(self) logger.debug("Completed normalizing list; removed %d items (%d originally, %d now).", new-orig, orig, new) def verify(self): """ Verifies that all entries in the list exist on disk. @return: C{True} if all entries exist, C{False} otherwise. 
""" for entry in self: if not os.path.exists(entry): logger.debug("Path [%s] is invalid; list is not valid.", entry) return False logger.debug("All entries in list are valid.") return True ######################################################################## # SpanItem class definition ######################################################################## class SpanItem(object): # pylint: disable=R0903 """ Item returned by L{BackupFileList.generateSpan}. """ def __init__(self, fileList, size, capacity, utilization): """ Create object. @param fileList: List of files @param size: Size (in bytes) of files @param utilization: Utilization, as a percentage (0-100) """ self.fileList = fileList self.size = size self.capacity = capacity self.utilization = utilization ######################################################################## # BackupFileList class definition ######################################################################## class BackupFileList(FilesystemList): # pylint: disable=R0904 ###################### # Class documentation ###################### """ List of files to be backed up. A BackupFileList is a L{FilesystemList} containing a list of files to be backed up. It only contains files, not directories (soft links are treated like files). On top of the generic functionality provided by L{FilesystemList}, this class adds functionality to keep a hash (checksum) for each file in the list, and it also provides a method to calculate the total size of the files in the list and a way to export the list into tar form. @sort: __init__, addDir, totalSize, generateSizeMap, generateDigestMap, generateFitted, generateTarfile, removeUnchanged """ ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" FilesystemList.__init__(self) ################################ # Overridden superclass methods ################################ def addDir(self, path): """ Adds a directory to the list. 
Note that this class does not allow directories to be added by themselves (a backup list contains only files). However, since links to directories are technically files, we allow them to be added. This method is implemented in terms of the superclass method, with one additional validation: the superclass method is only called if the passed-in path is both a directory and a link. All of the superclass's existing validations and restrictions apply. @param path: Directory path to be added to the list @type path: String representing a path on disk @return: Number of items added to the list. @raise ValueError: If path is not a directory or does not exist. @raise ValueError: If the path could not be encoded properly. """ path = encodePath(path) path = normalizeDir(path) if os.path.isdir(path) and not os.path.islink(path): return 0 else: return FilesystemList.addDir(self, path) ################## # Utility methods ################## def totalSize(self): """ Returns the total size among all files in the list. Only files are counted. Soft links that point at files are ignored. Entries which do not exist on disk are ignored. @return: Total size, in bytes """ total = 0.0 for entry in self: if os.path.isfile(entry) and not os.path.islink(entry): total += float(os.stat(entry).st_size) return total def generateSizeMap(self): """ Generates a mapping from file to file size in bytes. The mapping does include soft links, which are listed with size zero. Entries which do not exist on disk are ignored. @return: Dictionary mapping file to file size """ table = { } for entry in self: if os.path.islink(entry): table[entry] = 0.0 elif os.path.isfile(entry): table[entry] = float(os.stat(entry).st_size) return table def generateDigestMap(self, stripPrefix=None): """ Generates a mapping from file to file digest. Currently, the digest is an SHA hash, which should be pretty secure. 
      In the future, this might be a different kind of hash, but we guarantee
      that the type of the hash will not change unless the library major
      version number is bumped.

      Entries which do not exist on disk are ignored.

      Soft links are ignored.  We would end up generating a digest for the
      file that the soft link points at, which doesn't make any sense.

      If C{stripPrefix} is passed in, then that prefix will be stripped from
      each key when the map is generated.  This can be useful in generating
      two "relative" digest maps to be compared to one another.

      @param stripPrefix: Common prefix to be stripped from paths
      @type stripPrefix: String with any contents

      @return: Dictionary mapping file to digest value
      @see: L{removeUnchanged}
      """
      table = { }
      if stripPrefix is not None:
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry.replace(stripPrefix, "", 1)] = BackupFileList._generateDigest(entry)
      else:
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry] = BackupFileList._generateDigest(entry)
      return table

   @staticmethod
   def _generateDigest(path):
      """
      Generates an SHA digest for a given file on disk.

      The original code for this function used this simplistic
      implementation, which requires reading the entire file into memory at
      once in order to generate a digest value::

         sha.new(open(path).read()).hexdigest()

      Not surprisingly, this isn't an optimal solution.  The U{Simple file
      hashing } Python Cookbook recipe describes how to incrementally
      generate a hash value by reading in chunks of data rather than reading
      the file all at once.  The recipe relies on the C{update()} method of
      the various Python hashing algorithms.

      In my tests using a 110 MB file on CD, the original implementation
      requires 111 seconds.  This implementation requires only 40-45 seconds,
      which is a pretty substantial speed-up.

      Experience shows that reading in around 4kB (4096 bytes) at a time
      yields the best performance.
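The chunked-read approach described here can be sketched as a standalone function, a simplified sibling of the C{_generateDigest()} method rather than the method itself:

```python
import hashlib
import os
import tempfile

def generate_digest(path, chunk_size=4096):
    """Incrementally compute a SHA-1 hex digest, reading chunk_size bytes at a time."""
    digest = hashlib.sha1()
    f = open(path, "rb")  # binary mode, in case the platform cares
    while True:
        chunk = f.read(chunk_size)
        if not chunk:  # an empty read means end of file
            break
        digest.update(chunk)
    f.close()
    return digest.hexdigest()

# demonstrate on a small throwaway file
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello")
tmp.close()
value = generate_digest(tmp.name)
os.unlink(tmp.name)
```

Because only one chunk is ever held in memory, peak memory use stays constant regardless of file size.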
      Smaller reads are quite a bit slower, and larger reads don't make much
      of a difference.  The 4kB number makes me a little suspicious, and I
      think it might be related to the size of a filesystem read at the
      hardware level.  However, I've decided to just hardcode 4096 until I
      have evidence that shows it's worthwhile making the read size
      configurable.

      @param path: Path to generate digest for.

      @return: ASCII-safe SHA digest for the file.
      @raise OSError: If the file cannot be opened.
      """
      # pylint: disable=C0103,E1101
      try:
         import hashlib
         s = hashlib.sha1()
      except ImportError:
         import sha
         s = sha.new()
      f = open(path, mode="rb")  # in case platform cares about binary reads
      readBytes = 4096  # see notes above
      while readBytes > 0:
         readString = f.read(readBytes)
         s.update(readString)
         readBytes = len(readString)
      f.close()
      digest = s.hexdigest()
      logger.debug("Generated digest [%s] for file [%s].", digest, path)
      return digest

   def generateFitted(self, capacity, algorithm="worst_fit"):
      """
      Generates a list of items that fit in the indicated capacity.

      Sometimes, callers would like to include every item in a list, but are
      unable to because not all of the items fit in the space available.
      This method returns a copy of the list, containing only the items that
      fit in a given capacity.  A copy is returned so that we don't lose any
      information if for some reason the fitted list is unsatisfactory.

      The fitting is done using the functions in the knapsack module.  By
      default, the worst fit algorithm is used, but you can also choose from
      first fit, best fit and alternate fit.

      @param capacity: Maximum capacity among the files in the new list
      @type capacity: Integer, in bytes

      @param algorithm: Knapsack (fit) algorithm to use
      @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit"

      @return: Copy of list with total size no larger than indicated capacity
      @raise ValueError: If the algorithm is invalid.
""" table = self._getKnapsackTable() function = BackupFileList._getKnapsackFunction(algorithm) return function(table, capacity)[0] def generateSpan(self, capacity, algorithm="worst_fit"): """ Splits the list of items into sub-lists that fit in a given capacity. Sometimes, callers need split to a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs. The fitting is done using the functions in the knapsack module. By default, the first fit algorithm is used, but you can also choose from best fit, worst fit and alternate fit. @note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a value error will be raised. @param capacity: Maximum capacity among the files in the new list @type capacity: Integer, in bytes @param algorithm: Knapsack (fit) algorithm to use @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit" @return: List of L{SpanItem} objects. @raise ValueError: If the algorithm is invalid. @raise ValueError: If it's not possible to fit some items """ spanItems = [] function = BackupFileList._getKnapsackFunction(algorithm) table = self._getKnapsackTable(capacity) iteration = 0 while len(table) > 0: iteration += 1 fit = function(table, capacity) if len(fit[0]) == 0: # Should never happen due to validations in _convertToKnapsackForm(), but let's be safe raise ValueError("After iteration %d, unable to add any new items." % iteration) removeKeys(table, fit[0]) utilization = (float(fit[1])/float(capacity))*100.0 item = SpanItem(fit[0], fit[1], capacity, utilization) spanItems.append(item) return spanItems def _getKnapsackTable(self, capacity=None): """ Converts the list into the form needed by the knapsack algorithms. @return: Dictionary mapping file name to tuple of (file path, file size). 
""" table = { } for entry in self: if os.path.islink(entry): table[entry] = (entry, 0.0) elif os.path.isfile(entry): size = float(os.stat(entry).st_size) if capacity is not None: if size > capacity: raise ValueError("File [%s] cannot fit in capacity %s." % (entry, displayBytes(capacity))) table[entry] = (entry, size) return table @staticmethod def _getKnapsackFunction(algorithm): """ Returns a reference to the function associated with an algorithm name. Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit" @param algorithm: Name of the algorithm @return: Reference to knapsack function @raise ValueError: If the algorithm name is unknown. """ if algorithm == "first_fit": return firstFit elif algorithm == "best_fit": return bestFit elif algorithm == "worst_fit": return worstFit elif algorithm == "alternate_fit": return alternateFit else: raise ValueError("Algorithm [%s] is invalid." % algorithm) def generateTarfile(self, path, mode='tar', ignore=False, flat=False): """ Creates a tar file containing the files in the list. By default, this method will create uncompressed tar files. If you pass in mode C{'targz'}, then it will create gzipped tar files, and if you pass in mode C{'tarbz2'}, then it will create bzipped tar files. The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality out-weighs the disadvantage of not being "standard". If you pass in C{flat=True}, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file C{/tmp/something/whatever.txt} would be added as just C{whatever.txt}. By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. 
      Under these circumstances, callers are advised that they might want to
      call L{removeInvalid()} and then attempt to generate the tar file a
      second time, since the most common cause of failures is a missing file
      (a file that existed when the list was built, but is gone again by the
      time the tar file is built).

      If you want to, you can pass in C{ignore=True}, and the method will
      ignore errors encountered when adding individual files to the archive
      (but not errors opening and closing the archive itself).

      We'll always attempt to remove the tarfile from disk if an exception
      will be thrown.

      @note: No validation is done as to whether the entries in the list are
      files, since only files or soft links should be in an object like
      this.  However, to be safe, everything is explicitly added to the tar
      archive non-recursively so it's safe to include soft links to
      directories.

      @note: The Python C{tarfile} module, which is used internally here, is
      supposed to deal properly with long filenames and links.  In my
      testing, I have found that it appears to be able to add really long
      filenames to archives, but doesn't do a good job reading them back
      out, even out of an archive it created.  Fortunately, all Cedar Backup
      does is add files to archives.

      @param path: Path of tar file to create on disk
      @type path: String representing a path on disk

      @param mode: Tar creation mode
      @type mode: One of either C{'tar'}, C{'targz'} or C{'tarbz2'}

      @param ignore: Indicates whether to ignore certain errors.
      @type ignore: Boolean

      @param flat: Creates "flat" archive by putting all items in root
      @type flat: Boolean

      @raise ValueError: If mode is not valid
      @raise ValueError: If list is empty
      @raise ValueError: If the path could not be encoded properly.
@raise TarError: If there is a problem creating the tar file """ # pylint: disable=E1101 path = encodePath(path) if len(self) == 0: raise ValueError("Empty list cannot be used to generate tarfile.") if mode == 'tar': tarmode = "w:" elif mode == 'targz': tarmode = "w:gz" elif mode == 'tarbz2': tarmode = "w:bz2" else: raise ValueError("Mode [%s] is not valid." % mode) try: tar = tarfile.open(path, tarmode) try: tar.format = tarfile.GNU_FORMAT except AttributeError: tar.posix = False for entry in self: try: if flat: tar.add(entry, arcname=os.path.basename(entry), recursive=False) else: tar.add(entry, recursive=False) except tarfile.TarError, e: if not ignore: raise e logger.info("Unable to add file [%s]; going on anyway.", entry) except OSError, e: if not ignore: raise tarfile.TarError(e) logger.info("Unable to add file [%s]; going on anyway.", entry) tar.close() except tarfile.ReadError, e: try: tar.close() except: pass if os.path.exists(path): try: os.remove(path) except: pass raise tarfile.ReadError("Unable to open [%s]; maybe directory doesn't exist?" % path) except tarfile.TarError, e: try: tar.close() except: pass if os.path.exists(path): try: os.remove(path) except: pass raise e def removeUnchanged(self, digestMap, captureDigest=False): """ Removes unchanged entries from the list. This method relies on a digest map as returned from L{generateDigestMap}. For each entry in C{digestMap}, if the entry also exists in the current list I{and} the entry in the current list has the same digest value as in the map, the entry in the current list will be removed. This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from C{generateDigestMap} at some point in time (perhaps the beginning of the week), and will save off that map using C{pickle} or some other method. Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map. 
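The save-then-compare workflow described above, capture a digest map, persist it with C{pickle}, and later use it to spot changed files, can be sketched as follows. The digest values and file names here are hypothetical placeholders; real maps come from C{generateDigestMap}:

```python
import os
import pickle
import tempfile

# Hypothetical digest map, as generateDigestMap() might return it.
saved_map = {"file1.txt": "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d",
             "file2.txt": "da39a3ee5e6b4b0d3255bfef95601890afd80709"}

# Save the map off (say, at the beginning of the week)...
tmp = tempfile.NamedTemporaryFile(delete=False)
pickle.dump(saved_map, tmp)
tmp.close()

# ...and load it back later to decide what has changed since then.
f = open(tmp.name, "rb")
restored = pickle.load(f)
f.close()
os.unlink(tmp.name)

current = {"file1.txt": "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d",
           "file2.txt": "0000000000000000000000000000000000000000"}
# Entries whose digests still match the saved map are unchanged and
# could be dropped from the list, exactly what removeUnchanged() does.
changed = [name for name in sorted(current) if restored.get(name) != current[name]]
```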
If C{captureDigest} is passed-in as C{True}, then digest information will be captured for the entire list before the removal step occurs using the same rules as in L{generateDigestMap}. The check will involve a lookup into the complete digest map. If C{captureDigest} is passed in as C{False}, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk. The return value varies depending on C{captureDigest}, as well. To preserve backwards compatibility, if C{captureDigest} is C{False}, then we'll just return a single value representing the number of entries removed. Otherwise, we'll return a tuple of C{(entries removed, digest map)}. The returned digest map will be in exactly the form returned by L{generateDigestMap}. @note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller. @param digestMap: Dictionary mapping file name to digest value. @type digestMap: Map as returned from L{generateDigestMap}. @param captureDigest: Indicates that digest information should be captured. 
      @type captureDigest: Boolean

      @return: Results as discussed above (format varies based on arguments)
      """
      if captureDigest:
         removed = 0
         table = {}
         captured = {}
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry] = BackupFileList._generateDigest(entry)
               captured[entry] = table[entry]
            else:
               table[entry] = None
         for entry in digestMap.keys():
            if table.has_key(entry):
               if table[entry] is not None:  # equivalent to file/link check in other case
                  digest = table[entry]
                  if digest == digestMap[entry]:
                     removed += 1
                     del table[entry]
                     logger.debug("Discarded unchanged file [%s].", entry)
         self[:] = table.keys()
         return (removed, captured)
      else:
         removed = 0
         table = {}
         for entry in self:
            table[entry] = None
         for entry in digestMap.keys():
            if table.has_key(entry):
               if os.path.isfile(entry) and not os.path.islink(entry):
                  digest = BackupFileList._generateDigest(entry)
                  if digest == digestMap[entry]:
                     removed += 1
                     del table[entry]
                     logger.debug("Discarded unchanged file [%s].", entry)
         self[:] = table.keys()
         return removed


########################################################################
# PurgeItemList class definition
########################################################################

class PurgeItemList(FilesystemList): # pylint: disable=R0904

   ######################
   # Class documentation
   ######################

   """
   List of files and directories to be purged.

   A PurgeItemList is a L{FilesystemList} containing a list of files and
   directories to be purged.  On top of the generic functionality provided
   by L{FilesystemList}, this class adds functionality to remove items that
   are too young to be purged, and to actually remove each item in the list
   from the filesystem.

   The other main difference is that when you add a directory's contents to
   a purge item list, the directory itself is not added to the list.  This
   way, if someone asks to purge within C{/opt/backup/collect}, that
   directory doesn't get removed once all of the files within it are gone.
""" ############## # Constructor ############## def __init__(self): """Initializes a list with no configured exclusions.""" FilesystemList.__init__(self) ############## # Add methods ############## def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False): """ Adds the contents of a directory to the list. The path must exist and must be a directory or a link to a directory. The contents of the directory (but I{not} the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory and its contents to be added, then pass in C{recursive=False}. @note: If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list. @note: If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links I{within} the directory will be recursed. The link depth is maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc. @note: Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored. @note: The L{excludeDirs} flag only controls whether any given soft link path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. @note: The L{excludeDirs} flag only controls whether any given directory path itself is added to the list once it has been discovered. It does I{not} modify any behavior related to directory recursion. @note: If you call this method I{on a link to a directory} that link will never be dereferenced (it may, however, be followed). 
      @param path: Directory path whose contents should be added to the list
      @type path: String representing a path on disk

      @param recursive: Indicates whether directory contents should be added recursively.
      @type recursive: Boolean value

      @param addSelf: Ignored in this subclass.

      @param linkDepth: Depth of soft links that should be followed
      @type linkDepth: Integer value, where zero means not to follow any soft links

      @param dereference: Indicates whether soft links, if followed, should be dereferenced
      @type dereference: Boolean value

      @return: Number of items recursively added to the list
      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      return super(PurgeItemList, self)._addDirContentsInternal(path, False, recursive, linkDepth, dereference)


   ##################
   # Utility methods
   ##################

   def removeYoungFiles(self, daysOld):
      """
      Removes from the list files younger than a certain age (in days).

      Any file whose "age" in days is less than (C{<}) the value of the
      C{daysOld} parameter will be removed from the list so that it will not
      be purged later when L{purgeItems} is called.  Directories and soft
      links will be ignored.

      The "age" of a file is the amount of time since the file was last
      used, per the most recent of the file's C{st_atime} and C{st_mtime}
      values.

      @note: Some people find the "sense" of this method confusing or
      "backwards".  Keep in mind that this method is used to remove items
      I{from the list}, not from the filesystem!  It removes from the list
      those items that you would I{not} want to purge because they are too
      young.  As an example, passing in C{daysOld} of zero (0) would remove
      no files from the list, which would result in purging all of the files
      later.  I would be happy to make a synonym of this method with an
      easier-to-understand "sense", if someone can suggest one.
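The age rule described above, whole days since the most recent of C{st_atime} and C{st_mtime}, can be sketched as a standalone function. The real C{calculateFileAge()} helper lives elsewhere in the package and may differ in detail, so treat this as an illustrative assumption:

```python
import math
import os
import tempfile
import time

def file_age_in_whole_days(path):
    """Whole days since the file was last used (most recent of atime and mtime)."""
    stats = os.stat(path)
    last_used = max(stats.st_atime, stats.st_mtime)
    age = int(math.floor((time.time() - last_used) / 86400.0))
    return max(age, 0)  # clamp, since clock skew can make the raw value negative

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
age = file_age_in_whole_days(tmp.name)  # just created, so zero whole days old
os.unlink(tmp.name)
```

With this definition, C{removeYoungFiles(1)} would drop this zero-day-old file from the list, keeping it safe from the purge.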
      @param daysOld: Minimum age of files that are to be kept in the list.
      @type daysOld: Integer value >= 0.

      @return: Number of entries removed
      """
      removed = 0
      daysOld = int(daysOld)
      if daysOld < 0:
         raise ValueError("Days old value must be an integer >= 0.")
      for entry in self[:]:
         if os.path.isfile(entry) and not os.path.islink(entry):
            try:
               ageInDays = calculateFileAge(entry)
               ageInWholeDays = math.floor(ageInDays)
               if ageInWholeDays < 0:
                  ageInWholeDays = 0
               if ageInWholeDays < daysOld:
                  removed += 1
                  self.remove(entry)
            except OSError:
               pass
      return removed

   def purgeItems(self):
      """
      Purges all items in the list.

      Every item in the list will be purged.  Directories in the list will
      I{not} be purged recursively, and hence will only be removed if they
      are empty.  Errors will be ignored.

      To facilitate easy removal of directories that will end up being
      empty, the delete process happens in two passes: files first
      (including soft links), then directories.

      @return: Tuple containing count of (files, dirs) removed
      """
      files = 0
      dirs = 0
      for entry in self:
         if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)):
            try:
               os.remove(entry)
               files += 1
               logger.debug("Purged file [%s].", entry)
            except OSError:
               pass
      for entry in self:
         if os.path.exists(entry) and os.path.isdir(entry) and not os.path.islink(entry):
            try:
               os.rmdir(entry)
               dirs += 1
               logger.debug("Purged empty directory [%s].", entry)
            except OSError:
               pass
      return (files, dirs)


########################################################################
# Public functions
########################################################################

##########################
# normalizeDir() function
##########################

def normalizeDir(path):
   """
   Normalizes a directory name.

   For our purposes, a directory name is normalized by removing the trailing
   path separator, if any.
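A small usage sketch shows the equivalence this normalization buys; C{normalize_dir} below mirrors the function's behavior in standalone form:

```python
import os

def normalize_dir(path):
    """Strip a trailing path separator so equivalent spellings compare equal."""
    if path != os.sep and path[-1:] == os.sep:
        return path[:-1]
    return path

sep = os.sep  # build paths portably rather than hardcoding "/"
with_slash = normalize_dir(sep + "path" + sep + "to" + sep + "dir" + sep)
without_slash = normalize_dir(sep + "path" + sep + "to" + sep + "dir")
root = normalize_dir(sep)  # the root directory itself is left alone
```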
   This is important because we want directories to appear within lists in a
   consistent way, although from the user's perspective passing in
   C{/path/to/dir/} and C{/path/to/dir} are equivalent.

   @param path: Path to be normalized.
   @type path: String representing a path on disk

   @return: Normalized path, which should be equivalent to the original.
   """
   if path != os.sep and path[-1:] == os.sep:
      return path[:-1]
   return path


#############################
# compareContents() function
#############################

def compareContents(path1, path2, verbose=False):
   """
   Compares the contents of two directories to see if they are equivalent.

   The two directories are recursively compared.  First, we check whether
   they contain exactly the same set of files.  Then, we check to see
   whether every given file has exactly the same contents in both
   directories.

   This is all relatively simple to implement through the magic of
   L{BackupFileList.generateDigestMap}, which knows how to strip a path
   prefix off the front of each entry in the mapping it generates.  This
   makes our comparison as simple as creating a list for each path, then
   generating a digest map for each path and comparing the two.

   If no exception is thrown, the two directories are considered identical.

   If the C{verbose} flag is C{True}, then an alternate (but slower) method
   is used so that any thrown exception can indicate exactly which file
   caused the comparison to fail.  The thrown C{ValueError} exception
   distinguishes between the directories containing different files, and
   containing the same files with differing content.

   @note: Symlinks are I{not} followed for the purposes of this comparison.

   @param path1: First path to compare.
   @type path1: String representing a path on disk

   @param path2: Second path to compare.
   @type path2: String representing a path on disk

   @param verbose: Indicates whether a verbose response should be given.
   @type verbose: Boolean

   @raise ValueError: If a directory doesn't exist or can't be read.
@raise ValueError: If the two directories are not equivalent. @raise IOError: If there is an unusual problem reading the directories. """ try: path1List = BackupFileList() path1List.addDirContents(path1) path1Digest = path1List.generateDigestMap(stripPrefix=normalizeDir(path1)) path2List = BackupFileList() path2List.addDirContents(path2) path2Digest = path2List.generateDigestMap(stripPrefix=normalizeDir(path2)) compareDigestMaps(path1Digest, path2Digest, verbose) except IOError, e: logger.error("I/O error encountered during consistency check.") raise e def compareDigestMaps(digest1, digest2, verbose=False): """ Compares two digest maps and throws an exception if they differ. @param digest1: First digest to compare. @type digest1: Digest as returned from BackupFileList.generateDigestMap() @param digest2: Second digest to compare. @type digest2: Digest as returned from BackupFileList.generateDigestMap() @param verbose: Indicates whether a verbose response should be given. @type verbose: Boolean @raise ValueError: If the two directories are not equivalent. """ if not verbose: if digest1 != digest2: raise ValueError("Consistency check failed.") else: list1 = UnorderedList(digest1.keys()) list2 = UnorderedList(digest2.keys()) if list1 != list2: raise ValueError("Directories contain a different set of files.") for key in list1: if digest1[key] != digest2[key]: raise ValueError("File contents for [%s] vary between directories." % key) CedarBackup2-2.26.5/CedarBackup2/image.py0000664000175000017500000000247512560016766021461 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides interface backwards compatibility. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides interface backwards compatibility. In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## from CedarBackup2.writers.util import IsoImage # pylint: disable=W0611 CedarBackup2-2.26.5/CedarBackup2/testutil.py0000664000175000017500000004363412560016766022256 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2006,2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides unit-testing utilities. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides unit-testing utilities. These utilities are kept here, separate from util.py, because they provide common functionality that I do not want exported "publicly" once Cedar Backup is installed on a system. They are only used for unit testing, and are only useful within the source tree. Many of these functions are in here because they are "good enough" for unit test work but are not robust enough to be real public functions. Others (like L{removedir}) do what they are supposed to, but I don't want responsibility for making them available to others. @sort: findResources, commandAvailable, buildPath, removedir, extractTar, changeFileAge, getMaskAsMode, getLogin, failUnlessAssignRaises, runningAsRoot, platformDebian, platformMacOsX, platformCygwin, platformWindows, platformHasEcho, platformSupportsLinks, platformSupportsPermissions, platformRequiresBinaryRead @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import os import tarfile import time import getpass import random import string # pylint: disable=W0402 import platform import logging from StringIO import StringIO from CedarBackup2.util import encodePath, executeCommand from CedarBackup2.config import Config, OptionsConfig from CedarBackup2.customize import customizeOverrides from CedarBackup2.cli import setupPathResolver ######################################################################## # Public functions ######################################################################## ############################## # setupDebugLogger() function ############################## def setupDebugLogger(): """ Sets up a screen logger for debugging purposes. Normally, the CLI functionality configures the logger so that things get written to the right place. However, for debugging it's sometimes nice to just get everything -- debug information and output -- dumped to the screen. This function takes care of that. """ logger = logging.getLogger("CedarBackup2") logger.setLevel(logging.DEBUG) # let the logger see all messages formatter = logging.Formatter(fmt="%(message)s") handler = logging.StreamHandler(stream=sys.stdout) handler.setFormatter(formatter) handler.setLevel(logging.DEBUG) logger.addHandler(handler) ################# # setupOverrides ################# def setupOverrides(): """ Set up any platform-specific overrides that might be required. When packages are built, this is done manually (hardcoded) in customize.py and the overrides are set up in cli.cli(). This way, no runtime checks need to be done. This is safe, because the package maintainer knows exactly which platform (Debian or not) the package is being built for. Unit tests are different, because they might be run anywhere. 
So, we attempt to make a guess about platform using platformDebian(), and use that to set up the custom overrides so that platform-specific unit tests continue to work. """ config = Config() config.options = OptionsConfig() if platformDebian(): customizeOverrides(config, platform="debian") else: customizeOverrides(config, platform="standard") setupPathResolver(config) ########################### # findResources() function ########################### def findResources(resources, dataDirs): """ Returns a dictionary of locations for various resources. @param resources: List of required resources. @param dataDirs: List of data directories to search within for resources. @return: Dictionary mapping resource name to resource path. @raise Exception: If some resource cannot be found. """ mapping = { } for resource in resources: for resourceDir in dataDirs: path = os.path.join(resourceDir, resource) if os.path.exists(path): mapping[resource] = path break else: raise Exception("Unable to find resource [%s]." % resource) return mapping ############################## # commandAvailable() function ############################## def commandAvailable(command): """ Indicates whether a command is available on $PATH somewhere. This should work on both Windows and UNIX platforms. @param command: Command to search for @return: Boolean true/false depending on whether command is available. """ if os.environ.has_key("PATH"): for path in os.environ["PATH"].split(os.pathsep): # os.pathsep, not os.sep: $PATH entries are separated by ":" or ";" if os.path.exists(os.path.join(path, command)): return True return False ####################### # buildPath() function ####################### def buildPath(components): """ Builds a complete path from a list of components. For instance, constructs C{"/a/b/c"} from C{["/a", "b", "c",]}. @param components: List of components. @returns: String path constructed from components. @raise ValueError: If a path cannot be encoded properly.
""" path = components[0] for component in components[1:]: path = os.path.join(path, component) return encodePath(path) ####################### # removedir() function ####################### def removedir(tree): """ Recursively removes an entire directory. This is basically taken from an example on python.com. @param tree: Directory tree to remove. @raise ValueError: If a path cannot be encoded properly. """ tree = encodePath(tree) for root, dirs, files in os.walk(tree, topdown=False): for name in files: path = os.path.join(root, name) if os.path.islink(path): os.remove(path) elif os.path.isfile(path): os.remove(path) for name in dirs: path = os.path.join(root, name) if os.path.islink(path): os.remove(path) elif os.path.isdir(path): os.rmdir(path) os.rmdir(tree) ######################## # extractTar() function ######################## def extractTar(tmpdir, filepath): """ Extracts the indicated tar file to the indicated tmpdir. @param tmpdir: Temp directory to extract to. @param filepath: Path to tarfile to extract. @raise ValueError: If a path cannot be encoded properly. """ # pylint: disable=E1101 tmpdir = encodePath(tmpdir) filepath = encodePath(filepath) tar = tarfile.open(filepath) try: tar.format = tarfile.GNU_FORMAT except AttributeError: tar.posix = False for tarinfo in tar: tar.extract(tarinfo, tmpdir) ########################### # changeFileAge() function ########################### def changeFileAge(filename, subtract=None): """ Changes a file age using the C{os.utime} function. @note: Some platforms don't seem to be able to set an age precisely. As a result, whereas we might have intended to set an age of 86400 seconds, we actually get an age of 86399.375 seconds. When util.calculateFileAge() looks at that the file, it calculates an age of 0.999992766204 days, which then gets truncated down to zero whole days. The tests get very confused. To work around this, I always subtract off one additional second as a fudge factor. 
That way, the file age will be I{at least} as old as requested later on. @param filename: File to operate on. @param subtract: Number of seconds to subtract from the current time. @raise ValueError: If a path cannot be encoded properly. """ filename = encodePath(filename) newTime = time.time() - 1 if subtract is not None: newTime -= subtract os.utime(filename, (newTime, newTime)) ########################### # getMaskAsMode() function ########################### def getMaskAsMode(): """ Returns the user's current umask inverted to a mode. A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775. @return: Umask converted to a mode, as an integer. """ umask = os.umask(0777) os.umask(umask) return int(~umask & 0777) # invert, then use only lower bytes ###################### # getLogin() function ###################### def getLogin(): """ Returns the name of the currently-logged in user. This might fail under some circumstances - but if it does, our tests would fail anyway. """ return getpass.getuser() ############################ # randomFilename() function ############################ def randomFilename(length, prefix=None, suffix=None): """ Generates a random filename with the given length. @param length: Length of filename. @param prefix: Prefix to prepend to the generated name, if any. @param suffix: Suffix to append to the generated name, if any. @return: Random filename. """ characters = [None] * length for i in xrange(length): characters[i] = random.choice(string.ascii_uppercase) if prefix is None: prefix = "" if suffix is None: suffix = "" return "%s%s%s" % (prefix, "".join(characters), suffix) #################################### # failUnlessAssignRaises() function #################################### def failUnlessAssignRaises(testCase, exception, obj, prop, value): """ Equivalent of C{failUnlessRaises}, but used for property assignments instead. It's nice to be able to use C{failUnlessRaises} to check that a method call raises the exception that you expect.
Unfortunately, this method can't be used to check Python property assignments, even though these property assignments are actually implemented underneath as methods. This function (which can be easily called by unit test classes) provides an easy way to wrap the assignment checks. It's not pretty, or as intuitive as the original check it's modeled on, but it does work. Let's assume you make this method call:: testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath) If you do this, a test case failure will be raised unless the assignment:: collectDir.absolutePath = absolutePath fails with a C{ValueError} exception. The failure message differentiates between the case where no exception was raised and the case where the wrong exception was raised. @note: Internally, the C{missed} and C{instead} variables are used rather than directly calling C{testCase.fail} upon noticing a problem because the act of "failure" itself generates an exception that would be caught by the general C{except} clause. @param testCase: PyUnit test case object (i.e. self). @param exception: Exception that is expected to be raised. @param obj: Object whose property is to be assigned to. @param prop: Name of the property, as a string. @param value: Value that is to be assigned to the property. @see: C{unittest.TestCase.failUnlessRaises} """ missed = False instead = None try: exec "obj.%s = value" % prop # pylint: disable=W0122 missed = True except exception: pass except Exception, e: instead = e if missed: testCase.fail("Expected assignment to raise %s, but got no exception." % (exception.__name__)) if instead is not None: testCase.fail("Expected assignment to raise %s, but got %s instead." % (exception.__name__, instead.__class__.__name__)) ########################### # captureOutput() function ########################### def captureOutput(c): """ Captures the output (stdout, stderr) of a function or a method.
Some of our functions don't do anything other than just print output. We need a way to test these functions (at least nominally) but we don't want any of the output spoiling the test suite output. This function just creates a dummy file descriptor that can be used as a target by the callable function, rather than C{stdout} or C{stderr}. @note: This method assumes that C{callable} doesn't take any arguments besides keyword argument C{fd} to specify the file descriptor. @param c: Callable function or method. @return: Output of function, as one big string. """ fd = StringIO() c(fd=fd) result = fd.getvalue() fd.close() return result ######################### # _isPlatform() function ######################### def _isPlatform(name): """ Returns boolean indicating whether we're running on the indicated platform. @param name: Platform name to check, currently one of "windows" or "macosx" """ if name == "windows": return platform.platform(True, True).startswith("Windows") elif name == "macosx": return sys.platform == "darwin" elif name == "debian": return platform.platform(False, False).find("debian") > 0 elif name == "cygwin": return platform.platform(True, True).startswith("CYGWIN") else: raise ValueError("Unknown platform [%s]." % name) ############################ # platformDebian() function ############################ def platformDebian(): """ Returns boolean indicating whether this is the Debian platform. """ return _isPlatform("debian") ############################ # platformMacOsX() function ############################ def platformMacOsX(): """ Returns boolean indicating whether this is the Mac OS X platform. """ return _isPlatform("macosx") ############################# # platformWindows() function ############################# def platformWindows(): """ Returns boolean indicating whether this is the Windows platform. 
""" return _isPlatform("windows") ############################ # platformCygwin() function ############################ def platformCygwin(): """ Returns boolean indicating whether this is the Cygwin platform. """ return _isPlatform("cygwin") ################################### # platformSupportsLinks() function ################################### def platformSupportsLinks(): """ Returns boolean indicating whether the platform supports soft-links. Some platforms, like Windows, do not support links, and tests need to take this into account. """ return not platformWindows() ######################################### # platformSupportsPermissions() function ######################################### def platformSupportsPermissions(): """ Returns boolean indicating whether the platform supports UNIX-style file permissions. Some platforms, like Windows, do not support permissions, and tests need to take this into account. """ return not platformWindows() ######################################## # platformRequiresBinaryRead() function ######################################## def platformRequiresBinaryRead(): """ Returns boolean indicating whether the platform requires binary reads. Some platforms, like Windows, require a special flag to read binary data from files. """ return platformWindows() ############################# # platformHasEcho() function ############################# def platformHasEcho(): """ Returns boolean indicating whether the platform has a sensible echo command. On some platforms, like Windows, echo doesn't really work for tests. """ return not platformWindows() ########################### # runningAsRoot() function ########################### def runningAsRoot(): """ Returns boolean indicating whether the effective user id is root. This is always true on platforms that have no concept of root, like Windows. 
""" if platformWindows(): return True else: return os.geteuid() == 0 ############################## # availableLocales() function ############################## def availableLocales(): """ Returns a list of available locales on the system @return: List of string locale names """ locales = [] output = executeCommand(["locale"], [ "-a", ], returnOutput=True, ignoreStderr=True)[1] for line in output: locales.append(line.rstrip()) return locales #################################### # hexFloatLiteralAllowed() function #################################### def hexFloatLiteralAllowed(): """ Indicates whether hex float literals are allowed by the interpreter. As far back as 2004, some Python documentation indicated that octal and hex notation applied only to integer literals. However, prior to Python 2.5, it was legal to construct a float with an argument like 0xAC on some platforms. This check provides a an indication of whether the current interpreter supports that behavior. This check exists so that unit tests can continue to test the same thing as always for pre-2.5 interpreters (i.e. making sure backwards compatibility doesn't break) while still continuing to work for later interpreters. The returned value is True if hex float literals are allowed, False otherwise. """ if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5] and not platformWindows(): return True return False CedarBackup2-2.26.5/CedarBackup2/tools/0002775000175000017500000000000012642035650021151 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/CedarBackup2/tools/span.py0000775000175000017500000006143212562435264022501 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010 Kenneth J. 
Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Spans staged data among multiple discs # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Spans staged data among multiple discs This is the Cedar Backup span tool. It is intended for use by people who stage more data than can fit on a single disc. It allows a user to split staged data among more than one disc. It can't be an extension because it requires user input when switching media. Most configuration is taken from the Cedar Backup configuration file, specifically the store section. A few pieces of configuration are taken directly from the user. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## # System modules import sys import os import logging import tempfile # Cedar Backup modules from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT from CedarBackup2.util import displayBytes, convertSize, mount, unmount from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES from CedarBackup2.config import Config from CedarBackup2.filesystem import BackupFileList, compareDigestMaps, normalizeDir from CedarBackup2.cli import Options, setupLogging, setupPathResolver from CedarBackup2.cli import DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE from CedarBackup2.actions.constants import STORE_INDICATOR from CedarBackup2.actions.util import createWriter from CedarBackup2.actions.store import writeIndicatorFile from CedarBackup2.actions.util import findDailyDirs from CedarBackup2.util import Diagnostics ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.tools.span") ####################################################################### # SpanOptions class ####################################################################### class SpanOptions(Options): """ Tool-specific command-line options. Most of the cback command-line options are exactly what we need here -- logfile path, permissions, verbosity, etc. However, we need to make a few tweaks since we don't accept any actions. Also, a few extra command line options that we accept are really ignored underneath. I just don't care about that for a tool like this. """ def validate(self): """ Validates command-line options represented by the object. There are no validations here, because we don't use any actions. 
@raise ValueError: If one of the validations fails. """ pass ####################################################################### # Public functions ####################################################################### ################# # cli() function ################# def cli(): """ Implements the command-line interface for the C{cback-span} script. Essentially, this is the "main routine" for the cback-span script. It does all of the argument processing for the script, and then also implements the tool functionality. This function looks pretty similar to C{CedarBackup2.cli.cli()}. It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication. A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 2.7 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{4}: Error parsing indicated configuration file - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing other parts of the script @note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively. @return: Error code as described above.
""" try: if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]: sys.stderr.write("Python 2 version 2.7 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python 2 version 2.7 or greater required.\n") return 1 try: options = SpanOptions(argumentList=sys.argv[1:]) except Exception, e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 if options.diagnostics: _diagnostics() return 0 if options.stacktrace: logfile = setupLogging(options) else: try: logfile = setupLogging(options) except Exception as e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup 'span' utility run started.") logger.info("Options were [%s]", options) logger.info("Logfile is [%s]", logfile) if options.config is None: logger.debug("Using default configuration file.") configPath = DEFAULT_CONFIG else: logger.debug("Using user-supplied configuration file.") configPath = options.config try: logger.info("Configuration path is [%s]", configPath) config = Config(xmlPath=configPath) setupPathResolver(config) except Exception, e: logger.error("Error reading or handling configuration: %s", e) logger.info("Cedar Backup 'span' utility run completed with status 4.") return 4 if options.stacktrace: _executeAction(options, config) else: try: _executeAction(options, config) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup 'span' utility run completed with status 5.") return 5 except Exception, e: logger.error("Error executing backup: %s", e) logger.info("Cedar Backup 'span' utility run completed with status 6.") return 6 logger.info("Cedar Backup 'span' utility run completed with status 0.") return 0 ####################################################################### # Utility functions ####################################################################### #################### # 
_usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback-span [switches]\n") fd.write("\n") fd.write(" Cedar Backup 'span' tool.\n") fd.write("\n") fd.write(" This Cedar Backup utility spans staged data between multiple discs.\n") fd.write(" It is a utility, not an extension, and requires user interaction.\n") fd.write("\n") fd.write(" The following switches are accepted, mostly to set up underlying\n") fd.write(" Cedar Backup functionality:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -c, --config Path to config file (default: %s)\n" % DEFAULT_CONFIG) fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. tar) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump a Python stack trace instead of swallowing exceptions\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. 
""" fd.write("\n") fd.write(" Cedar Backup 'span' tool.\n") fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. See the\n") fd.write(" GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write(" Use the --help option for usage information.\n") fd.write("\n") ########################## # _diagnostics() function ########################## def _diagnostics(fd=sys.stdout): """ Prints runtime diagnostics information. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write("Diagnostics:\n") fd.write("\n") Diagnostics().printDiagnostics(fd=fd, prefix=" ") fd.write("\n") ############################ # _executeAction() function ############################ def _executeAction(options, config): """ Implements the guts of the cback-span tool. @param options: Program command-line options. @type options: SpanOptions object. @param config: Program configuration. @type config: Config object. @raise Exception: Under many generic error conditions """ print "" print "================================================" print " Cedar Backup 'span' tool" print "================================================" print "" print "This is the Cedar Backup span tool. It is used to split up staging" print "data when that staging data does not fit onto a single disc." print "" print "This utility operates using Cedar Backup configuration. Configuration" print "specifies which staging directory to look at and which writer device" print "and media type to use."
print "" if not _getYesNoAnswer("Continue?", default="Y"): return print "===" print "" print "Cedar Backup store configuration looks like this:" print "" print " Source Directory...: %s" % config.store.sourceDir print " Media Type.........: %s" % config.store.mediaType print " Device Type........: %s" % config.store.deviceType print " Device Path........: %s" % config.store.devicePath print " Device SCSI ID.....: %s" % config.store.deviceScsiId print " Drive Speed........: %s" % config.store.driveSpeed print " Check Data Flag....: %s" % config.store.checkData print " No Eject Flag......: %s" % config.store.noEject print "" if not _getYesNoAnswer("Is this OK?", default="Y"): return print "===" (writer, mediaCapacity) = _getWriter(config) print "" print "Please wait, indexing the source directory (this may take a while)..." (dailyDirs, fileList) = _findDailyDirs(config.store.sourceDir) print "===" print "" print "The following daily staging directories have not yet been written to disc:" print "" for dailyDir in dailyDirs: print " %s" % dailyDir totalSize = fileList.totalSize() print "" print "The total size of the data in these directories is %s." % displayBytes(totalSize) print "" if not _getYesNoAnswer("Continue?", default="Y"): return print "===" print "" print "Based on configuration, the capacity of your media is %s." % displayBytes(mediaCapacity) print "" print "Since estimates are not perfect and there is some uncertainty in" print "media capacity calculations, it is good to have a \"cushion\"," print "a percentage of capacity to set aside. The cushion reduces the" print "capacity of your media, so a 1.5% cushion leaves 98.5% remaining." print "" cushion = _getFloat("What cushion percentage?", default=4.5) print "===" realCapacity = ((100.0 - cushion)/100.0) * mediaCapacity minimumDiscs = (totalSize/realCapacity) + 1 print "" print "The real capacity, taking into account the %.2f%% cushion, is %s."
% (cushion, displayBytes(realCapacity)) print "It will take at least %d disc(s) to store your %s of data." % (minimumDiscs, displayBytes(totalSize)) print "" if not _getYesNoAnswer("Continue?", default="Y"): return print "===" happy = False while not happy: print "" print "Which algorithm do you want to use to span your data across" print "multiple discs?" print "" print "The following algorithms are available:" print "" print " first....: The \"first-fit\" algorithm" print " best.....: The \"best-fit\" algorithm" print " worst....: The \"worst-fit\" algorithm" print " alternate: The \"alternate-fit\" algorithm" print "" print "If you don't like the results you will have a chance to try a" print "different one later." print "" algorithm = _getChoiceAnswer("Which algorithm?", "worst", [ "first", "best", "worst", "alternate", ]) print "===" print "" print "Please wait, generating file lists (this may take a while)..." spanSet = fileList.generateSpan(capacity=realCapacity, algorithm="%s_fit" % algorithm) print "===" print "" print "Using the \"%s-fit\" algorithm, Cedar Backup can split your data" % algorithm print "into %d discs." % len(spanSet) print "" counter = 0 for item in spanSet: counter += 1 print "Disc %d: %d files, %s, %.2f%% utilization" % (counter, len(item.fileList), displayBytes(item.size), item.utilization) print "" if _getYesNoAnswer("Accept this solution?", default="Y"): happy = True print "===" counter = 0 for spanItem in spanSet: counter += 1 if counter == 1: print "" _getReturn("Please place the first disc in your backup device.\nPress return when ready.") print "===" else: print "" _getReturn("Please replace the disc in your backup device.\nPress return when ready.") print "===" _writeDisc(config, writer, spanItem) _writeStoreIndicator(config, dailyDirs) print "" print "Completed writing all discs." 
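The cushion arithmetic in _executeAction() above can be sketched in isolation. This is an illustrative snippet, not part of the tool: the function name estimate_discs and the sample numbers are made up, but the formula mirrors the realCapacity / minimumDiscs computation in the source.

```python
def estimate_discs(total_size, media_capacity, cushion_percent):
    """Estimate how many discs a backup needs, given a capacity cushion.

    Mirrors the arithmetic in cback-span's _executeAction(): the cushion
    percentage is set aside (so a 4.5% cushion leaves 95.5% usable), and
    the estimate is floor(total / usable) + 1 discs.
    """
    # Reduce the nominal media capacity by the cushion percentage.
    real_capacity = ((100.0 - cushion_percent) / 100.0) * media_capacity
    # Truncate the ratio and add one, matching (totalSize/realCapacity) + 1
    # formatted with %d in the tool.
    return int(total_size / real_capacity) + 1

# With a 650 MB disc, 1200 MB of staged data, and the default 4.5% cushion,
# usable capacity is 620.75 MB, so at least 2 discs are needed.
```

Note that floor-plus-one deliberately over-estimates when the data would fit exactly, which is consistent with the tool's "It will take at least %d disc(s)" wording.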
############################ # _findDailyDirs() function ############################ def _findDailyDirs(stagingDir): """ Returns a list of all daily staging directories that have not yet been stored. The store indicator file C{cback.store} will be written to a daily staging directory once that directory is written to disc. So, this function looks at each daily staging directory within the configured staging directory, and returns a list of those which do not contain the indicator file. Returned is a tuple containing two items: a list of daily staging directories, and a BackupFileList containing all files among those staging directories. @param stagingDir: Configured staging directory @return: Tuple (staging dirs, backup file list) """ results = findDailyDirs(stagingDir, STORE_INDICATOR) fileList = BackupFileList() for item in results: fileList.addDirContents(item) return (results, fileList) ################################## # _writeStoreIndicator() function ################################## def _writeStoreIndicator(config, dailyDirs): """ Writes a store indicator file into daily directories. @param config: Config object. @param dailyDirs: List of daily directories """ for dailyDir in dailyDirs: writeIndicatorFile(dailyDir, STORE_INDICATOR, config.options.backupUser, config.options.backupGroup) ######################## # _getWriter() function ######################## def _getWriter(config): """ Gets a writer and media capacity from store configuration. Returned is a writer and a media capacity in bytes. @param config: Cedar Backup configuration @return: Tuple of (writer, mediaCapacity) """ writer = createWriter(config) mediaCapacity = convertSize(writer.media.capacity, UNIT_SECTORS, UNIT_BYTES) return (writer, mediaCapacity) ######################## # _writeDisc() function ######################## def _writeDisc(config, writer, spanItem): """ Writes a span item to disc. 
@param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ print "" _discInitializeImage(config, writer, spanItem) _discWriteImage(config, writer) _discConsistencyCheck(config, writer, spanItem) print "Write process is complete." print "===" def _discInitializeImage(config, writer, spanItem): """ Initialize an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ complete = False while not complete: try: print "Initializing image..." writer.initializeImage(newDisc=True, tmpdir=config.options.workingDir) for path in spanItem.fileList: graftPoint = os.path.dirname(path.replace(config.store.sourceDir, "", 1)) writer.addImageEntry(path, graftPoint) complete = True except KeyboardInterrupt, e: raise e except Exception, e: logger.error("Failed to initialize image: %s", e) if not _getYesNoAnswer("Retry initialization step?", default="Y"): raise e print "Ok, attempting retry." print "===" print "Completed initializing image." def _discWriteImage(config, writer): """ Writes a ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use """ complete = False while not complete: try: print "Writing image to disc..." writer.writeImage() complete = True except KeyboardInterrupt, e: raise e except Exception, e: logger.error("Failed to write image: %s", e) if not _getYesNoAnswer("Retry this step?", default="Y"): raise e print "Ok, attempting retry." _getReturn("Please replace media if needed.\nPress return when ready.") print "===" print "Completed writing image." def _discConsistencyCheck(config, writer, spanItem): """ Run a consistency check on an ISO image for a span item. @param config: Cedar Backup configuration @param writer: Writer to use @param spanItem: Span item to write """ if config.store.checkData: complete = False while not complete: try: print "Running consistency check..." 
_consistencyCheck(config, spanItem.fileList) complete = True except KeyboardInterrupt, e: raise e except Exception, e: logger.error("Consistency check failed: %s", e) if not _getYesNoAnswer("Retry the consistency check?", default="Y"): raise e if _getYesNoAnswer("Rewrite the disc first?", default="N"): print "Ok, attempting retry." _getReturn("Please replace the disc in your backup device.\nPress return when ready.") print "===" _discWriteImage(config, writer) else: print "Ok, attempting retry." print "===" print "Completed consistency check." ############################### # _consistencyCheck() function ############################### def _consistencyCheck(config, fileList): """ Runs a consistency check against media in the backup device. The function mounts the device at a temporary mount point in the working directory, and then compares the passed-in file list's digest map with the one generated from the disc. The two lists should be identical. If no exceptions are thrown, there were no problems with the consistency check. @warning: The implementation of this function is very UNIX-specific. @param config: Config object. @param fileList: BackupFileList whose contents to check against @raise ValueError: If the check fails @raise IOError: If there is a problem working with the media. """ logger.debug("Running consistency check.") mountPoint = tempfile.mkdtemp(dir=config.options.workingDir) try: mount(config.store.devicePath, mountPoint, "iso9660") discList = BackupFileList() discList.addDirContents(mountPoint) sourceList = BackupFileList() sourceList.extend(fileList) discListDigest = discList.generateDigestMap(stripPrefix=normalizeDir(mountPoint)) sourceListDigest = sourceList.generateDigestMap(stripPrefix=normalizeDir(config.store.sourceDir)) compareDigestMaps(sourceListDigest, discListDigest, verbose=True) logger.info("Consistency check completed. 
No problems found.") finally: unmount(mountPoint, True, 5, 1) # try 5 times, and remove mount point when done ######################################################################### # User interface utilities ######################################################################## def _getYesNoAnswer(prompt, default): """ Get a yes/no answer from the user. The default will be placed at the end of the prompt. A "Y" or "y" is considered yes, anything else no. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is blank @return: Boolean true/false corresponding to Y/N """ if default == "Y": prompt = "%s [Y/n]: " % prompt else: prompt = "%s [y/N]: " % prompt answer = raw_input(prompt) if answer in [ None, "", ]: answer = default if answer[0] in [ "Y", "y", ]: return True else: return False def _getChoiceAnswer(prompt, default, validChoices): """ Get a particular choice from the user. The default will be placed at the end of the prompt. The function loops until getting a valid choice. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is None or blank. @param validChoices: List of valid choices (strings) @return: Valid choice from user. """ prompt = "%s [%s]: " % (prompt, default) answer = raw_input(prompt) if answer in [ None, "", ]: answer = default while answer not in validChoices: print "Choice must be one of %s" % validChoices answer = raw_input(prompt) return answer def _getFloat(prompt, default): """ Get a floating point number from the user. The default will be placed at the end of the prompt. The function loops until getting a valid floating point number. A blank (empty) response results in the default. @param prompt: Prompt to show. @param default: Default to set if the result is None or blank. 
@return: Floating point number from user """ prompt = "%s [%.2f]: " % (prompt, default) while True: answer = raw_input(prompt) if answer in [ None, "" ]: return default else: try: return float(answer) except ValueError: print "Enter a floating point number." def _getReturn(prompt): """ Get a return key from the user. @param prompt: Prompt to show. """ raw_input(prompt) ######################################################################### # Main routine ######################################################################## if __name__ == "__main__": sys.exit(cli()) CedarBackup2-2.26.5/CedarBackup2/tools/__init__.py0000664000175000017500000000333712560016766023274 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Official Cedar Backup Tools # Purpose : Provides package initialization # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Official Cedar Backup Tools This package provides official Cedar Backup tools. Tools are things that feel a little like extensions, but don't fit the normal mold of extensions. For instance, they might not be intended to run from cron, or might need to interact dynamically with the user (i.e. accept user input). Tools are usually scripts that are run directly from the command line, just like the main C{cback} script. Like the C{cback} script, the majority of a tool is implemented in a .py module, and then the script just invokes the module's C{cli()} function. 
The actual scripts for tools are distributed in the util/ directory. @author: Kenneth J. Pronovici """ ######################################################################## # Package initialization ######################################################################## # Using 'from CedarBackup2.tools import *' will just import the modules listed # in the __all__ variable. __all__ = [ 'span', 'amazons3', ] CedarBackup2-2.26.5/CedarBackup2/tools/amazons3.py0000775000175000017500000012452312642021101023250 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2014 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Cedar Backup tool to synchronize an Amazon S3 bucket. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Synchronizes a local directory with an Amazon S3 bucket.
No configuration is required; all necessary information is taken from the command-line. The only thing configuration would help with is the path resolver interface, and it doesn't seem worth it to require configuration just to get that. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## # System modules import sys import os import logging import getopt import json import warnings import chardet # Cedar Backup modules from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT from CedarBackup2.filesystem import FilesystemList from CedarBackup2.cli import setupLogging, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE from CedarBackup2.util import Diagnostics, splitCommandLine, encodePath from CedarBackup2.util import executeCommand ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.tools.amazons3") AWS_COMMAND = [ "aws" ] SHORT_SWITCHES = "hVbql:o:m:OdsDvw" LONG_SWITCHES = [ 'help', 'version', 'verbose', 'quiet', 'logfile=', 'owner=', 'mode=', 'output', 'debug', 'stack', 'diagnostics', 'verifyOnly', 'ignoreWarnings', ] ####################################################################### # Options class ####################################################################### class Options(object): ###################### # Class documentation ###################### """ Class representing command-line options for the cback-amazons3-sync script. The C{Options} class is a Python object representation of the command-line options of the cback script. 
The object representation is two-way: a command line string or a list of command line arguments can be used to create an C{Options} object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An C{Options} object can even be created from scratch programmatically (if you have a need for that). There are two main levels of validation in the C{Options} class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to fields if you are programmatically filling an object. The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc. All of these post-completion validations are encapsulated in the L{Options.validate} method. This method can be called at any time by a client, and will always be called immediately after creating an C{Options} object from a command line and before exporting an C{Options} object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__ """ ############## # Constructor ############## def __init__(self, argumentList=None, argumentString=None, validate=True): """ Initializes an options object.
If you initialize the object without passing either C{argumentList} or C{argumentString}, the object will be empty and will be invalid until it is filled in properly. No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. The argument list is assumed to be a list of arguments, not including the name of the command, something like C{sys.argv[1:]}. If you pass C{sys.argv} instead, things are not going to work. The argument string will be parsed into an argument list by the L{util.splitCommandLine} function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to C{sys.argv[1:]}, just like C{argumentList}. Unless the C{validate} argument is C{False}, the L{Options.validate} method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in command line, so an exception might still be raised. @note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback script. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid command line arguments. @param argumentList: Command line for a program. @type argumentList: List of arguments, i.e. C{sys.argv} @param argumentString: Command line for a program. @type argumentString: String, i.e. "cback --verbose stage store" @param validate: Validate the command line after parsing it. @type validate: Boolean true/false. @raise getopt.GetoptError: If the command-line arguments could not be parsed. @raise ValueError: If the command-line arguments are invalid. 
""" self._help = False self._version = False self._verbose = False self._quiet = False self._logfile = None self._owner = None self._mode = None self._output = False self._debug = False self._stacktrace = False self._diagnostics = False self._verifyOnly = False self._ignoreWarnings = False self._sourceDir = None self._s3BucketUrl = None if argumentList is not None and argumentString is not None: raise ValueError("Use either argumentList or argumentString, but not both.") if argumentString is not None: argumentList = splitCommandLine(argumentString) if argumentList is not None: self._parseArgumentList(argumentList) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return self.buildArgumentString(validate=False) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.help != other.help: if self.help < other.help: return -1 else: return 1 if self.version != other.version: if self.version < other.version: return -1 else: return 1 if self.verbose != other.verbose: if self.verbose < other.verbose: return -1 else: return 1 if self.quiet != other.quiet: if self.quiet < other.quiet: return -1 else: return 1 if self.logfile != other.logfile: if self.logfile < other.logfile: return -1 else: return 1 if self.owner != other.owner: if self.owner < other.owner: return -1 else: return 1 if self.mode != other.mode: if self.mode < other.mode: return -1 else: return 1 if self.output != other.output: if self.output < other.output: return -1 else: return 1 if self.debug != other.debug: if self.debug < other.debug: return -1 else: return 1 if self.stacktrace != other.stacktrace: if self.stacktrace < other.stacktrace: return -1 else: return 1 if self.diagnostics != other.diagnostics: if self.diagnostics < other.diagnostics: return -1 else: return 1 if self.verifyOnly != other.verifyOnly: if self.verifyOnly < other.verifyOnly: return -1 else: return 1 if self.ignoreWarnings != other.ignoreWarnings: if self.ignoreWarnings < other.ignoreWarnings: return -1 else: return 1 if self.sourceDir != other.sourceDir: if self.sourceDir < other.sourceDir: return -1 else: return 1 if self.s3BucketUrl != other.s3BucketUrl: if self.s3BucketUrl < other.s3BucketUrl: return -1 else: return 1 return 0 ############# # Properties ############# def _setHelp(self, value): """ Property target used to set the help flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._help = True else: self._help = False def _getHelp(self): """ Property target used to get the help flag. """ return self._help def _setVersion(self, value): """ Property target used to set the version flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._version = True else: self._version = False def _getVersion(self): """ Property target used to get the version flag. """ return self._version def _setVerbose(self, value): """ Property target used to set the verbose flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._verbose = True else: self._verbose = False def _getVerbose(self): """ Property target used to get the verbose flag. """ return self._verbose def _setQuiet(self, value): """ Property target used to set the quiet flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._quiet = True else: self._quiet = False def _getQuiet(self): """ Property target used to get the quiet flag. """ return self._quiet def _setLogfile(self, value): """ Property target used to set the logfile parameter. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The logfile parameter must be a non-empty string.") self._logfile = encodePath(value) def _getLogfile(self): """ Property target used to get the logfile parameter. """ return self._logfile def _setOwner(self, value): """ Property target used to set the owner parameter. If not C{None}, the owner must be a C{(user,group)} tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple. @raise ValueError: If the value is not valid. """ if value is None: self._owner = None else: if isinstance(value, str): raise ValueError("Must specify user and group tuple for owner parameter.") if len(value) != 2: raise ValueError("Must specify user and group tuple for owner parameter.") if len(value[0]) < 1 or len(value[1]) < 1: raise ValueError("User and group tuple values must be non-empty strings.") self._owner = (value[0], value[1]) def _getOwner(self): """ Property target used to get the owner parameter. The parameter is a tuple of C{(user, group)}. 
""" return self._owner def _setMode(self, value): """ Property target used to set the mode parameter. """ if value is None: self._mode = None else: try: if isinstance(value, str): value = int(value, 8) else: value = int(value) except TypeError: raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") if value < 0: raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") self._mode = value def _getMode(self): """ Property target used to get the mode parameter. """ return self._mode def _setOutput(self, value): """ Property target used to set the output flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._output = True else: self._output = False def _getOutput(self): """ Property target used to get the output flag. """ return self._output def _setDebug(self, value): """ Property target used to set the debug flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._debug = True else: self._debug = False def _getDebug(self): """ Property target used to get the debug flag. """ return self._debug def _setStacktrace(self, value): """ Property target used to set the stacktrace flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._stacktrace = True else: self._stacktrace = False def _getStacktrace(self): """ Property target used to get the stacktrace flag. """ return self._stacktrace def _setDiagnostics(self, value): """ Property target used to set the diagnostics flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._diagnostics = True else: self._diagnostics = False def _getDiagnostics(self): """ Property target used to get the diagnostics flag. """ return self._diagnostics def _setVerifyOnly(self, value): """ Property target used to set the verifyOnly flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._verifyOnly = True else: self._verifyOnly = False def _getVerifyOnly(self): """ Property target used to get the verifyOnly flag. """ return self._verifyOnly def _setIgnoreWarnings(self, value): """ Property target used to set the ignoreWarnings flag. No validations, but we normalize the value to C{True} or C{False}. """ if value: self._ignoreWarnings = True else: self._ignoreWarnings = False def _getIgnoreWarnings(self): """ Property target used to get the ignoreWarnings flag. """ return self._ignoreWarnings def _setSourceDir(self, value): """ Property target used to set the sourceDir parameter. """ if value is not None: if len(value) < 1: raise ValueError("The sourceDir parameter must be a non-empty string.") self._sourceDir = value def _getSourceDir(self): """ Property target used to get the sourceDir parameter. """ return self._sourceDir def _setS3BucketUrl(self, value): """ Property target used to set the s3BucketUrl parameter. """ if value is not None: if len(value) < 1: raise ValueError("The s3BucketUrl parameter must be a non-empty string.") self._s3BucketUrl = value def _getS3BucketUrl(self): """ Property target used to get the s3BucketUrl parameter. 
""" return self._s3BucketUrl help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnly (C{-v,--verifyOnly}) flag.") ignoreWarnings = property(_getIgnoreWarnings, _setIgnoreWarnings, None, "Command-line ignoreWarnings (C{-w,--ignoreWarnings}) flag.") sourceDir = property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, source of sync.") s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3BucketUrl, target of sync.") ################## # Utility methods ################## def validate(self): """ Validates command-line options represented by the object. Unless C{--help} or C{--version} are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality. 
@note: The command line format is specified by the L{_usage} function. Call L{_usage} to see a usage statement for the cback script. @raise ValueError: If one of the validations fails. """ if not self.help and not self.version and not self.diagnostics: if self.sourceDir is None or self.s3BucketUrl is None: raise ValueError("Source directory and S3 bucket URL are both required.") def buildArgumentList(self, validate=True): """ Extracts options into a list of command line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the C{argumentList} parameter. Unlike L{buildArgumentString}, string arguments are not quoted here, because there is no need for it. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. @param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: List representation of command-line arguments. @raise ValueError: If options within the object are invalid. 
""" if validate: self.validate() argumentList = [] if self._help: argumentList.append("--help") if self.version: argumentList.append("--version") if self.verbose: argumentList.append("--verbose") if self.quiet: argumentList.append("--quiet") if self.logfile is not None: argumentList.append("--logfile") argumentList.append(self.logfile) if self.owner is not None: argumentList.append("--owner") argumentList.append("%s:%s" % (self.owner[0], self.owner[1])) if self.mode is not None: argumentList.append("--mode") argumentList.append("%o" % self.mode) if self.output: argumentList.append("--output") if self.debug: argumentList.append("--debug") if self.stacktrace: argumentList.append("--stack") if self.diagnostics: argumentList.append("--diagnostics") if self.verifyOnly: argumentList.append("--verifyOnly") if self.ignoreWarnings: argumentList.append("--ignoreWarnings") if self.sourceDir is not None: argumentList.append(self.sourceDir) if self.s3BucketUrl is not None: argumentList.append(self.s3BucketUrl) return argumentList def buildArgumentString(self, validate=True): """ Extracts options into a string of command-line arguments. The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes (C{"}). The resulting string will be suitable for passing back to the constructor in the C{argumentString} parameter. Unless the C{validate} parameter is C{False}, the L{Options.validate} method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to extract an invalid command line. 
@param validate: Validate the options before extracting the command line. @type validate: Boolean true/false. @return: String representation of command-line arguments. @raise ValueError: If options within the object are invalid. """ if validate: self.validate() argumentString = "" if self._help: argumentString += "--help " if self.version: argumentString += "--version " if self.verbose: argumentString += "--verbose " if self.quiet: argumentString += "--quiet " if self.logfile is not None: argumentString += "--logfile \"%s\" " % self.logfile if self.owner is not None: argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1]) if self.mode is not None: argumentString += "--mode %o " % self.mode if self.output: argumentString += "--output " if self.debug: argumentString += "--debug " if self.stacktrace: argumentString += "--stack " if self.diagnostics: argumentString += "--diagnostics " if self.verifyOnly: argumentString += "--verifyOnly " if self.ignoreWarnings: argumentString += "--ignoreWarnings " if self.sourceDir is not None: argumentString += "\"%s\" " % self.sourceDir if self.s3BucketUrl is not None: argumentString += "\"%s\" " % self.s3BucketUrl return argumentString def _parseArgumentList(self, argumentList): """ Internal method to parse a list of command-line arguments. Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the L{validate} method). For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. C{-l} and a C{--logfile}) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used. @param argumentList: List of arguments to a command. 
@type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]} @raise ValueError: If the argument list cannot be successfully parsed. """ switches = { } opts, remaining = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES) for o, a in opts: # push the switches into a hash switches[o] = a if switches.has_key("-h") or switches.has_key("--help"): self.help = True if switches.has_key("-V") or switches.has_key("--version"): self.version = True if switches.has_key("-b") or switches.has_key("--verbose"): self.verbose = True if switches.has_key("-q") or switches.has_key("--quiet"): self.quiet = True if switches.has_key("-l"): self.logfile = switches["-l"] if switches.has_key("--logfile"): self.logfile = switches["--logfile"] if switches.has_key("-o"): self.owner = switches["-o"].split(":", 1) if switches.has_key("--owner"): self.owner = switches["--owner"].split(":", 1) if switches.has_key("-m"): self.mode = switches["-m"] if switches.has_key("--mode"): self.mode = switches["--mode"] if switches.has_key("-O") or switches.has_key("--output"): self.output = True if switches.has_key("-d") or switches.has_key("--debug"): self.debug = True if switches.has_key("-s") or switches.has_key("--stack"): self.stacktrace = True if switches.has_key("-D") or switches.has_key("--diagnostics"): self.diagnostics = True if switches.has_key("-v") or switches.has_key("--verifyOnly"): self.verifyOnly = True if switches.has_key("-w") or switches.has_key("--ignoreWarnings"): self.ignoreWarnings = True try: (self.sourceDir, self.s3BucketUrl) = remaining except ValueError: pass ####################################################################### # Public functions ####################################################################### ################# # cli() function ################# def cli(): """ Implements the command-line interface for the C{cback-amazons3-sync} script. Essentially, this is the "main routine" for the cback-amazons3-sync script. 
It does all of the argument processing for the script, and then also implements the tool functionality. This function looks pretty similar to C{CedarBackup2.cli.cli()}. It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication. A different error code is returned for each type of failure: - C{1}: The Python interpreter version is < 2.7 - C{2}: Error processing command-line arguments - C{3}: Error configuring logging - C{5}: Backup was interrupted with a CTRL-C or similar - C{6}: Error executing other parts of the script @note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively. @return: Error code as described above. """ try: if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]: sys.stderr.write("Python 2 version 2.7 or greater required.\n") return 1 except: # sys.version_info isn't available before 2.0 sys.stderr.write("Python 2 version 2.7 or greater required.\n") return 1 try: options = Options(argumentList=sys.argv[1:]) except Exception, e: _usage() sys.stderr.write(" *** Error: %s\n" % e) return 2 if options.help: _usage() return 0 if options.version: _version() return 0 if options.diagnostics: _diagnostics() return 0 if options.stacktrace: logfile = setupLogging(options) else: try: logfile = setupLogging(options) except Exception as e: sys.stderr.write("Error setting up logging: %s\n" % e) return 3 logger.info("Cedar Backup Amazon S3 sync run started.") logger.info("Options were [%s]", options) logger.info("Logfile is [%s]", logfile) Diagnostics().logDiagnostics(method=logger.info) if options.stacktrace: _executeAction(options) else: try: _executeAction(options) except KeyboardInterrupt: logger.error("Backup interrupted.") logger.info("Cedar Backup Amazon S3 sync run completed with status 5.") return 5 except Exception, e: logger.error("Error executing backup: %s", e)
logger.info("Cedar Backup Amazon S3 sync run completed with status 6.") return 6 logger.info("Cedar Backup Amazon S3 sync run completed with status 0.") return 0 ####################################################################### # Utility functions ####################################################################### #################### # _usage() function #################### def _usage(fd=sys.stderr): """ Prints usage information for the cback-amazons3-sync script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Usage: cback-amazons3-sync [switches] sourceDir s3BucketUrl\n") fd.write("\n") fd.write(" Cedar Backup Amazon S3 sync tool.\n") fd.write("\n") fd.write(" This Cedar Backup utility synchronizes a local directory to an Amazon S3\n") fd.write(" bucket. After the sync is complete, a validation step is taken. An\n") fd.write(" error is reported if the contents of the bucket do not match the\n") fd.write(" source directory, or if the indicated size for any file differs.\n") fd.write(" This tool is a wrapper over the AWS CLI command-line tool.\n") fd.write("\n") fd.write(" The following arguments are required:\n") fd.write("\n") fd.write(" sourceDir The local source directory on disk (must exist)\n") fd.write(" s3BucketUrl The URL to the target Amazon S3 bucket\n") fd.write("\n") fd.write(" The following switches are accepted:\n") fd.write("\n") fd.write(" -h, --help Display this usage/help listing\n") fd.write(" -V, --version Display version information\n") fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") fd.write(" -q, --quiet Run quietly (display no output to the screen)\n") fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) fd.write(" -m, --mode Octal logfile permissions 
mode (default: %o)\n" % DEFAULT_MODE) fd.write(" -O, --output Record some sub-command (i.e. aws) output to the log\n") fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") fd.write(" -s, --stack Dump Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width! fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n") fd.write(" -v, --verifyOnly Only verify the S3 bucket contents, do not make changes\n") fd.write(" -w, --ignoreWarnings Ignore warnings about problematic filename encodings\n") fd.write("\n") fd.write(" Typical usage would be something like:\n") fd.write("\n") fd.write(" cback-amazons3-sync /home/myuser s3://example.com-backup/myuser\n") fd.write("\n") fd.write(" This will sync the contents of /home/myuser into the indicated bucket.\n") fd.write("\n") ###################### # _version() function ###################### def _version(fd=sys.stdout): """ Prints version information for the cback-amazons3-sync script. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. """ fd.write("\n") fd.write(" Cedar Backup Amazon S3 sync tool.\n") fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) fd.write("\n") fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) fd.write(" See CREDITS for a list of included code and other contributors.\n") fd.write(" This is free software; there is NO warranty. See the\n") fd.write(" GNU General Public License version 2 for copying conditions.\n") fd.write("\n") fd.write(" Use the --help option for usage information.\n") fd.write("\n") ########################## # _diagnostics() function ########################## def _diagnostics(fd=sys.stdout): """ Prints runtime diagnostics information. @param fd: File descriptor used to print information. @note: The C{fd} is used rather than C{print} to facilitate unit testing. 
""" fd.write("\n") fd.write("Diagnostics:\n") fd.write("\n") Diagnostics().printDiagnostics(fd=fd, prefix=" ") fd.write("\n") ############################ # _executeAction() function ############################ def _executeAction(options): """ Implements the guts of the cback-amazons3-sync tool. @param options: Program command-line options. @type options: Options object. @raise Exception: Under many generic error conditions """ sourceFiles = _buildSourceFiles(options.sourceDir) if not options.ignoreWarnings: _checkSourceFiles(options.sourceDir, sourceFiles) if not options.verifyOnly: _synchronizeBucket(options.sourceDir, options.s3BucketUrl) _verifyBucketContents(options.sourceDir, sourceFiles, options.s3BucketUrl) ################################ # _buildSourceFiles() function ################################ def _buildSourceFiles(sourceDir): """ Build a list of files in a source directory @param sourceDir: Local source directory @return: FilesystemList with contents of source directory """ if not os.path.isdir(sourceDir): raise ValueError("Source directory does not exist on disk.") sourceFiles = FilesystemList() sourceFiles.addDirContents(sourceDir) return sourceFiles ############################### # _checkSourceFiles() function ############################### def _checkSourceFiles(sourceDir, sourceFiles): """ Check source files, trying to guess which ones will have encoding problems. 
@param sourceDir: Local source directory @param sourceFiles: List of files in the source directory @raise ValueError: If a problem file is found @see U{http://opensourcehacker.com/2011/09/16/fix-linux-filename-encodings-with-python/} @see U{http://serverfault.com/questions/82821/how-to-tell-the-language-encoding-of-a-filename-on-linux} @see U{http://randysofia.com/2014/06/06/aws-cli-and-your-locale/} """ with warnings.catch_warnings(): warnings.simplefilter("ignore") # So we don't print unicode warnings from comparisons encoding = Diagnostics().encoding failed = False for entry in sourceFiles: result = chardet.detect(entry) source = entry.decode(result["encoding"]) try: target = source.encode(encoding) if source != target: logger.error("Inconsistent encoding for [%s]: got %s, but need %s", entry, result["encoding"], encoding) failed = True except UnicodeEncodeError: logger.error("Inconsistent encoding for [%s]: got %s, but need %s", entry, result["encoding"], encoding) failed = True if not failed: logger.info("Completed checking source filename encoding (no problems found).") else: logger.error("Some filenames have inconsistent encodings and will likely cause sync problems.") logger.error("You may be able to fix this by setting a more sensible locale in your environment.") logger.error("Alternately, you can rename the problem files to be valid in the indicated locale.") logger.error("To ignore this warning and proceed anyway, use --ignoreWarnings") raise ValueError("Some filenames have inconsistent encodings and will likely cause sync problems.") ################################ # _synchronizeBucket() function ################################ def _synchronizeBucket(sourceDir, s3BucketUrl): """ Synchronize a local directory to an Amazon S3 bucket. 
@param sourceDir: Local source directory @param s3BucketUrl: Target S3 bucket URL """ logger.info("Synchronizing local source directory up to Amazon S3.") args = [ "s3", "sync", sourceDir, s3BucketUrl, "--delete", "--recursive", ] result = executeCommand(AWS_COMMAND, args, returnOutput=False)[0] if result != 0: raise IOError("Error [%d] calling AWS CLI synchronize bucket." % result) ################################### # _verifyBucketContents() function ################################### def _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl): """ Verify that a source directory is equivalent to an Amazon S3 bucket. @param sourceDir: Local source directory @param sourceFiles: Filesystem list containing contents of source directory @param s3BucketUrl: Target S3 bucket URL """ # As of this writing, the documentation for the S3 API that we're using # below says that up to 1000 elements at a time are returned, and that we # have to manually handle pagination by looking for the IsTruncated element. # However, in practice, this is not true. I have been testing with # "aws-cli/1.4.4 Python/2.7.3 Linux/3.2.0-4-686-pae", installed through PIP. # No matter how many items exist in my bucket and prefix, I get back a # single JSON result. I've tested with buckets containing nearly 6000 # elements. # # If I turn on debugging, it's clear that underneath, something in the API # is executing multiple list-object requests against AWS, and stitching # results together to give me back the final JSON result. The debug output # clearly includes multiple requests, and each XML response (except for the # final one) contains an IsTruncated element set to true. # # This feature is not mentioned in the official changelog for any of the # releases going back to 1.0.0. It appears to happen in the botocore # library, but I'll admit I can't actually find the code that implements it. # For now, all I can do is rely on this behavior and hope that the # documentation is out-of-date. 
I'm not going to write code that tries to # parse out IsTruncated if I can't actually test that code. (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1) query = "Contents[].{Key: Key, Size: Size}" args = [ "s3api", "list-objects", "--bucket", bucket, "--prefix", prefix, "--query", query, ] (result, data) = executeCommand(AWS_COMMAND, args, returnOutput=True) if result != 0: raise IOError("Error [%d] calling AWS CLI verify bucket contents." % result) contents = { } for entry in json.loads("".join(data)): key = entry["Key"].replace(prefix, "") size = long(entry["Size"]) contents[key] = size failed = False for entry in sourceFiles: if os.path.isfile(entry): key = entry.replace(sourceDir, "") size = long(os.stat(entry).st_size) if key not in contents: logger.error("File was apparently not uploaded: [%s]", entry) failed = True else: if size != contents[key]: logger.error("File size differs [%s]: expected %s bytes but got %s bytes", entry, size, contents[key]) failed = True if not failed: logger.info("Completed verifying Amazon S3 bucket contents (no problems found).") else: logger.error("There were differences between source directory and target S3 bucket.") raise ValueError("There were differences between source directory and target S3 bucket.") ######################################################################## # Main routine ######################################################################## if __name__ == "__main__": sys.exit(cli()) CedarBackup2-2.26.5/CedarBackup2/customize.py0000664000175000017500000000661112560016766022415 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2010 Kenneth J. Pronovici. # All rights reserved. 
# # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements customized behavior. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Implements customized behavior. Some behaviors need to vary when packaged for certain platforms. For instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible utilities called wodim and genisoimage. I want there to be one single place where Cedar Backup is patched for Debian, rather than having to maintain a variety of patches in different places. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import logging ######################################################################## # Module-wide constants and variables ######################################################################## logger = logging.getLogger("CedarBackup2.log.customize") PLATFORM = "standard" #PLATFORM = "debian" DEBIAN_CDRECORD = "/usr/bin/wodim" DEBIAN_MKISOFS = "/usr/bin/genisoimage" ####################################################################### # Public functions ####################################################################### ################################ # customizeOverrides() function ################################ def customizeOverrides(config, platform=PLATFORM): """ Modify command overrides based on the configured platform. On some platforms, we want to add command overrides to configuration. Each override will only be added if the configuration does not already contain an override with the same name. That way, the user still has a way to choose their own version of the command if they want. @param config: Configuration to modify @param platform: Platform that is in use """ if platform == "debian": logger.info("Overriding cdrecord for Debian platform: %s", DEBIAN_CDRECORD) config.options.addOverride("cdrecord", DEBIAN_CDRECORD) logger.info("Overriding mkisofs for Debian platform: %s", DEBIAN_MKISOFS) config.options.addOverride("mkisofs", DEBIAN_MKISOFS) CedarBackup2-2.26.5/CedarBackup2/config.py0000664000175000017500000067633612642024401021643 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
# S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Provides configuration-related objects. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Provides configuration-related objects. Summary ======= Cedar Backup stores all of its configuration in an XML document typically called C{cback.conf}. The standard location for this document is in C{/etc}, but users can specify a different location if they want to. The C{Config} class is a Python object representation of a Cedar Backup XML configuration file. The representation is two-way: XML data can be used to create a C{Config} object, and then changes to the object can be propagated back to disk. A C{Config} object can even be used to create a configuration file from scratch programmatically. The C{Config} class is intended to be the only Python-language interface to Cedar Backup configuration on disk. 
Cedar Backup will use the class as its internal representation of configuration, and applications external to Cedar Backup itself (such as a hypothetical third-party configuration tool written in Python or a third party extension module) should also use the class when they need to read and write configuration files. Backwards Compatibility ======================= The configuration file format has changed between Cedar Backup 1.x and Cedar Backup 2.x. Any Cedar Backup 1.x configuration file is also a valid Cedar Backup 2.x configuration file. However, it doesn't work to go the other direction, as the 2.x configuration file contains additional configuration that is not accepted by older versions of the software. XML Configuration Structure =========================== A C{Config} object can either be created "empty", or can be created based on XML input (either in the form of a string or read in from a file on disk). Generally speaking, the XML input I{must} result in a C{Config} object which passes the validations laid out below in the I{Validation} section. An XML configuration file is composed of eight sections: - I{reference}: specifies reference information about the file (author, revision, etc) - I{extensions}: specifies mappings to Cedar Backup extensions (external code) - I{options}: specifies global configuration options - I{peers}: specifies the set of peers in a master's backup pool - I{collect}: specifies configuration related to the collect action - I{stage}: specifies configuration related to the stage action - I{store}: specifies configuration related to the store action - I{purge}: specifies configuration related to the purge action Each section is represented by a class in this module, and then the overall C{Config} class is a composition of the various other classes. Any configuration section that is missing in the XML document (or has not been filled into an "empty" document) will just be set to C{None} in the object representation. 
The same goes for individual fields within each configuration section. Keep in mind that the document might not be completely valid if some sections or fields aren't filled in - but that won't matter until validation takes place (see the I{Validation} section below). Unicode vs. String Data ======================= By default, all string data that comes out of XML documents in Python is unicode data (i.e. C{u"whatever"}). This is fine for many things, but when it comes to filesystem paths, it can cause us some problems. We really want strings to be encoded in the filesystem encoding rather than being unicode. So, most elements in configuration which represent filesystem paths are converted to plain strings using L{util.encodePath}. The main exception is the various C{absoluteExcludePath} and C{relativeExcludePath} lists. These are I{not} converted, because they are generally only used for filtering, not for filesystem operations. Validation ========== There are two main levels of validation in the C{Config} class and its children. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's C{property} functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a C{ValueError} exception when making assignments to configuration class fields. The second level of validation is post-completion validation. Certain validations don't make sense until a document is fully "complete". We don't want these validations to apply all of the time, because it would make building up a document from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc. All of these post-completion validations are encapsulated in the L{Config.validate} method. 
This method can be called at any time by a client, and will always be called immediately after creating a C{Config} object from XML data and before exporting a C{Config} object to XML. This way, we get decent ease-of-use but we also don't accept or emit invalid configuration files. The L{Config.validate} implementation actually takes two passes to completely validate a configuration document. The first pass at validation is to ensure that the proper sections are filled into the document. There are default requirements, but the caller has the opportunity to override these defaults. The second pass at validation ensures that any filled-in section contains valid data. Any section which is not set to C{None} is validated according to the rules for that section (see below). I{Reference Validations} No validations. I{Extensions Validations} The list of actions may be either C{None} or an empty list C{[]} if desired. Each extended action must include a name, a module and a function. Then, an extended action must include either an index or dependency information. Which one is required depends on which order mode is configured. I{Options Validations} All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose. I{Peers Validations} Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section. I{Collect Validations} The target directory must be filled in. The collect mode, archive mode and ignore file are all optional. 
The list of absolute paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent C{CollectConfig} object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the C{CollectConfig} object to make the complete list for a given directory. I{Stage Validations} The target directory must be filled in. There must be at least one peer (remote or local) between the two lists of peers. A list with no entries can be either C{None} or an empty list C{[]} if desired. If a set of peers is provided, this configuration completely overrides configuration in the peers configuration section, and the same validations apply. I{Store Validations} The device type and drive speed are optional, and all other values are required (missing booleans will be set to defaults, which is OK). The image writer functionality in the C{writer} module is supposed to be able to handle a device speed of C{None}. Any caller which needs a "real" (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. I{Purge Validations} The list of purge directories may be either C{None} or an empty list C{[]} if desired. All purge directories must contain a path and a retain days value. 
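The field-level validation described above (C{property} setters that raise C{ValueError} on bad assignment) can be sketched in isolation. This is only an illustrative sketch of the pattern, runnable on its own; the C{RetainConfig} and C{retainDays} names are invented for the example and are not Cedar Backup's actual classes:

```python
# Illustrative sketch of field-level validation via property setters.
# RetainConfig/retainDays are invented names, not Cedar Backup's own.

class RetainConfig(object):
   """Holds a purge-style 'retain days' value, validated on assignment."""

   def __init__(self, retainDays=None):
      self._retainDays = None
      self.retainDays = retainDays  # assignment goes through the setter below

   def _setRetainDays(self, value):
      """Property target: accepts None or a non-negative integer value."""
      if value is None:
         self._retainDays = None
         return
      try:
         days = int(value)
      except (TypeError, ValueError):
         raise ValueError("Retain days must be an integer.")
      if days < 0:
         raise ValueError("Retain days must be non-negative.")
      self._retainDays = days

   def _getRetainDays(self):
      """Property target used to get the retain days value."""
      return self._retainDays

   retainDays = property(_getRetainDays, _setRetainDays, None, "Days to retain.")
```

With this shape, an invalid assignment such as C{config.retainDays = -1} raises C{ValueError} immediately, which is the behavior the docstring describes for configuration fields.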
@sort: ActionDependencies, ActionHook, PreActionHook, PostActionHook, ExtendedAction, CommandOverride, CollectFile, CollectDir, PurgeDir, LocalPeer, RemotePeer, ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig, CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config, DEFAULT_DEVICE_TYPE, DEFAULT_MEDIA_TYPE, VALID_DEVICE_TYPES, VALID_MEDIA_TYPES, VALID_COLLECT_MODES, VALID_ARCHIVE_MODES, VALID_ORDER_MODES @var DEFAULT_DEVICE_TYPE: The default device type. @var DEFAULT_MEDIA_TYPE: The default media type. @var VALID_DEVICE_TYPES: List of valid device types. @var VALID_MEDIA_TYPES: List of valid media types. @var VALID_COLLECT_MODES: List of valid collect modes. @var VALID_COMPRESS_MODES: List of valid compress modes. @var VALID_ARCHIVE_MODES: List of valid archive modes. @var VALID_ORDER_MODES: List of valid extension order modes. @author: Kenneth J. Pronovici """ ######################################################################## # Imported modules ######################################################################## # System modules import os import re import logging # Cedar Backup modules from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed from CedarBackup2.util import UnorderedList, AbsolutePathList, ObjectTypeList, parseCommaSeparatedString from CedarBackup2.util import RegexMatchList, RegexList, encodePath, checkUnique from CedarBackup2.util import convertSize, displayBytes, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild from CedarBackup2.xmlutil import readStringList, readString, readInteger, readBoolean from CedarBackup2.xmlutil import addContainerNode, addStringNode, addIntegerNode, addBooleanNode from CedarBackup2.xmlutil import createInputDom, createOutputDom, serializeDom ######################################################################## # Module-wide constants and variables 
######################################################################## logger = logging.getLogger("CedarBackup2.log.config") DEFAULT_DEVICE_TYPE = "cdwriter" DEFAULT_MEDIA_TYPE = "cdrw-74" VALID_DEVICE_TYPES = [ "cdwriter", "dvdwriter", ] VALID_CD_MEDIA_TYPES = [ "cdr-74", "cdrw-74", "cdr-80", "cdrw-80", ] VALID_DVD_MEDIA_TYPES = [ "dvd+r", "dvd+rw", ] VALID_MEDIA_TYPES = VALID_CD_MEDIA_TYPES + VALID_DVD_MEDIA_TYPES VALID_COLLECT_MODES = [ "daily", "weekly", "incr", ] VALID_ARCHIVE_MODES = [ "tar", "targz", "tarbz2", ] VALID_COMPRESS_MODES = [ "none", "gzip", "bzip2", ] VALID_ORDER_MODES = [ "index", "dependency", ] VALID_BLANK_MODES = [ "daily", "weekly", ] VALID_BYTE_UNITS = [ UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, ] VALID_FAILURE_MODES = [ "none", "all", "daily", "weekly", ] REWRITABLE_MEDIA_TYPES = [ "cdrw-74", "cdrw-80", "dvd+rw", ] ACTION_NAME_REGEX = r"^[a-z0-9]*$" ######################################################################## # ByteQuantity class definition ######################################################################## class ByteQuantity(object): """ Class representing a byte quantity. A byte quantity has both a quantity and a byte-related unit. Units are maintained using the constants from util.py. If no units are provided, C{UNIT_BYTES} is assumed. The quantity is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.) Even though the quantity is maintained as a string, the string must be a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative quantity of bytes in this context. 
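As a rough sketch of the idea described above (illustrative only; the real implementation is the C{ByteQuantity} class in this module, with its unit constants in util.py): the quantity is kept as a string so it round-trips losslessly, and a byte count is derived from the units on demand:

```python
# Illustrative sketch of the byte-quantity idea; the names below are
# invented, not the real ByteQuantity class or its UNIT_* constants.

KNOWN_UNITS = {"B": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

class SimpleByteQuantity(object):
   """Quantity stored as a string, with bytes derived from the units."""

   def __init__(self, quantity, units="B"):
      if float(quantity) < 0.0:  # non-numeric strings also raise ValueError here
         raise ValueError("Quantity cannot be negative.")
      if units not in KNOWN_UNITS:
         raise ValueError("Units must be one of %s" % sorted(KNOWN_UNITS))
      self.quantity = str(quantity)  # kept as a string, like the real class
      self.units = units

   @property
   def bytes(self):
      """Quantity converted to a floating point number of bytes."""
      return float(self.quantity) * KNOWN_UNITS[self.units]
```

This is how a configured value like "2.5 GB" can be stored exactly as written, while comparisons and display happen on the derived C{bytes} value.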
@sort: __init__, __repr__, __str__, __cmp__, quantity, units, bytes """ def __init__(self, quantity=None, units=None): """ Constructor for the C{ByteQuantity} class. @param quantity: Quantity of bytes, something interpretable as a float @param units: Unit of bytes, one of VALID_BYTE_UNITS @raise ValueError: If one of the values is invalid. """ self._quantity = None self._units = None self.quantity = quantity self.units = units def __repr__(self): """ Official string representation for class instance. """ return "ByteQuantity(%s, %s)" % (self.quantity, self.units) def __str__(self): """ Informal string representation for class instance. """ return "%s" % displayBytes(self.bytes) def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 elif isinstance(other, ByteQuantity): if self.bytes != other.bytes: if self.bytes < other.bytes: return -1 else: return 1 return 0 else: return self.__cmp__(ByteQuantity(other, UNIT_BYTES)) # will fail if other can't be converted to float def _setQuantity(self, value): """ Property target used to set the quantity. The value must be interpretable as a float if it is not None. @raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is None: self._quantity = None else: try: floatValue = float(value) # allow integer, float, string, etc. except: raise ValueError("Quantity must be interpretable as a float") if floatValue < 0.0: raise ValueError("Quantity cannot be negative.") self._quantity = str(value) # keep around string def _getQuantity(self): """ Property target used to get the quantity. """ return self._quantity def _setUnits(self, value): """ Property target used to set the units value. 
If not C{None}, the units value must be one of the values in L{VALID_BYTE_UNITS}. @raise ValueError: If the value is not valid. """ if value is None: self._units = UNIT_BYTES else: if value not in VALID_BYTE_UNITS: raise ValueError("Units value must be one of %s." % VALID_BYTE_UNITS) self._units = value def _getUnits(self): """ Property target used to get the units value. """ return self._units def _getBytes(self): """ Property target used to return the byte quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned. """ if self.quantity is not None and self.units is not None: return convertSize(self.quantity, self.units, UNIT_BYTES) return 0.0 quantity = property(_getQuantity, _setQuantity, None, doc="Byte quantity, as a string") units = property(_getUnits, _setUnits, None, doc="Units for byte quantity, for instance UNIT_BYTES") bytes = property(_getBytes, None, None, doc="Byte quantity, as a floating point number.") ######################################################################## # ActionDependencies class definition ######################################################################## class ActionDependencies(object): """ Class representing dependencies associated with an extended action. Execution ordering for extended actions is done in one of two ways: either by using index values (lower index gets run first) or by having the extended action specify dependencies in terms of other named actions. This class encapsulates the dependency information for an extended action. The following restrictions exist on data in this class: - Any action name must be a non-empty string matching C{ACTION_NAME_REGEX} @sort: __init__, __repr__, __str__, __cmp__, beforeList, afterList """ def __init__(self, beforeList=None, afterList=None): """ Constructor for the C{ActionDependencies} class. 
@param beforeList: List of named actions that this action must be run before @param afterList: List of named actions that this action must be run after @raise ValueError: If one of the values is invalid. """ self._beforeList = None self._afterList = None self.beforeList = beforeList self.afterList = afterList def __repr__(self): """ Official string representation for class instance. """ return "ActionDependencies(%s, %s)" % (self.beforeList, self.afterList) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.beforeList != other.beforeList: if self.beforeList < other.beforeList: return -1 else: return 1 if self.afterList != other.afterList: if self.afterList < other.afterList: return -1 else: return 1 return 0 def _setBeforeList(self, value): """ Property target used to set the "run before" list. Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. @raise ValueError: If the value does not match the regular expression. """ if value is None: self._beforeList = None else: try: saved = self._beforeList self._beforeList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._beforeList.extend(value) except Exception, e: self._beforeList = saved raise e def _getBeforeList(self): """ Property target used to get the "run before" list. """ return self._beforeList def _setAfterList(self, value): """ Property target used to set the "run after" list. Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. @raise ValueError: If the value does not match the regular expression. 
""" if value is None: self._afterList = None else: try: saved = self._afterList self._afterList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._afterList.extend(value) except Exception, e: self._afterList = saved raise e def _getAfterList(self): """ Property target used to get the "run after" list. """ return self._afterList beforeList = property(_getBeforeList, _setBeforeList, None, "List of named actions that this action must be run before.") afterList = property(_getAfterList, _setAfterList, None, "List of named actions that this action must be run after.") ######################################################################## # ActionHook class definition ######################################################################## class ActionHook(object): """ Class representing a hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. The following restrictions exist on data in this class: - The action name must be a non-empty string matching C{ACTION_NAME_REGEX} - The shell command must be a non-empty string. The internal C{before} and C{after} instance variables are always set to False in this parent class. @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{ActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ self._action = None self._command = None self._before = False self._after = False self.action = action self.command = command def __repr__(self): """ Official string representation for class instance. """ return "ActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.action != other.action: if self.action < other.action: return -1 else: return 1 if self.command != other.command: if self.command < other.command: return -1 else: return 1 if self.before != other.before: if self.before < other.before: return -1 else: return 1 if self.after != other.after: if self.after < other.after: return -1 else: return 1 return 0 def _setAction(self, value): """ Property target used to set the action name. The value must be a non-empty string if it is not C{None}. It must also consist only of lower-case letters and digits. @raise ValueError: If the value is an empty string. """ pattern = re.compile(ACTION_NAME_REGEX) if value is not None: if len(value) < 1: raise ValueError("The action name must be a non-empty string.") if not pattern.search(value): raise ValueError("The action name must consist of only lower-case letters and digits.") self._action = value def _getAction(self): """ Property target used to get the action name. """ return self._action def _setCommand(self, value): """ Property target used to set the command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The command must be a non-empty string.") self._command = value def _getCommand(self): """ Property target used to get the command. """ return self._command def _getBefore(self): """ Property target used to get the before flag. """ return self._before def _getAfter(self): """ Property target used to get the after flag. 
""" return self._after action = property(_getAction, _setAction, None, "Action this hook is associated with.") command = property(_getCommand, _setCommand, None, "Shell command to execute.") before = property(_getBefore, None, None, "Indicates whether command should be executed before action.") after = property(_getAfter, None, None, "Indicates whether command should be executed after action.") class PreActionHook(ActionHook): """ Class representing a pre-action hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a pre-action hook is executed before the named action. The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The shell command must be a non-empty string. The internal C{before} instance variable is always set to True in this class. @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{PreActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ ActionHook.__init__(self, action, command) self._before = True def __repr__(self): """ Official string representation for class instance. """ return "PreActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) class PostActionHook(ActionHook): """ Class representing a pre-action hook associated with an action. A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a post-action hook is executed after the named action. The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The shell command must be a non-empty string. 
The internal C{before} instance variable is always set to True in this class. @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after """ def __init__(self, action=None, command=None): """ Constructor for the C{PostActionHook} class. @param action: Action this hook is associated with @param command: Shell command to execute @raise ValueError: If one of the values is invalid. """ ActionHook.__init__(self, action, command) self._after = True def __repr__(self): """ Official string representation for class instance. """ return "PostActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after) ######################################################################## # BlankBehavior class definition ######################################################################## class BlankBehavior(object): """ Class representing optimized store-action media blanking behavior. The following restrictions exist on data in this class: - The blanking mode must be a one of the values in L{VALID_BLANK_MODES} - The blanking factor must be a positive floating point number @sort: __init__, __repr__, __str__, __cmp__, blankMode, blankFactor """ def __init__(self, blankMode=None, blankFactor=None): """ Constructor for the C{BlankBehavior} class. @param blankMode: Blanking mode @param blankFactor: Blanking factor @raise ValueError: If one of the values is invalid. """ self._blankMode = None self._blankFactor = None self.blankMode = blankMode self.blankFactor = blankFactor def __repr__(self): """ Official string representation for class instance. """ return "BlankBehavior(%s, %s)" % (self.blankMode, self.blankFactor) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
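The C{before} and C{after} flags on the hook classes exist so that a dispatcher can select every matching hook for a given phase of an action. A minimal illustrative sketch of such selection; the C{Hook} class below only mirrors the four attributes described above and is not CedarBackup2's actual C{ActionHook}:

```python
# Hedged sketch: selecting all hooks that apply to an action and phase.
# The Hook class only mirrors the action/command/before/after attributes.
class Hook(object):
    def __init__(self, action, command, before=False, after=False):
        self.action = action
        self.command = command
        self.before = before
        self.after = after

def matching_commands(hooks, action, phase):
    """Return commands for every hook on an action in phase 'pre' or 'post'."""
    wanted = "before" if phase == "pre" else "after"
    return [h.command for h in hooks if h.action == action and getattr(h, wanted)]
```

Note that this returns all matching hooks, not just the first, so multiple hooks configured for the same action each get executed.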
""" if other is None: return 1 if self.blankMode != other.blankMode: if self.blankMode < other.blankMode: return -1 else: return 1 if self.blankFactor != other.blankFactor: if self.blankFactor < other.blankFactor: return -1 else: return 1 return 0 def _setBlankMode(self, value): """ Property target used to set the blanking mode. The value must be one of L{VALID_BLANK_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_BLANK_MODES: raise ValueError("Blanking mode must be one of %s." % VALID_BLANK_MODES) self._blankMode = value def _getBlankMode(self): """ Property target used to get the blanking mode. """ return self._blankMode def _setBlankFactor(self, value): """ Property target used to set the blanking factor. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value is not a valid floating point number @raise ValueError: If the value is less than zero """ if value is not None: if len(value) < 1: raise ValueError("Blanking factor must be a non-empty string.") floatValue = float(value) if floatValue < 0.0: raise ValueError("Blanking factor cannot be negative.") self._blankFactor = value # keep around string def _getBlankFactor(self): """ Property target used to get the blanking factor. """ return self._blankFactor blankMode = property(_getBlankMode, _setBlankMode, None, "Blanking mode") blankFactor = property(_getBlankFactor, _setBlankFactor, None, "Blanking factor") ######################################################################## # ExtendedAction class definition ######################################################################## class ExtendedAction(object): """ Class representing an extended action. 
Essentially, an extended action needs to allow the following to happen:: exec("from %s import %s" % (module, function)) exec("%s(action, configPath)" % function) The following restrictions exist on data in this class: - The action name must be a non-empty string consisting of lower-case letters and digits. - The module must be a non-empty string and a valid Python identifier. - The function must be a non-empty string and a valid Python identifier. - If set, the index must be a positive integer. - If set, the dependencies attribute must be an C{ActionDependencies} object. @sort: __init__, __repr__, __str__, __cmp__, name, module, function, index, dependencies """ def __init__(self, name=None, module=None, function=None, index=None, dependencies=None): """ Constructor for the C{ExtendedAction} class. @param name: Name of the extended action @param module: Name of the module containing the extended action function @param function: Name of the extended action function @param index: Index of action, used for execution ordering @param dependencies: Dependencies for action, used for execution ordering @raise ValueError: If one of the values is invalid. """ self._name = None self._module = None self._function = None self._index = None self._dependencies = None self.name = name self.module = module self.function = function self.index = index self.dependencies = dependencies def __repr__(self): """ Official string representation for class instance. """ return "ExtendedAction(%s, %s, %s, %s, %s)" % (self.name, self.module, self.function, self.index, self.dependencies) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
""" if other is None: return 1 if self.name != other.name: if self.name < other.name: return -1 else: return 1 if self.module != other.module: if self.module < other.module: return -1 else: return 1 if self.function != other.function: if self.function < other.function: return -1 else: return 1 if self.index != other.index: if self.index < other.index: return -1 else: return 1 if self.dependencies != other.dependencies: if self.dependencies < other.dependencies: return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the action name. The value must be a non-empty string if it is not C{None}. It must also consist only of lower-case letters and digits. @raise ValueError: If the value is an empty string. """ pattern = re.compile(ACTION_NAME_REGEX) if value is not None: if len(value) < 1: raise ValueError("The action name must be a non-empty string.") if not pattern.search(value): raise ValueError("The action name must consist of only lower-case letters and digits.") self._name = value def _getName(self): """ Property target used to get the action name. """ return self._name def _setModule(self, value): """ Property target used to set the module name. The value must be a non-empty string if it is not C{None}. It must also be a valid Python identifier. @raise ValueError: If the value is an empty string. """ pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)(\.[A-Za-z_][A-Za-z0-9_]*)*$") if value is not None: if len(value) < 1: raise ValueError("The module name must be a non-empty string.") if not pattern.search(value): raise ValueError("The module name must be a valid Python identifier.") self._module = value def _getModule(self): """ Property target used to get the module name. """ return self._module def _setFunction(self, value): """ Property target used to set the function name. The value must be a non-empty string if it is not C{None}. It must also be a valid Python identifier. @raise ValueError: If the value is an empty string. 
""" pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$") if value is not None: if len(value) < 1: raise ValueError("The function name must be a non-empty string.") if not pattern.search(value): raise ValueError("The function name must be a valid Python identifier.") self._function = value def _getFunction(self): """ Property target used to get the function name. """ return self._function def _setIndex(self, value): """ Property target used to set the action index. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._index = None else: try: value = int(value) except TypeError: raise ValueError("Action index value must be an integer >= 0.") if value < 0: raise ValueError("Action index value must be an integer >= 0.") self._index = value def _getIndex(self): """ Property target used to get the action index. """ return self._index def _setDependencies(self, value): """ Property target used to set the action dependencies information. If not C{None}, the value must be a C{ActionDependecies} object. @raise ValueError: If the value is not a C{ActionDependencies} object. """ if value is None: self._dependencies = None else: if not isinstance(value, ActionDependencies): raise ValueError("Value must be a C{ActionDependencies} object.") self._dependencies = value def _getDependencies(self): """ Property target used to get action dependencies information. 
""" return self._dependencies name = property(_getName, _setName, None, "Name of the extended action.") module = property(_getModule, _setModule, None, "Name of the module containing the extended action function.") function = property(_getFunction, _setFunction, None, "Name of the extended action function.") index = property(_getIndex, _setIndex, None, "Index of action, used for execution ordering.") dependencies = property(_getDependencies, _setDependencies, None, "Dependencies for action, used for execution ordering.") ######################################################################## # CommandOverride class definition ######################################################################## class CommandOverride(object): """ Class representing a piece of Cedar Backup command override configuration. The following restrictions exist on data in this class: - The absolute path must be absolute @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, command, absolutePath """ def __init__(self, command=None, absolutePath=None): """ Constructor for the C{CommandOverride} class. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. @raise ValueError: If one of the values is invalid. """ self._command = None self._absolutePath = None self.command = command self.absolutePath = absolutePath def __repr__(self): """ Official string representation for class instance. """ return "CommandOverride(%s, %s)" % (self.command, self.absolutePath) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.command != other.command: if self.command < other.command: return -1 else: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 return 0 def _setCommand(self, value): """ Property target used to set the command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The command must be a non-empty string.") self._command = value def _getCommand(self): """ Property target used to get the command. """ return self._command def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.") absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overrridden command.") ######################################################################## # CollectFile class definition ######################################################################## class CollectFile(object): """ Class representing a Cedar Backup collect file. The following restrictions exist on data in this class: - Absolute paths must be absolute - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. 
@sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, archiveMode """ def __init__(self, absolutePath=None, collectMode=None, archiveMode=None): """ Constructor for the C{CollectFile} class. @param absolutePath: Absolute path of the file to collect. @param collectMode: Overridden collect mode for this file. @param archiveMode: Overridden archive mode for this file. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._collectMode = None self._archiveMode = None self.absolutePath = absolutePath self.collectMode = collectMode self.archiveMode = archiveMode def __repr__(self): """ Official string representation for class instance. """ return "CollectFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.archiveMode != other.archiveMode: if self.archiveMode < other.archiveMode: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. 
""" return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the file to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this file.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this file.") ######################################################################## # CollectDir class definition ######################################################################## class CollectDir(object): """ Class representing a Cedar Backup collect directory. The following restrictions exist on data in this class: - Absolute paths must be absolute - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. - The ignore file must be a non-empty string. 
For the C{absoluteExcludePaths} list, validation is accomplished through the L{util.AbsolutePathList} list implementation that overrides common list methods and transparently does the absolute path validation for us. @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, absoluteExcludePaths, relativeExcludePaths, excludePatterns """ def __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None): """ Constructor for the C{CollectDir} class. @param absolutePath: Absolute path of the directory to collect. @param collectMode: Overridden collect mode for this directory. @param archiveMode: Overridden archive mode for this directory. @param ignoreFile: Overridden ignore file name for this directory. @param linkDepth: Maximum depth at which soft links should be followed. @param dereference: Whether to dereference links that are followed. @param recursionLevel: Recursion level to use for recursive directory collection. @param absoluteExcludePaths: List of absolute paths to exclude. @param relativeExcludePaths: List of relative paths to exclude. @param excludePatterns: List of regular expression patterns to exclude. @raise ValueError: If one of the values is invalid.
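The three exclusion lists accepted by the constructor (absolute paths, relative paths, and regular-expression patterns) combine when deciding whether a discovered file should be skipped. A rough sketch of that combination under assumed matching semantics; the actual filtering lives elsewhere in Cedar Backup:

```python
import os
import re

# Hedged sketch of how the three exclusion lists might combine; the matching
# semantics here are assumptions, not Cedar Backup's actual filesystem code.
def is_excluded(path, base, absolute_excludes, relative_excludes, patterns):
    """Return True if an absolute path under base should be skipped."""
    if path in (absolute_excludes or []):
        return True
    relative = os.path.relpath(path, base)
    if relative in (relative_excludes or []):
        return True
    return any(re.search(p, path) for p in (patterns or []))
```

The design mirrors the class's validation split: absolute and relative paths match exactly, while patterns are applied as regular expressions against the full path.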
""" self._absolutePath = None self._collectMode = None self._archiveMode = None self._ignoreFile = None self._linkDepth = None self._dereference = None self._recursionLevel = None self._absoluteExcludePaths = None self._relativeExcludePaths = None self._excludePatterns = None self.absolutePath = absolutePath self.collectMode = collectMode self.archiveMode = archiveMode self.ignoreFile = ignoreFile self.linkDepth = linkDepth self.dereference = dereference self.recursionLevel = recursionLevel self.absoluteExcludePaths = absoluteExcludePaths self.relativeExcludePaths = relativeExcludePaths self.excludePatterns = excludePatterns def __repr__(self): """ Official string representation for class instance. """ return "CollectDir(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode, self.ignoreFile, self.absoluteExcludePaths, self.relativeExcludePaths, self.excludePatterns, self.linkDepth, self.dereference, self.recursionLevel) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.archiveMode != other.archiveMode: if self.archiveMode < other.archiveMode: return -1 else: return 1 if self.ignoreFile != other.ignoreFile: if self.ignoreFile < other.ignoreFile: return -1 else: return 1 if self.linkDepth != other.linkDepth: if self.linkDepth < other.linkDepth: return -1 else: return 1 if self.dereference != other.dereference: if self.dereference < other.dereference: return -1 else: return 1 if self.recursionLevel != other.recursionLevel: if self.recursionLevel < other.recursionLevel: return -1 else: return 1 if self.absoluteExcludePaths != other.absoluteExcludePaths: if self.absoluteExcludePaths < other.absoluteExcludePaths: return -1 else: return 1 if self.relativeExcludePaths != other.relativeExcludePaths: if self.relativeExcludePaths < other.relativeExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Not an absolute path: [%s]" % value) self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path. """ return self._absolutePath def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = value def _getIgnoreFile(self): """ Property target used to get the ignore file. """ return self._ignoreFile def _setLinkDepth(self, value): """ Property target used to set the link depth. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._linkDepth = None else: try: value = int(value) except TypeError: raise ValueError("Link depth value must be an integer >= 0.") if value < 0: raise ValueError("Link depth value must be an integer >= 0.") self._linkDepth = value def _getLinkDepth(self): """ Property target used to get the action linkDepth. """ return self._linkDepth def _setDereference(self, value): """ Property target used to set the dereference flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._dereference = True else: self._dereference = False def _getDereference(self): """ Property target used to get the dereference flag. """ return self._dereference def _setRecursionLevel(self, value): """ Property target used to set the recursionLevel. The value must be an integer. @raise ValueError: If the value is not valid. """ if value is None: self._recursionLevel = None else: try: value = int(value) except TypeError: raise ValueError("Recusion level value must be an integer.") self._recursionLevel = value def _getRecursionLevel(self): """ Property target used to get the action recursionLevel. """ return self._recursionLevel def _setAbsoluteExcludePaths(self, value): """ Property target used to set the absolute exclude paths list. Either the value must be C{None} or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. """ if value is None: self._absoluteExcludePaths = None else: try: saved = self._absoluteExcludePaths self._absoluteExcludePaths = AbsolutePathList() self._absoluteExcludePaths.extend(value) except Exception, e: self._absoluteExcludePaths = saved raise e def _getAbsoluteExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._absoluteExcludePaths def _setRelativeExcludePaths(self, value): """ Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._relativeExcludePaths = None else: try: saved = self._relativeExcludePaths self._relativeExcludePaths = UnorderedList() self._relativeExcludePaths.extend(value) except Exception, e: self._relativeExcludePaths = saved raise e def _getRelativeExcludePaths(self): """ Property target used to get the relative exclude paths list. 
""" return self._relativeExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the directory to collect.") collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this directory.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this directory.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, doc="Overridden ignore file name for this directory.") linkDepth = property(_getLinkDepth, _setLinkDepth, None, doc="Maximum at which soft links should be followed.") dereference = property(_getDereference, _setDereference, None, doc="Whether to dereference links that are followed.") recursionLevel = property(_getRecursionLevel, _setRecursionLevel, None, "Recursion level to use for recursive directory collection") absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") ######################################################################## # PurgeDir class definition ######################################################################## class PurgeDir(object): """ Class representing a Cedar Backup purge directory. 
The following restrictions exist on data in this class: - The absolute path must be an absolute path - The retain days value must be an integer >= 0. @sort: __init__, __repr__, __str__, __cmp__, absolutePath, retainDays """ def __init__(self, absolutePath=None, retainDays=None): """ Constructor for the C{PurgeDir} class. @param absolutePath: Absolute path of the directory to be purged. @param retainDays: Number of days content within directory should be retained. @raise ValueError: If one of the values is invalid. """ self._absolutePath = None self._retainDays = None self.absolutePath = absolutePath self.retainDays = retainDays def __repr__(self): """ Official string representation for class instance. """ return "PurgeDir(%s, %s)" % (self.absolutePath, self.retainDays) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.absolutePath != other.absolutePath: if self.absolutePath < other.absolutePath: return -1 else: return 1 if self.retainDays != other.retainDays: if self.retainDays < other.retainDays: return -1 else: return 1 return 0 def _setAbsolutePath(self, value): """ Property target used to set the absolute path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Purge directory path must be an absolute path.") self._absolutePath = encodePath(value) def _getAbsolutePath(self): """ Property target used to get the absolute path.
""" return self._absolutePath def _setRetainDays(self, value): """ Property target used to set the retain days value. The value must be an integer >= 0. @raise ValueError: If the value is not valid. """ if value is None: self._retainDays = None else: try: value = int(value) except TypeError: raise ValueError("Retain days value must be an integer >= 0.") if value < 0: raise ValueError("Retain days value must be an integer >= 0.") self._retainDays = value def _getRetainDays(self): """ Property target used to get the absolute path. """ return self._retainDays absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, "Absolute path of directory to purge.") retainDays = property(_getRetainDays, _setRetainDays, None, "Number of days content within directory should be retained.") ######################################################################## # LocalPeer class definition ######################################################################## class LocalPeer(object): """ Class representing a Cedar Backup peer. The following restrictions exist on data in this class: - The peer name must be a non-empty string. - The collect directory must be an absolute path. - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, name, collectDir """ def __init__(self, name=None, collectDir=None, ignoreFailureMode=None): """ Constructor for the C{LocalPeer} class. @param name: Name of the peer, typically a valid hostname. @param collectDir: Collect directory to stage files from on peer. @param ignoreFailureMode: Ignore failure mode for peer. @raise ValueError: If one of the values is invalid. """ self._name = None self._collectDir = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.ignoreFailureMode = ignoreFailureMode def __repr__(self): """ Official string representation for class instance. 
""" return "LocalPeer(%s, %s, %s)" % (self.name, self.collectDir, self.ignoreFailureMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.name != other.name: if self.name < other.name: return -1 else: return 1 if self.collectDir != other.collectDir: if self.collectDir < other.collectDir: return -1 else: return 1 if self.ignoreFailureMode != other.ignoreFailureMode: if self.ignoreFailureMode < other.ignoreFailureMode: return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. """ return self._collectDir def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. 
""" if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. """ return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer, typically a valid hostname.") collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ######################################################################## # RemotePeer class definition ######################################################################## class RemotePeer(object): """ Class representing a Cedar Backup peer. The following restrictions exist on data in this class: - The peer name must be a non-empty string. - The collect directory must be an absolute path. - The remote user must be a non-empty string. - The rcp command must be a non-empty string. - The rsh command must be a non-empty string. - The cback command must be a non-empty string. - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX} - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}. @sort: __init__, __repr__, __str__, __cmp__, name, collectDir, remoteUser, rcpCommand """ def __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None): """ Constructor for the C{RemotePeer} class. @param name: Name of the peer, must be a valid hostname. @param collectDir: Collect directory to stage files from on peer. @param remoteUser: Name of backup user on remote peer. @param rcpCommand: Overridden rcp-compatible copy command for peer. @param rshCommand: Overridden rsh-compatible remote shell command for peer. 
@param cbackCommand: Overridden cback-compatible command to use on remote peer. @param managed: Indicates whether this is a managed peer. @param managedActions: Overridden set of actions that are managed on the peer. @param ignoreFailureMode: Ignore failure mode for peer. @raise ValueError: If one of the values is invalid. """ self._name = None self._collectDir = None self._remoteUser = None self._rcpCommand = None self._rshCommand = None self._cbackCommand = None self._managed = None self._managedActions = None self._ignoreFailureMode = None self.name = name self.collectDir = collectDir self.remoteUser = remoteUser self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.managed = managed self.managedActions = managedActions self.ignoreFailureMode = ignoreFailureMode def __repr__(self): """ Official string representation for class instance. """ return "RemotePeer(%s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.name, self.collectDir, self.remoteUser, self.rcpCommand, self.rshCommand, self.cbackCommand, self.managed, self.managedActions, self.ignoreFailureMode) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.name != other.name: if self.name < other.name: return -1 else: return 1 if self.collectDir != other.collectDir: if self.collectDir < other.collectDir: return -1 else: return 1 if self.remoteUser != other.remoteUser: if self.remoteUser < other.remoteUser: return -1 else: return 1 if self.rcpCommand != other.rcpCommand: if self.rcpCommand < other.rcpCommand: return -1 else: return 1 if self.rshCommand != other.rshCommand: if self.rshCommand < other.rshCommand: return -1 else: return 1 if self.cbackCommand != other.cbackCommand: if self.cbackCommand < other.cbackCommand: return -1 else: return 1 if self.managed != other.managed: if self.managed < other.managed: return -1 else: return 1 if self.managedActions != other.managedActions: if self.managedActions < other.managedActions: return -1 else: return 1 if self.ignoreFailureMode != other.ignoreFailureMode: if self.ignoreFailureMode < other.ignoreFailureMode: return -1 else: return 1 return 0 def _setName(self, value): """ Property target used to set the peer name. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The peer name must be a non-empty string.") self._name = value def _getName(self): """ Property target used to get the peer name. """ return self._name def _setCollectDir(self, value): """ Property target used to set the collect directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Collect directory must be an absolute path.") self._collectDir = encodePath(value) def _getCollectDir(self): """ Property target used to get the collect directory. 
""" return self._collectDir def _setRemoteUser(self, value): """ Property target used to set the remote user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The remote user must be a non-empty string.") self._remoteUser = value def _getRemoteUser(self): """ Property target used to get the remote user. """ return self._remoteUser def _setRcpCommand(self, value): """ Property target used to set the rcp command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rcp command must be a non-empty string.") self._rcpCommand = value def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target used to set the rsh command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rsh command must be a non-empty string.") self._rshCommand = value def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target used to set the cback command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The cback command must be a non-empty string.") self._cbackCommand = value def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setManaged(self, value): """ Property target used to set the managed flag. No validations, but we normalize the value to C{True} or C{False}. 
""" if value: self._managed = True else: self._managed = False def _getManaged(self): """ Property target used to get the managed flag. """ return self._managed def _setManagedActions(self, value): """ Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._managedActions = None else: try: saved = self._managedActions self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._managedActions.extend(value) except Exception, e: self._managedActions = saved raise e def _getManagedActions(self): """ Property target used to get the managed actions list. """ return self._managedActions def _setIgnoreFailureMode(self, value): """ Property target used to set the ignoreFailure mode. If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_FAILURE_MODES: raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) self._ignoreFailureMode = value def _getIgnoreFailureMode(self): """ Property target used to get the ignoreFailure mode. 
""" return self._ignoreFailureMode name = property(_getName, _setName, None, "Name of the peer, must be a valid hostname.") collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of backup user on remote peer.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Overridden rcp-compatible copy command for peer.") rshCommand = property(_getRshCommand, _setRshCommand, None, "Overridden rsh-compatible remote shell command for peer.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Overridden cback-compatible command to use on remote peer.") managed = property(_getManaged, _setManaged, None, "Indicates whether this is a managed peer.") managedActions = property(_getManagedActions, _setManagedActions, None, "Overridden set of actions that are managed on the peer.") ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") ######################################################################## # ReferenceConfig class definition ######################################################################## class ReferenceConfig(object): """ Class representing a Cedar Backup reference configuration. The reference information is just used for saving off metadata about configuration and exists mostly for backwards-compatibility with Cedar Backup 1.x. @sort: __init__, __repr__, __str__, __cmp__, author, revision, description, generator """ def __init__(self, author=None, revision=None, description=None, generator=None): """ Constructor for the C{ReferenceConfig} class. @param author: Author of the configuration file. @param revision: Revision of the configuration file. @param description: Description of the configuration file. @param generator: Tool that generated the configuration file. 
""" self._author = None self._revision = None self._description = None self._generator = None self.author = author self.revision = revision self.description = description self.generator = generator def __repr__(self): """ Official string representation for class instance. """ return "ReferenceConfig(%s, %s, %s, %s)" % (self.author, self.revision, self.description, self.generator) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.author != other.author: if self.author < other.author: return -1 else: return 1 if self.revision != other.revision: if self.revision < other.revision: return -1 else: return 1 if self.description != other.description: if self.description < other.description: return -1 else: return 1 if self.generator != other.generator: if self.generator < other.generator: return -1 else: return 1 return 0 def _setAuthor(self, value): """ Property target used to set the author value. No validations. """ self._author = value def _getAuthor(self): """ Property target used to get the author value. """ return self._author def _setRevision(self, value): """ Property target used to set the revision value. No validations. """ self._revision = value def _getRevision(self): """ Property target used to get the revision value. """ return self._revision def _setDescription(self, value): """ Property target used to set the description value. No validations. """ self._description = value def _getDescription(self): """ Property target used to get the description value. """ return self._description def _setGenerator(self, value): """ Property target used to set the generator value. No validations. 
""" self._generator = value def _getGenerator(self): """ Property target used to get the generator value. """ return self._generator author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.") revision = property(_getRevision, _setRevision, None, "Revision of the configuration file.") description = property(_getDescription, _setDescription, None, "Description of the configuration file.") generator = property(_getGenerator, _setGenerator, None, "Tool that generated the configuration file.") ######################################################################## # ExtensionsConfig class definition ######################################################################## class ExtensionsConfig(object): """ Class representing Cedar Backup extensions configuration. Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. For instance, a hypothetical third party might write extension code to collect database repository data. If they write a properly-formatted extension function, they can use the extension configuration to map a command-line Cedar Backup action (i.e. "database") to their function. The following restrictions exist on data in this class: - If set, the order mode must be one of the values in C{VALID_ORDER_MODES} - The actions list must be a list of C{ExtendedAction} objects. @sort: __init__, __repr__, __str__, __cmp__, orderMode, actions """ def __init__(self, actions=None, orderMode=None): """ Constructor for the C{ExtensionsConfig} class. @param actions: List of extended actions """ self._orderMode = None self._actions = None self.orderMode = orderMode self.actions = actions def __repr__(self): """ Official string representation for class instance. """ return "ExtensionsConfig(%s, %s)" % (self.orderMode, self.actions) def __str__(self): """ Informal string representation for class instance. 
""" return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.orderMode != other.orderMode: if self.orderMode < other.orderMode: return -1 else: return 1 if self.actions != other.actions: if self.actions < other.actions: return -1 else: return 1 return 0 def _setOrderMode(self, value): """ Property target used to set the order mode. The value must be one of L{VALID_ORDER_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ORDER_MODES: raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES) self._orderMode = value def _getOrderMode(self): """ Property target used to get the order mode. """ return self._orderMode def _setActions(self, value): """ Property target used to set the actions list. Either the value must be C{None} or each element must be an C{ExtendedAction}. @raise ValueError: If the value is not a C{ExtendedAction} """ if value is None: self._actions = None else: try: saved = self._actions self._actions = ObjectTypeList(ExtendedAction, "ExtendedAction") self._actions.extend(value) except Exception, e: self._actions = saved raise e def _getActions(self): """ Property target used to get the actions list. """ return self._actions orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions, to control execution ordering.") actions = property(_getActions, _setActions, None, "List of extended actions.") ######################################################################## # OptionsConfig class definition ######################################################################## class OptionsConfig(object): """ Class representing a Cedar Backup global options configuration. 
The options section is used to store global configuration options and defaults that can be applied to other sections. The following restrictions exist on data in this class: - The working directory must be an absolute path. - The starting day must be a day of the week in English, i.e. C{"monday"}, C{"tuesday"}, etc. - All of the other values must be non-empty strings if they are set to something other than C{None}. - The overrides list must be a list of C{CommandOverride} objects. - The hooks list must be a list of C{ActionHook} objects. - The cback command must be a non-empty string. - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX} @sort: __init__, __repr__, __str__, __cmp__, startingDay, workingDir, backupUser, backupGroup, rcpCommand, rshCommand, overrides """ def __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None): """ Constructor for the C{OptionsConfig} class. @param startingDay: Day that starts the week. @param workingDir: Working (temporary) directory to use for backups. @param backupUser: Effective user that backups should run as. @param backupGroup: Effective group that backups should run as. @param rcpCommand: Default rcp-compatible copy command for staging. @param rshCommand: Default rsh-compatible command to use for remote shells. @param cbackCommand: Default cback-compatible command to use on managed remote peers. @param overrides: List of configured command path overrides, if any. @param hooks: List of configured pre- and post-action hooks. @param managedActions: Default set of actions that are managed on remote peers. @raise ValueError: If one of the values is invalid. 
""" self._startingDay = None self._workingDir = None self._backupUser = None self._backupGroup = None self._rcpCommand = None self._rshCommand = None self._cbackCommand = None self._overrides = None self._hooks = None self._managedActions = None self.startingDay = startingDay self.workingDir = workingDir self.backupUser = backupUser self.backupGroup = backupGroup self.rcpCommand = rcpCommand self.rshCommand = rshCommand self.cbackCommand = cbackCommand self.overrides = overrides self.hooks = hooks self.managedActions = managedActions def __repr__(self): """ Official string representation for class instance. """ return "OptionsConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.startingDay, self.workingDir, self.backupUser, self.backupGroup, self.rcpCommand, self.overrides, self.hooks, self.rshCommand, self.cbackCommand, self.managedActions) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.startingDay != other.startingDay: if self.startingDay < other.startingDay: return -1 else: return 1 if self.workingDir != other.workingDir: if self.workingDir < other.workingDir: return -1 else: return 1 if self.backupUser != other.backupUser: if self.backupUser < other.backupUser: return -1 else: return 1 if self.backupGroup != other.backupGroup: if self.backupGroup < other.backupGroup: return -1 else: return 1 if self.rcpCommand != other.rcpCommand: if self.rcpCommand < other.rcpCommand: return -1 else: return 1 if self.rshCommand != other.rshCommand: if self.rshCommand < other.rshCommand: return -1 else: return 1 if self.cbackCommand != other.cbackCommand: if self.cbackCommand < other.cbackCommand: return -1 else: return 1 if self.overrides != other.overrides: if self.overrides < other.overrides: return -1 else: return 1 if self.hooks != other.hooks: if self.hooks < other.hooks: return -1 else: return 1 if self.managedActions != other.managedActions: if self.managedActions < other.managedActions: return -1 else: return 1 return 0 def addOverride(self, command, absolutePath): """ If no override currently exists for the command, add one. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. """ override = CommandOverride(command, absolutePath) if self.overrides is None: self.overrides = [ override, ] else: exists = False for obj in self.overrides: if obj.command == override.command: exists = True break if not exists: self.overrides.append(override) def replaceOverride(self, command, absolutePath): """ If override currently exists for the command, replace it; otherwise add it. @param command: Name of command to be overridden. @param absolutePath: Absolute path of the overrridden command. 
""" override = CommandOverride(command, absolutePath) if self.overrides is None: self.overrides = [ override, ] else: exists = False for obj in self.overrides: if obj.command == override.command: exists = True obj.absolutePath = override.absolutePath break if not exists: self.overrides.append(override) def _setStartingDay(self, value): """ Property target used to set the starting day. If it is not C{None}, the value must be a valid English day of the week, one of C{"monday"}, C{"tuesday"}, C{"wednesday"}, etc. @raise ValueError: If the value is not a valid day of the week. """ if value is not None: if value not in ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ]: raise ValueError("Starting day must be an English day of the week, i.e. \"monday\".") self._startingDay = value def _getStartingDay(self): """ Property target used to get the starting day. """ return self._startingDay def _setWorkingDir(self, value): """ Property target used to set the working directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Working directory must be an absolute path.") self._workingDir = encodePath(value) def _getWorkingDir(self): """ Property target used to get the working directory. """ return self._workingDir def _setBackupUser(self, value): """ Property target used to set the backup user. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("Backup user must be a non-empty string.") self._backupUser = value def _getBackupUser(self): """ Property target used to get the backup user. 
""" return self._backupUser def _setBackupGroup(self, value): """ Property target used to set the backup group. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("Backup group must be a non-empty string.") self._backupGroup = value def _getBackupGroup(self): """ Property target used to get the backup group. """ return self._backupGroup def _setRcpCommand(self, value): """ Property target used to set the rcp command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rcp command must be a non-empty string.") self._rcpCommand = value def _getRcpCommand(self): """ Property target used to get the rcp command. """ return self._rcpCommand def _setRshCommand(self, value): """ Property target used to set the rsh command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The rsh command must be a non-empty string.") self._rshCommand = value def _getRshCommand(self): """ Property target used to get the rsh command. """ return self._rshCommand def _setCbackCommand(self, value): """ Property target used to set the cback command. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. """ if value is not None: if len(value) < 1: raise ValueError("The cback command must be a non-empty string.") self._cbackCommand = value def _getCbackCommand(self): """ Property target used to get the cback command. """ return self._cbackCommand def _setOverrides(self, value): """ Property target used to set the command path overrides list. Either the value must be C{None} or each element must be a C{CommandOverride}. 
@raise ValueError: If the value is not a C{CommandOverride} """ if value is None: self._overrides = None else: try: saved = self._overrides self._overrides = ObjectTypeList(CommandOverride, "CommandOverride") self._overrides.extend(value) except Exception, e: self._overrides = saved raise e def _getOverrides(self): """ Property target used to get the command path overrides list. """ return self._overrides def _setHooks(self, value): """ Property target used to set the pre- and post-action hooks list. Either the value must be C{None} or each element must be an C{ActionHook}. @raise ValueError: If the value is not an C{ActionHook} """ if value is None: self._hooks = None else: try: saved = self._hooks self._hooks = ObjectTypeList(ActionHook, "ActionHook") self._hooks.extend(value) except Exception, e: self._hooks = saved raise e def _getHooks(self): """ Property target used to get the pre- and post-action hooks list. """ return self._hooks def _setManagedActions(self, value): """ Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment. """ if value is None: self._managedActions = None else: try: saved = self._managedActions self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") self._managedActions.extend(value) except Exception, e: self._managedActions = saved raise e def _getManagedActions(self): """ Property target used to get the managed actions list.
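The list setters above (`_setOverrides`, `_setHooks`, `_setManagedActions`, and their peers in the other classes) all share the same save-and-restore idiom: remember the old list, build and populate a fresh validating list, and roll the attribute back if any element fails validation. A minimal sketch with an illustrative validating list (not one of the module's own list types):

```python
# Sketch of the save-and-restore idiom used by the list setters above.
class IntList(list):
    # Illustrative validating list: only accepts integers.
    def extend(self, values):
        for value in values:
            if not isinstance(value, int):
                raise ValueError("Element must be an integer.")
            self.append(value)

class Holder(object):
    def __init__(self):
        self._values = None

    def setValues(self, value):
        if value is None:
            self._values = None
        else:
            saved = self._values
            try:
                self._values = IntList()
                self._values.extend(value)
            except Exception:
                self._values = saved   # roll back to the previous list
                raise
```

The rollback guarantees the attribute is never left holding a half-populated list: either every element passed validation, or the previous value is restored before the exception propagates.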
""" return self._managedActions startingDay = property(_getStartingDay, _setStartingDay, None, "Day that starts the week.") workingDir = property(_getWorkingDir, _setWorkingDir, None, "Working (temporary) directory to use for backups.") backupUser = property(_getBackupUser, _setBackupUser, None, "Effective user that backups should run as.") backupGroup = property(_getBackupGroup, _setBackupGroup, None, "Effective group that backups should run as.") rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Default rcp-compatible copy command for staging.") rshCommand = property(_getRshCommand, _setRshCommand, None, "Default rsh-compatible command to use for remote shells.") cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Default cback-compatible command to use on managed remote peers.") overrides = property(_getOverrides, _setOverrides, None, "List of configured command path overrides, if any.") hooks = property(_getHooks, _setHooks, None, "List of configured pre- and post-action hooks.") managedActions = property(_getManagedActions, _setManagedActions, None, "Default set of actions that are managed on remote peers.") ######################################################################## # PeersConfig class definition ######################################################################## class PeersConfig(object): """ Class representing Cedar Backup global peer configuration. This section contains a list of local and remote peers in a master's backup pool. The section is optional. If a master does not define this section, then all peers are unmanaged, and the stage configuration section must explicitly list any peer that is to be staged. If this section is configured, then peers may be managed or unmanaged, and the stage section peer configuration (if any) completely overrides this configuration. 
   The following restrictions exist on data in this class:

      - The list of local peers must contain only C{LocalPeer} objects
      - The list of remote peers must contain only C{RemotePeer} objects

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, localPeers, remotePeers
   """

   def __init__(self, localPeers=None, remotePeers=None):
      """
      Constructor for the C{PeersConfig} class.

      @param localPeers: List of local peers.
      @param remotePeers: List of remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._localPeers = None
      self._remotePeers = None
      self.localPeers = localPeers
      self.remotePeers = remotePeers

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PeersConfig(%s, %s)" % (self.localPeers, self.remotePeers)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.localPeers != other.localPeers:
         if self.localPeers < other.localPeers:
            return -1
         else:
            return 1
      if self.remotePeers != other.remotePeers:
         if self.remotePeers < other.remotePeers:
            return -1
         else:
            return 1
      return 0

   def hasPeers(self):
      """
      Indicates whether any peers are filled into this object.
      @return: Boolean true if any local or remote peers are filled in, false otherwise.
      """
      return ((self.localPeers is not None and len(self.localPeers) > 0) or
              (self.remotePeers is not None and len(self.remotePeers) > 0))

   def _setLocalPeers(self, value):
      """
      Property target used to set the local peers list.
      Either the value must be C{None} or each element must be a C{LocalPeer}.
      @raise ValueError: If the value is not a C{LocalPeer}
""" if value is None: self._localPeers = None else: try: saved = self._localPeers self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") self._localPeers.extend(value) except Exception, e: self._localPeers = saved raise e def _getLocalPeers(self): """ Property target used to get the local peers list. """ return self._localPeers def _setRemotePeers(self, value): """ Property target used to set the remote peers list. Either the value must be C{None} or each element must be a C{RemotePeer}. @raise ValueError: If the value is not a C{RemotePeer} """ if value is None: self._remotePeers = None else: try: saved = self._remotePeers self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") self._remotePeers.extend(value) except Exception, e: self._remotePeers = saved raise e def _getRemotePeers(self): """ Property target used to get the remote peers list. """ return self._remotePeers localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.") ######################################################################## # CollectConfig class definition ######################################################################## class CollectConfig(object): """ Class representing a Cedar Backup collect configuration. The following restrictions exist on data in this class: - The target directory must be an absolute path. - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. - The ignore file must be a non-empty string. - Each of the paths in C{absoluteExcludePaths} must be an absolute path - The collect file list must be a list of C{CollectFile} objects. - The collect directory list must be a list of C{CollectDir} objects. 
   For the C{absoluteExcludePaths} list, validation is accomplished through the
   L{util.AbsolutePathList} list implementation that overrides common list
   methods and transparently does the absolute path validation for us.

   For the C{collectFiles} and C{collectDirs} lists, validation is accomplished
   through the L{util.ObjectTypeList} list implementation that overrides common
   list methods and transparently ensures that each element has an appropriate
   type.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, targetDir, collectMode,
          archiveMode, ignoreFile, absoluteExcludePaths, excludePatterns,
          collectFiles, collectDirs
   """

   def __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None,
                absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None):
      """
      Constructor for the C{CollectConfig} class.

      @param targetDir: Directory to collect files into.
      @param collectMode: Default collect mode.
      @param archiveMode: Default archive mode for collect files.
      @param ignoreFile: Default ignore file name.
      @param absoluteExcludePaths: List of absolute paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.
      @param collectFiles: List of collect files.
      @param collectDirs: List of collect directories.

      @raise ValueError: If one of the values is invalid.
      """
      self._targetDir = None
      self._collectMode = None
      self._archiveMode = None
      self._ignoreFile = None
      self._absoluteExcludePaths = None
      self._excludePatterns = None
      self._collectFiles = None
      self._collectDirs = None
      self.targetDir = targetDir
      self.collectMode = collectMode
      self.archiveMode = archiveMode
      self.ignoreFile = ignoreFile
      self.absoluteExcludePaths = absoluteExcludePaths
      self.excludePatterns = excludePatterns
      self.collectFiles = collectFiles
      self.collectDirs = collectDirs

   def __repr__(self):
      """
      Official string representation for class instance.
""" return "CollectConfig(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.targetDir, self.collectMode, self.archiveMode, self.ignoreFile, self.absoluteExcludePaths, self.excludePatterns, self.collectFiles, self.collectDirs) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.targetDir != other.targetDir: if self.targetDir < other.targetDir: return -1 else: return 1 if self.collectMode != other.collectMode: if self.collectMode < other.collectMode: return -1 else: return 1 if self.archiveMode != other.archiveMode: if self.archiveMode < other.archiveMode: return -1 else: return 1 if self.ignoreFile != other.ignoreFile: if self.ignoreFile < other.ignoreFile: return -1 else: return 1 if self.absoluteExcludePaths != other.absoluteExcludePaths: if self.absoluteExcludePaths < other.absoluteExcludePaths: return -1 else: return 1 if self.excludePatterns != other.excludePatterns: if self.excludePatterns < other.excludePatterns: return -1 else: return 1 if self.collectFiles != other.collectFiles: if self.collectFiles < other.collectFiles: return -1 else: return 1 if self.collectDirs != other.collectDirs: if self.collectDirs < other.collectDirs: return -1 else: return 1 return 0 def _setTargetDir(self, value): """ Property target used to set the target directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. 
""" if value is not None: if not os.path.isabs(value): raise ValueError("Target directory must be an absolute path.") self._targetDir = encodePath(value) def _getTargetDir(self): """ Property target used to get the target directory. """ return self._targetDir def _setCollectMode(self, value): """ Property target used to set the collect mode. If not C{None}, the mode must be one of L{VALID_COLLECT_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_COLLECT_MODES: raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) self._collectMode = value def _getCollectMode(self): """ Property target used to get the collect mode. """ return self._collectMode def _setArchiveMode(self, value): """ Property target used to set the archive mode. If not C{None}, the mode must be one of L{VALID_ARCHIVE_MODES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_ARCHIVE_MODES: raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES) self._archiveMode = value def _getArchiveMode(self): """ Property target used to get the archive mode. """ return self._archiveMode def _setIgnoreFile(self, value): """ Property target used to set the ignore file. The value must be a non-empty string if it is not C{None}. @raise ValueError: If the value is an empty string. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if len(value) < 1: raise ValueError("The ignore file must be a non-empty string.") self._ignoreFile = encodePath(value) def _getIgnoreFile(self): """ Property target used to get the ignore file. """ return self._ignoreFile def _setAbsoluteExcludePaths(self, value): """ Property target used to set the absolute exclude paths list. Either the value must be C{None} or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. 
""" if value is None: self._absoluteExcludePaths = None else: try: saved = self._absoluteExcludePaths self._absoluteExcludePaths = AbsolutePathList() self._absoluteExcludePaths.extend(value) except Exception, e: self._absoluteExcludePaths = saved raise e def _getAbsoluteExcludePaths(self): """ Property target used to get the absolute exclude paths list. """ return self._absoluteExcludePaths def _setExcludePatterns(self, value): """ Property target used to set the exclude patterns list. """ if value is None: self._excludePatterns = None else: try: saved = self._excludePatterns self._excludePatterns = RegexList() self._excludePatterns.extend(value) except Exception, e: self._excludePatterns = saved raise e def _getExcludePatterns(self): """ Property target used to get the exclude patterns list. """ return self._excludePatterns def _setCollectFiles(self, value): """ Property target used to set the collect files list. Either the value must be C{None} or each element must be a C{CollectFile}. @raise ValueError: If the value is not a C{CollectFile} """ if value is None: self._collectFiles = None else: try: saved = self._collectFiles self._collectFiles = ObjectTypeList(CollectFile, "CollectFile") self._collectFiles.extend(value) except Exception, e: self._collectFiles = saved raise e def _getCollectFiles(self): """ Property target used to get the collect files list. """ return self._collectFiles def _setCollectDirs(self, value): """ Property target used to set the collect dirs list. Either the value must be C{None} or each element must be a C{CollectDir}. @raise ValueError: If the value is not a C{CollectDir} """ if value is None: self._collectDirs = None else: try: saved = self._collectDirs self._collectDirs = ObjectTypeList(CollectDir, "CollectDir") self._collectDirs.extend(value) except Exception, e: self._collectDirs = saved raise e def _getCollectDirs(self): """ Property target used to get the collect dirs list. 
""" return self._collectDirs targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to collect files into.") collectMode = property(_getCollectMode, _setCollectMode, None, "Default collect mode.") archiveMode = property(_getArchiveMode, _setArchiveMode, None, "Default archive mode for collect files.") ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Default ignore file name.") absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.") excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expressions patterns to exclude.") collectFiles = property(_getCollectFiles, _setCollectFiles, None, "List of collect files.") collectDirs = property(_getCollectDirs, _setCollectDirs, None, "List of collect directories.") ######################################################################## # StageConfig class definition ######################################################################## class StageConfig(object): """ Class representing a Cedar Backup stage configuration. The following restrictions exist on data in this class: - The target directory must be an absolute path - The list of local peers must contain only C{LocalPeer} objects - The list of remote peers must contain only C{RemotePeer} objects @note: Lists within this class are "unordered" for equality comparisons. @sort: __init__, __repr__, __str__, __cmp__, targetDir, localPeers, remotePeers """ def __init__(self, targetDir=None, localPeers=None, remotePeers=None): """ Constructor for the C{StageConfig} class. @param targetDir: Directory to stage files into, by peer name. @param localPeers: List of local peers. @param remotePeers: List of remote peers. @raise ValueError: If one of the values is invalid. 
""" self._targetDir = None self._localPeers = None self._remotePeers = None self.targetDir = targetDir self.localPeers = localPeers self.remotePeers = remotePeers def __repr__(self): """ Official string representation for class instance. """ return "StageConfig(%s, %s, %s)" % (self.targetDir, self.localPeers, self.remotePeers) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. """ if other is None: return 1 if self.targetDir != other.targetDir: if self.targetDir < other.targetDir: return -1 else: return 1 if self.localPeers != other.localPeers: if self.localPeers < other.localPeers: return -1 else: return 1 if self.remotePeers != other.remotePeers: if self.remotePeers < other.remotePeers: return -1 else: return 1 return 0 def hasPeers(self): """ Indicates whether any peers are filled into this object. @return: Boolean true if any local or remote peers are filled in, false otherwise. """ return ((self.localPeers is not None and len(self.localPeers) > 0) or (self.remotePeers is not None and len(self.remotePeers) > 0)) def _setTargetDir(self, value): """ Property target used to set the target directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Target directory must be an absolute path.") self._targetDir = encodePath(value) def _getTargetDir(self): """ Property target used to get the target directory. 
""" return self._targetDir def _setLocalPeers(self, value): """ Property target used to set the local peers list. Either the value must be C{None} or each element must be a C{LocalPeer}. @raise ValueError: If the value is not an absolute path. """ if value is None: self._localPeers = None else: try: saved = self._localPeers self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") self._localPeers.extend(value) except Exception, e: self._localPeers = saved raise e def _getLocalPeers(self): """ Property target used to get the local peers list. """ return self._localPeers def _setRemotePeers(self, value): """ Property target used to set the remote peers list. Either the value must be C{None} or each element must be a C{RemotePeer}. @raise ValueError: If the value is not a C{RemotePeer} """ if value is None: self._remotePeers = None else: try: saved = self._remotePeers self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") self._remotePeers.extend(value) except Exception, e: self._remotePeers = saved raise e def _getRemotePeers(self): """ Property target used to get the remote peers list. """ return self._remotePeers targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to stage files into, by peer name.") localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.") ######################################################################## # StoreConfig class definition ######################################################################## class StoreConfig(object): """ Class representing a Cedar Backup store configuration. The following restrictions exist on data in this class: - The source directory must be an absolute path. - The media type must be one of the values in L{VALID_MEDIA_TYPES}. - The device type must be one of the values in L{VALID_DEVICE_TYPES}. - The device path must be an absolute path. 
      - The SCSI id, if provided, must be in the form specified by L{validateScsiId}.
      - The drive speed must be an integer >= 1
      - The blanking behavior must be a C{BlankBehavior} object
      - The refresh media delay must be an integer >= 0
      - The eject delay must be an integer >= 0

   Note that although the blanking factor must be a positive floating point
   number, it is stored as a string.  This is done so that we can losslessly go
   back and forth between XML and object representations of configuration.

   @sort: __init__, __repr__, __str__, __cmp__, sourceDir, mediaType, deviceType,
          devicePath, deviceScsiId, driveSpeed, checkData, checkMedia, warnMidnite,
          noEject, blankBehavior, refreshMediaDelay, ejectDelay
   """

   def __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None,
                deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False,
                noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None,
                ejectDelay=None):
      """
      Constructor for the C{StoreConfig} class.

      @param sourceDir: Directory whose contents should be written to media.
      @param mediaType: Type of the media (see notes above).
      @param deviceType: Type of the device (optional, see notes above).
      @param devicePath: Filesystem device name for writer device, i.e. C{/dev/cdrw}.
      @param deviceScsiId: SCSI id for writer device, i.e. C{[:]scsibus,target,lun}.
      @param driveSpeed: Speed of the drive, i.e. C{2} for 2x drive, etc.
      @param checkData: Whether resulting image should be validated.
      @param checkMedia: Whether media should be checked before being written to.
      @param warnMidnite: Whether to generate warnings for crossing midnite.
      @param noEject: Indicates that the writer device should not be ejected.
      @param blankBehavior: Controls optimized blanking behavior.
      @param refreshMediaDelay: Delay, in seconds, to add after refreshing media
      @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray

      @raise ValueError: If one of the values is invalid.
""" self._sourceDir = None self._mediaType = None self._deviceType = None self._devicePath = None self._deviceScsiId = None self._driveSpeed = None self._checkData = None self._checkMedia = None self._warnMidnite = None self._noEject = None self._blankBehavior = None self._refreshMediaDelay = None self._ejectDelay = None self.sourceDir = sourceDir self.mediaType = mediaType self.deviceType = deviceType self.devicePath = devicePath self.deviceScsiId = deviceScsiId self.driveSpeed = driveSpeed self.checkData = checkData self.checkMedia = checkMedia self.warnMidnite = warnMidnite self.noEject = noEject self.blankBehavior = blankBehavior self.refreshMediaDelay = refreshMediaDelay self.ejectDelay = ejectDelay def __repr__(self): """ Official string representation for class instance. """ return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % ( self.sourceDir, self.mediaType, self.deviceType, self.devicePath, self.deviceScsiId, self.driveSpeed, self.checkData, self.warnMidnite, self.noEject, self.checkMedia, self.blankBehavior, self.refreshMediaDelay, self.ejectDelay) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() def __cmp__(self, other): """ Definition of equals operator for this class. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.sourceDir != other.sourceDir: if self.sourceDir < other.sourceDir: return -1 else: return 1 if self.mediaType != other.mediaType: if self.mediaType < other.mediaType: return -1 else: return 1 if self.deviceType != other.deviceType: if self.deviceType < other.deviceType: return -1 else: return 1 if self.devicePath != other.devicePath: if self.devicePath < other.devicePath: return -1 else: return 1 if self.deviceScsiId != other.deviceScsiId: if self.deviceScsiId < other.deviceScsiId: return -1 else: return 1 if self.driveSpeed != other.driveSpeed: if self.driveSpeed < other.driveSpeed: return -1 else: return 1 if self.checkData != other.checkData: if self.checkData < other.checkData: return -1 else: return 1 if self.checkMedia != other.checkMedia: if self.checkMedia < other.checkMedia: return -1 else: return 1 if self.warnMidnite != other.warnMidnite: if self.warnMidnite < other.warnMidnite: return -1 else: return 1 if self.noEject != other.noEject: if self.noEject < other.noEject: return -1 else: return 1 if self.blankBehavior != other.blankBehavior: if self.blankBehavior < other.blankBehavior: return -1 else: return 1 if self.refreshMediaDelay != other.refreshMediaDelay: if self.refreshMediaDelay < other.refreshMediaDelay: return -1 else: return 1 if self.ejectDelay != other.ejectDelay: if self.ejectDelay < other.ejectDelay: return -1 else: return 1 return 0 def _setSourceDir(self, value): """ Property target used to set the source directory. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Source directory must be an absolute path.") self._sourceDir = encodePath(value) def _getSourceDir(self): """ Property target used to get the source directory. 
""" return self._sourceDir def _setMediaType(self, value): """ Property target used to set the media type. The value must be one of L{VALID_MEDIA_TYPES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_MEDIA_TYPES: raise ValueError("Media type must be one of %s." % VALID_MEDIA_TYPES) self._mediaType = value def _getMediaType(self): """ Property target used to get the media type. """ return self._mediaType def _setDeviceType(self, value): """ Property target used to set the device type. The value must be one of L{VALID_DEVICE_TYPES}. @raise ValueError: If the value is not valid. """ if value is not None: if value not in VALID_DEVICE_TYPES: raise ValueError("Device type must be one of %s." % VALID_DEVICE_TYPES) self._deviceType = value def _getDeviceType(self): """ Property target used to get the device type. """ return self._deviceType def _setDevicePath(self, value): """ Property target used to set the device path. The value must be an absolute path if it is not C{None}. It does not have to exist on disk at the time of assignment. @raise ValueError: If the value is not an absolute path. @raise ValueError: If the value cannot be encoded properly. """ if value is not None: if not os.path.isabs(value): raise ValueError("Device path must be an absolute path.") self._devicePath = encodePath(value) def _getDevicePath(self): """ Property target used to get the device path. """ return self._devicePath def _setDeviceScsiId(self, value): """ Property target used to set the SCSI id The SCSI id must be valid per L{validateScsiId}. @raise ValueError: If the value is not valid. """ if value is None: self._deviceScsiId = None else: self._deviceScsiId = validateScsiId(value) def _getDeviceScsiId(self): """ Property target used to get the SCSI id. """ return self._deviceScsiId def _setDriveSpeed(self, value): """ Property target used to set the drive speed. The drive speed must be valid per L{validateDriveSpeed}. 
      @raise ValueError: If the value is not valid.
      """
      self._driveSpeed = validateDriveSpeed(value)

   def _getDriveSpeed(self):
      """
      Property target used to get the drive speed.
      """
      return self._driveSpeed

   def _setCheckData(self, value):
      """
      Property target used to set the check data flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._checkData = True
      else:
         self._checkData = False

   def _getCheckData(self):
      """
      Property target used to get the check data flag.
      """
      return self._checkData

   def _setCheckMedia(self, value):
      """
      Property target used to set the check media flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._checkMedia = True
      else:
         self._checkMedia = False

   def _getCheckMedia(self):
      """
      Property target used to get the check media flag.
      """
      return self._checkMedia

   def _setWarnMidnite(self, value):
      """
      Property target used to set the midnite warning flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._warnMidnite = True
      else:
         self._warnMidnite = False

   def _getWarnMidnite(self):
      """
      Property target used to get the midnite warning flag.
      """
      return self._warnMidnite

   def _setNoEject(self, value):
      """
      Property target used to set the no-eject flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._noEject = True
      else:
         self._noEject = False

   def _getNoEject(self):
      """
      Property target used to get the no-eject flag.
      """
      return self._noEject

   def _setBlankBehavior(self, value):
      """
      Property target used to set blanking behavior configuration.
      If not C{None}, the value must be a C{BlankBehavior} object.
      @raise ValueError: If the value is not a C{BlankBehavior}
      """
      if value is None:
         self._blankBehavior = None
      else:
         if not isinstance(value, BlankBehavior):
            raise ValueError("Value must be a C{BlankBehavior} object.")
         self._blankBehavior = value

   def _getBlankBehavior(self):
      """
      Property target used to get the blanking behavior configuration.
      """
      return self._blankBehavior

   def _setRefreshMediaDelay(self, value):
      """
      Property target used to set the refreshMediaDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._refreshMediaDelay = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._refreshMediaDelay = value

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the action refreshMediaDelay.
      """
      return self._refreshMediaDelay

   def _setEjectDelay(self, value):
      """
      Property target used to set the ejectDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._ejectDelay = None
      else:
         try:
            value = int(value)
         except (TypeError, ValueError):
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._ejectDelay = value

   def _getEjectDelay(self):
      """
      Property target used to get the action ejectDelay.
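   The two delay setters above share one validation and normalization rule:
   accept anything convertible to a non-negative integer, and treat C{0} the
   same as "unset" since no delay is the default.  A standalone sketch of that
   rule as a hypothetical helper (the module itself inlines this logic in each
   setter):

   ```python
   # Hypothetical helper mirroring the refreshMediaDelay/ejectDelay
   # validation: non-negative integer, with 0 normalized to None because
   # "no delay" is the default behavior.
   def normalizeDelay(value, name="delay"):
       if value is None:
           return None
       try:
           value = int(value)
       except (TypeError, ValueError):
           raise ValueError("%s value must be an integer >= 0." % name)
       if value < 0:
           raise ValueError("%s value must be an integer >= 0." % name)
       return value if value > 0 else None  # 0 is normalized to None
   ```

   Normalizing C{0} to C{None} keeps the object representation canonical, so
   equivalent configurations compare equal and round-trip identically through
   XML.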
""" return self._ejectDelay sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.") mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).") deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).") devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.") deviceScsiId = property(_getDeviceScsiId, _setDeviceScsiId, None, "SCSI id for writer device (optional, see notes above).") driveSpeed = property(_getDriveSpeed, _setDriveSpeed, None, "Speed of the drive.") checkData = property(_getCheckData, _setCheckData, None, "Whether resulting image should be validated.") checkMedia = property(_getCheckMedia, _setCheckMedia, None, "Whether media should be checked before being written to.") warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.") noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.") blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.") refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.") ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray") ######################################################################## # PurgeConfig class definition ######################################################################## class PurgeConfig(object): """ Class representing a Cedar Backup purge configuration. The following restrictions exist on data in this class: - The purge directory list must be a list of C{PurgeDir} objects. 
   For the C{purgeDirs} list, validation is accomplished through the
   L{util.ObjectTypeList} list implementation that overrides common list
   methods and transparently ensures that each element is a C{PurgeDir}.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, purgeDirs
   """

   def __init__(self, purgeDirs=None):
      """
      Constructor for the C{PurgeConfig} class.

      @param purgeDirs: List of purge directories.

      @raise ValueError: If one of the values is invalid.
      """
      self._purgeDirs = None
      self.purgeDirs = purgeDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PurgeConfig(%s)" % self.purgeDirs

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.purgeDirs != other.purgeDirs:
         if self.purgeDirs < other.purgeDirs:
            return -1
         else:
            return 1
      return 0

   def _setPurgeDirs(self, value):
      """
      Property target used to set the purge dirs list.
      Either the value must be C{None} or each element must be a C{PurgeDir}.
      @raise ValueError: If the value is not a C{PurgeDir}
      """
      if value is None:
         self._purgeDirs = None
      else:
         try:
            saved = self._purgeDirs
            self._purgeDirs = ObjectTypeList(PurgeDir, "PurgeDir")
            self._purgeDirs.extend(value)
         except Exception, e:
            self._purgeDirs = saved
            raise e

   def _getPurgeDirs(self):
      """
      Property target used to get the purge dirs list.
""" return self._purgeDirs purgeDirs = property(_getPurgeDirs, _setPurgeDirs, None, "List of directories to purge.") ######################################################################## # Config class definition ######################################################################## class Config(object): ###################### # Class documentation ###################### """ Class representing a Cedar Backup XML configuration document. The C{Config} class is a Python object representation of a Cedar Backup XML configuration file. It is intended to be the only Python-language interface to Cedar Backup configuration on disk for both Cedar Backup itself and for external applications. The object representation is two-way: XML data can be used to create a C{Config} object, and then changes to the object can be propogated back to disk. A C{Config} object can even be used to create a configuration file from scratch programmatically. This class and the classes it is composed from often use Python's C{property} construct to validate input and limit access to values. Some validations can only be done once a document is considered "complete" (see module notes for more details). Assignments to the various instance variables must match the expected type, i.e. C{reference} must be a C{ReferenceConfig}. The internal check uses the built-in C{isinstance} function, so it should be OK to use subclasses if you want to. If an instance variable is not set, its value will be C{None}. When an object is initialized without using an XML document, all of the values will be C{None}. Even when an object is initialized using XML, some of the values might be C{None} because not every section is required. @note: Lists within this class are "unordered" for equality comparisons. 
@sort: __init__, __repr__, __str__, __cmp__, extractXml, validate, reference, extensions, options, collect, stage, store, purge, _getReference, _setReference, _getExtensions, _setExtensions, _getOptions, _setOptions, _getPeers, _setPeers, _getCollect, _setCollect, _getStage, _setStage, _getStore, _setStore, _getPurge, _setPurge """ ############## # Constructor ############## def __init__(self, xmlData=None, xmlPath=None, validate=True): """ Initializes a configuration object. If you initialize the object without passing either C{xmlData} or C{xmlPath}, then configuration will be empty and will be invalid until it is filled in properly. No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded. Unless the C{validate} argument is C{False}, the L{Config.validate} method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if C{validate} is C{False}, it might not be possible to parse the passed-in XML document if lower-level validations fail. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to read in invalid configuration from disk. @param xmlData: XML data representing configuration. @type xmlData: String data. @param xmlPath: Path to an XML file on disk. @type xmlPath: Absolute path to a file on disk. @param validate: Validate the document after parsing it. @type validate: Boolean true/false. @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. @raise ValueError: If the parsed configuration document is not valid. 
""" self._reference = None self._extensions = None self._options = None self._peers = None self._collect = None self._stage = None self._store = None self._purge = None self.reference = None self.extensions = None self.options = None self.peers = None self.collect = None self.stage = None self.store = None self.purge = None if xmlData is not None and xmlPath is not None: raise ValueError("Use either xmlData or xmlPath, but not both.") if xmlData is not None: self._parseXmlData(xmlData) if validate: self.validate() elif xmlPath is not None: xmlData = open(xmlPath).read() self._parseXmlData(xmlData) if validate: self.validate() ######################### # String representations ######################### def __repr__(self): """ Official string representation for class instance. """ return "Config(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.reference, self.extensions, self.options, self.peers, self.collect, self.stage, self.store, self.purge) def __str__(self): """ Informal string representation for class instance. """ return self.__repr__() ############################# # Standard comparison method ############################# def __cmp__(self, other): """ Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons. @param other: Other object to compare to. @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
""" if other is None: return 1 if self.reference != other.reference: if self.reference < other.reference: return -1 else: return 1 if self.extensions != other.extensions: if self.extensions < other.extensions: return -1 else: return 1 if self.options != other.options: if self.options < other.options: return -1 else: return 1 if self.peers != other.peers: if self.peers < other.peers: return -1 else: return 1 if self.collect != other.collect: if self.collect < other.collect: return -1 else: return 1 if self.stage != other.stage: if self.stage < other.stage: return -1 else: return 1 if self.store != other.store: if self.store < other.store: return -1 else: return 1 if self.purge != other.purge: if self.purge < other.purge: return -1 else: return 1 return 0 ############# # Properties ############# def _setReference(self, value): """ Property target used to set the reference configuration value. If not C{None}, the value must be a C{ReferenceConfig} object. @raise ValueError: If the value is not a C{ReferenceConfig} """ if value is None: self._reference = None else: if not isinstance(value, ReferenceConfig): raise ValueError("Value must be a C{ReferenceConfig} object.") self._reference = value def _getReference(self): """ Property target used to get the reference configuration value. """ return self._reference def _setExtensions(self, value): """ Property target used to set the extensions configuration value. If not C{None}, the value must be a C{ExtensionsConfig} object. @raise ValueError: If the value is not a C{ExtensionsConfig} """ if value is None: self._extensions = None else: if not isinstance(value, ExtensionsConfig): raise ValueError("Value must be a C{ExtensionsConfig} object.") self._extensions = value def _getExtensions(self): """ Property target used to get the extensions configuration value. """ return self._extensions def _setOptions(self, value): """ Property target used to set the options configuration value. 
      If not C{None}, the value must be an C{OptionsConfig} object.
      @raise ValueError: If the value is not an C{OptionsConfig}
      """
      if value is None:
         self._options = None
      else:
         if not isinstance(value, OptionsConfig):
            raise ValueError("Value must be a C{OptionsConfig} object.")
         self._options = value

   def _getOptions(self):
      """
      Property target used to get the options configuration value.
      """
      return self._options

   def _setPeers(self, value):
      """
      Property target used to set the peers configuration value.
      If not C{None}, the value must be a C{PeersConfig} object.
      @raise ValueError: If the value is not a C{PeersConfig}
      """
      if value is None:
         self._peers = None
      else:
         if not isinstance(value, PeersConfig):
            raise ValueError("Value must be a C{PeersConfig} object.")
         self._peers = value

   def _getPeers(self):
      """
      Property target used to get the peers configuration value.
      """
      return self._peers

   def _setCollect(self, value):
      """
      Property target used to set the collect configuration value.
      If not C{None}, the value must be a C{CollectConfig} object.
      @raise ValueError: If the value is not a C{CollectConfig}
      """
      if value is None:
         self._collect = None
      else:
         if not isinstance(value, CollectConfig):
            raise ValueError("Value must be a C{CollectConfig} object.")
         self._collect = value

   def _getCollect(self):
      """
      Property target used to get the collect configuration value.
      """
      return self._collect

   def _setStage(self, value):
      """
      Property target used to set the stage configuration value.
      If not C{None}, the value must be a C{StageConfig} object.
      @raise ValueError: If the value is not a C{StageConfig}
      """
      if value is None:
         self._stage = None
      else:
         if not isinstance(value, StageConfig):
            raise ValueError("Value must be a C{StageConfig} object.")
         self._stage = value

   def _getStage(self):
      """
      Property target used to get the stage configuration value.
      """
      return self._stage

   def _setStore(self, value):
      """
      Property target used to set the store configuration value.
If not C{None}, the value must be a C{StoreConfig} object. @raise ValueError: If the value is not a C{StoreConfig} """ if value is None: self._store = None else: if not isinstance(value, StoreConfig): raise ValueError("Value must be a C{StoreConfig} object.") self._store = value def _getStore(self): """ Property target used to get the store configuration value. """ return self._store def _setPurge(self, value): """ Property target used to set the purge configuration value. If not C{None}, the value must be a C{PurgeConfig} object. @raise ValueError: If the value is not a C{PurgeConfig} """ if value is None: self._purge = None else: if not isinstance(value, PurgeConfig): raise ValueError("Value must be a C{PurgeConfig} object.") self._purge = value def _getPurge(self): """ Property target used to get the purge configuration value. """ return self._purge reference = property(_getReference, _setReference, None, "Reference configuration in terms of a C{ReferenceConfig} object.") extensions = property(_getExtensions, _setExtensions, None, "Extensions configuration in terms of a C{ExtensionsConfig} object.") options = property(_getOptions, _setOptions, None, "Options configuration in terms of a C{OptionsConfig} object.") peers = property(_getPeers, _setPeers, None, "Peers configuration in terms of a C{PeersConfig} object.") collect = property(_getCollect, _setCollect, None, "Collect configuration in terms of a C{CollectConfig} object.") stage = property(_getStage, _setStage, None, "Stage configuration in terms of a C{StageConfig} object.") store = property(_getStore, _setStore, None, "Store configuration in terms of a C{StoreConfig} object.") purge = property(_getPurge, _setPurge, None, "Purge configuration in terms of a C{PurgeConfig} object.") ################# # Public methods ################# def extractXml(self, xmlPath=None, validate=True): """ Extracts configuration into an XML document. 
If C{xmlPath} is not provided, then the XML document will be returned as a string. If C{xmlPath} is provided, then the XML document will be written to the file and C{None} will be returned. Unless the C{validate} parameter is C{False}, the L{Config.validate} method will be called (with its default arguments) against the configuration before extracting the XML. If configuration is not valid, then an XML document will not be extracted. @note: It is strongly suggested that the C{validate} option always be set to C{True} (the default) unless there is a specific need to write an invalid configuration file to disk. @param xmlPath: Path to an XML file to create on disk. @type xmlPath: Absolute path to a file. @param validate: Validate the document before extracting it. @type validate: Boolean true/false. @return: XML string data or C{None} as described above. @raise ValueError: If configuration within the object is not valid. @raise IOError: If there is an error writing to the file. @raise OSError: If there is an error writing to the file. """ if validate: self.validate() xmlData = self._extractXml() if xmlPath is not None: open(xmlPath, "w").write(xmlData) return None else: return xmlData def validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False): """ Validates configuration represented by the object. This method encapsulates all of the validations that should apply to a fully "complete" document but are not already taken care of by earlier validations. It also provides some extra convenience functionality which might be useful to some people. The process of validation is laid out in the I{Validation} section in the class notes (above). @param requireOneAction: Require at least one of the collect, stage, store or purge sections. @param requireReference: Require the reference section. 
      @param requireExtensions: Require the extensions section.
      @param requireOptions: Require the options section.
      @param requirePeers: Require the peers section.
      @param requireCollect: Require the collect section.
      @param requireStage: Require the stage section.
      @param requireStore: Require the store section.
      @param requirePurge: Require the purge section.

      @raise ValueError: If one of the validations fails.
      """
      if requireOneAction and (self.collect, self.stage, self.store, self.purge) == (None, None, None, None):
         raise ValueError("At least one of the collect, stage, store and purge sections is required.")
      if requireReference and self.reference is None:
         raise ValueError("The reference section is required.")
      if requireExtensions and self.extensions is None:
         raise ValueError("The extensions section is required.")
      if requireOptions and self.options is None:
         raise ValueError("The options section is required.")
      if requirePeers and self.peers is None:
         raise ValueError("The peers section is required.")
      if requireCollect and self.collect is None:
         raise ValueError("The collect section is required.")
      if requireStage and self.stage is None:
         raise ValueError("The stage section is required.")
      if requireStore and self.store is None:
         raise ValueError("The store section is required.")
      if requirePurge and self.purge is None:
         raise ValueError("The purge section is required.")
      self._validateContents()


   #####################################
   # High-level methods for parsing XML
   #####################################

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls individual static methods to parse each of the individual
      configuration sections.

      Most of the validation we do here has to do with whether the document
      can be parsed and whether any values which exist are valid.
We don't do much validation as to whether required elements actually exist unless we have to to make sense of the document (instead, that's the job of the L{validate} method). @param xmlData: XML data to be parsed @type xmlData: String data @raise ValueError: If the XML cannot be successfully parsed. """ (xmlDom, parentNode) = createInputDom(xmlData) self._reference = Config._parseReference(parentNode) self._extensions = Config._parseExtensions(parentNode) self._options = Config._parseOptions(parentNode) self._peers = Config._parsePeers(parentNode) self._collect = Config._parseCollect(parentNode) self._stage = Config._parseStage(parentNode) self._store = Config._parseStore(parentNode) self._purge = Config._parsePurge(parentNode) @staticmethod def _parseReference(parentNode): """ Parses a reference configuration section. We read the following fields:: author //cb_config/reference/author revision //cb_config/reference/revision description //cb_config/reference/description generator //cb_config/reference/generator @param parentNode: Parent node to search beneath. @return: C{ReferenceConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ reference = None sectionNode = readFirstChild(parentNode, "reference") if sectionNode is not None: reference = ReferenceConfig() reference.author = readString(sectionNode, "author") reference.revision = readString(sectionNode, "revision") reference.description = readString(sectionNode, "description") reference.generator = readString(sectionNode, "generator") return reference @staticmethod def _parseExtensions(parentNode): """ Parses an extensions configuration section. 
      We read the following fields::

         orderMode      //cb_config/extensions/order_mode

      We also read groups of the following items, one list element per item::

         name           //cb_config/extensions/action/name
         module         //cb_config/extensions/action/module
         function       //cb_config/extensions/action/function
         index          //cb_config/extensions/action/index
         dependencies   //cb_config/extensions/action/depends

      The extended actions are parsed by L{_parseExtendedActions}.

      @param parentNode: Parent node to search beneath.

      @return: C{ExtensionsConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      extensions = None
      sectionNode = readFirstChild(parentNode, "extensions")
      if sectionNode is not None:
         extensions = ExtensionsConfig()
         extensions.orderMode = readString(sectionNode, "order_mode")
         extensions.actions = Config._parseExtendedActions(sectionNode)
      return extensions

   @staticmethod
   def _parseOptions(parentNode):
      """
      Parses an options configuration section.

      We read the following fields::

         startingDay    //cb_config/options/starting_day
         workingDir     //cb_config/options/working_dir
         backupUser     //cb_config/options/backup_user
         backupGroup    //cb_config/options/backup_group
         rcpCommand     //cb_config/options/rcp_command
         rshCommand     //cb_config/options/rsh_command
         cbackCommand   //cb_config/options/cback_command
         managedActions //cb_config/options/managed_actions

      The list of managed actions is a comma-separated list of action names.

      We also read groups of the following items, one list element per item::

         overrides      //cb_config/options/override
         hooks          //cb_config/options/hook

      The overrides are parsed by L{_parseOverrides} and the hooks are parsed
      by L{_parseHooks}.

      @param parentNode: Parent node to search beneath.

      @return: C{OptionsConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
""" options = None sectionNode = readFirstChild(parentNode, "options") if sectionNode is not None: options = OptionsConfig() options.startingDay = readString(sectionNode, "starting_day") options.workingDir = readString(sectionNode, "working_dir") options.backupUser = readString(sectionNode, "backup_user") options.backupGroup = readString(sectionNode, "backup_group") options.rcpCommand = readString(sectionNode, "rcp_command") options.rshCommand = readString(sectionNode, "rsh_command") options.cbackCommand = readString(sectionNode, "cback_command") options.overrides = Config._parseOverrides(sectionNode) options.hooks = Config._parseHooks(sectionNode) managedActions = readString(sectionNode, "managed_actions") options.managedActions = parseCommaSeparatedString(managedActions) return options @staticmethod def _parsePeers(parentNode): """ Parses a peers configuration section. We read groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual peer entries are parsed by L{_parsePeerList}. @param parentNode: Parent node to search beneath. @return: C{StageConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ peers = None sectionNode = readFirstChild(parentNode, "peers") if sectionNode is not None: peers = PeersConfig() (peers.localPeers, peers.remotePeers) = Config._parsePeerList(sectionNode) return peers @staticmethod def _parseCollect(parentNode): """ Parses a collect configuration section. 
We read the following individual fields:: targetDir //cb_config/collect/collect_dir collectMode //cb_config/collect/collect_mode archiveMode //cb_config/collect/archive_mode ignoreFile //cb_config/collect/ignore_file We also read groups of the following items, one list element per item:: absoluteExcludePaths //cb_config/collect/exclude/abs_path excludePatterns //cb_config/collect/exclude/pattern collectFiles //cb_config/collect/file collectDirs //cb_config/collect/dir The exclusions are parsed by L{_parseExclusions}, the collect files are parsed by L{_parseCollectFiles}, and the directories are parsed by L{_parseCollectDirs}. @param parentNode: Parent node to search beneath. @return: C{CollectConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ collect = None sectionNode = readFirstChild(parentNode, "collect") if sectionNode is not None: collect = CollectConfig() collect.targetDir = readString(sectionNode, "collect_dir") collect.collectMode = readString(sectionNode, "collect_mode") collect.archiveMode = readString(sectionNode, "archive_mode") collect.ignoreFile = readString(sectionNode, "ignore_file") (collect.absoluteExcludePaths, unused, collect.excludePatterns) = Config._parseExclusions(sectionNode) collect.collectFiles = Config._parseCollectFiles(sectionNode) collect.collectDirs = Config._parseCollectDirs(sectionNode) return collect @staticmethod def _parseStage(parentNode): """ Parses a stage configuration section. We read the following individual fields:: targetDir //cb_config/stage/staging_dir We also read groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual peer entries are parsed by L{_parsePeerList}. @param parentNode: Parent node to search beneath. @return: C{StageConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" stage = None sectionNode = readFirstChild(parentNode, "stage") if sectionNode is not None: stage = StageConfig() stage.targetDir = readString(sectionNode, "staging_dir") (stage.localPeers, stage.remotePeers) = Config._parsePeerList(sectionNode) return stage @staticmethod def _parseStore(parentNode): """ Parses a store configuration section. We read the following fields:: sourceDir //cb_config/store/source_dir mediaType //cb_config/store/media_type deviceType //cb_config/store/device_type devicePath //cb_config/store/target_device deviceScsiId //cb_config/store/target_scsi_id driveSpeed //cb_config/store/drive_speed checkData //cb_config/store/check_data checkMedia //cb_config/store/check_media warnMidnite //cb_config/store/warn_midnite noEject //cb_config/store/no_eject Blanking behavior configuration is parsed by the C{_parseBlankBehavior} method. @param parentNode: Parent node to search beneath. @return: C{StoreConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. 
""" store = None sectionNode = readFirstChild(parentNode, "store") if sectionNode is not None: store = StoreConfig() store.sourceDir = readString(sectionNode, "source_dir") store.mediaType = readString(sectionNode, "media_type") store.deviceType = readString(sectionNode, "device_type") store.devicePath = readString(sectionNode, "target_device") store.deviceScsiId = readString(sectionNode, "target_scsi_id") store.driveSpeed = readInteger(sectionNode, "drive_speed") store.checkData = readBoolean(sectionNode, "check_data") store.checkMedia = readBoolean(sectionNode, "check_media") store.warnMidnite = readBoolean(sectionNode, "warn_midnite") store.noEject = readBoolean(sectionNode, "no_eject") store.blankBehavior = Config._parseBlankBehavior(sectionNode) store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay") store.ejectDelay = readInteger(sectionNode, "eject_delay") return store @staticmethod def _parsePurge(parentNode): """ Parses a purge configuration section. We read groups of the following items, one list element per item:: purgeDirs //cb_config/purge/dir The individual directory entries are parsed by L{_parsePurgeDirs}. @param parentNode: Parent node to search beneath. @return: C{PurgeConfig} object or C{None} if the section does not exist. @raise ValueError: If some filled-in value is invalid. """ purge = None sectionNode = readFirstChild(parentNode, "purge") if sectionNode is not None: purge = PurgeConfig() purge.purgeDirs = Config._parsePurgeDirs(sectionNode) return purge @staticmethod def _parseExtendedActions(parentNode): """ Reads extended actions data from immediately beneath the parent. We read the following individual fields from each extended action:: name name module module function function index index dependencies depends Dependency information is parsed by the C{_parseDependencies} method. @param parentNode: Parent node to search beneath. @return: List of extended actions. 
@raise ValueError: If the data at the location can't be read """ lst = [] for entry in readChildren(parentNode, "action"): if isElement(entry): action = ExtendedAction() action.name = readString(entry, "name") action.module = readString(entry, "module") action.function = readString(entry, "function") action.index = readInteger(entry, "index") action.dependencies = Config._parseDependencies(entry) lst.append(action) if lst == []: lst = None return lst @staticmethod def _parseExclusions(parentNode): """ Reads exclusions data from immediately beneath the parent. We read groups of the following items, one list element per item:: absolute exclude/abs_path relative exclude/rel_path patterns exclude/pattern If there are none of some pattern (i.e. no relative path items) then C{None} will be returned for that item in the tuple. This method can be used to parse exclusions on both the collect configuration level and on the collect directory level within collect configuration. @param parentNode: Parent node to search beneath. @return: Tuple of (absolute, relative, patterns) exclusions. """ sectionNode = readFirstChild(parentNode, "exclude") if sectionNode is None: return (None, None, None) else: absolute = readStringList(sectionNode, "abs_path") relative = readStringList(sectionNode, "rel_path") patterns = readStringList(sectionNode, "pattern") return (absolute, relative, patterns) @staticmethod def _parseOverrides(parentNode): """ Reads a list of C{CommandOverride} objects from immediately beneath the parent. We read the following individual fields:: command command absolutePath abs_path @param parentNode: Parent node to search beneath. @return: List of C{CommandOverride} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
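This method and its siblings share one convention: accumulate one object per matching child element, and return C{None} rather than an empty list when nothing matched. The sketch below illustrates that convention with plain minidom; C{parse_overrides} is a stand-in, not the real C{readChildren}/C{CommandOverride} machinery.

```python
# Sketch of the "accumulate a list, return None if empty" parsing
# convention, using plain minidom instead of this module's helpers.
from xml.dom.minidom import parseString

def parse_overrides(parent):
    lst = []
    for entry in parent.getElementsByTagName("override"):
        # Read the <command> text from each <override> element.
        command = entry.getElementsByTagName("command")[0].firstChild.data
        lst.append(command)
    if lst == []:
        lst = None  # empty result is reported as None, not []
    return lst

dom = parseString("<options><override><command>cdrecord</command></override></options>")
found = parse_overrides(dom.documentElement)
empty = parse_overrides(parseString("<options/>").documentElement)
```

Returning C{None} instead of C{[]} keeps "section absent" and "section present but empty" distinguishable for the caller.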
""" lst = [] for entry in readChildren(parentNode, "override"): if isElement(entry): override = CommandOverride() override.command = readString(entry, "command") override.absolutePath = readString(entry, "abs_path") lst.append(override) if lst == []: lst = None return lst @staticmethod # pylint: disable=R0204 def _parseHooks(parentNode): """ Reads a list of C{ActionHook} objects from immediately beneath the parent. We read the following individual fields:: action action command command @param parentNode: Parent node to search beneath. @return: List of C{ActionHook} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. """ lst = [] for entry in readChildren(parentNode, "pre_action_hook"): if isElement(entry): hook = PreActionHook() hook.action = readString(entry, "action") hook.command = readString(entry, "command") lst.append(hook) for entry in readChildren(parentNode, "post_action_hook"): if isElement(entry): hook = PostActionHook() hook.action = readString(entry, "action") hook.command = readString(entry, "command") lst.append(hook) if lst == []: lst = None return lst @staticmethod def _parseCollectFiles(parentNode): """ Reads a list of C{CollectFile} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode mode I{or} collect_mode archiveMode archive_mode The collect mode is a special case. Just a C{mode} tag is accepted, but we prefer C{collect_mode} for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only C{mode} will be used. @param parentNode: Parent node to search beneath. @return: List of C{CollectFile} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "file"): if isElement(entry): cfile = CollectFile() cfile.absolutePath = readString(entry, "abs_path") cfile.collectMode = readString(entry, "mode") if cfile.collectMode is None: cfile.collectMode = readString(entry, "collect_mode") cfile.archiveMode = readString(entry, "archive_mode") lst.append(cfile) if lst == []: lst = None return lst @staticmethod def _parseCollectDirs(parentNode): """ Reads a list of C{CollectDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath abs_path collectMode mode I{or} collect_mode archiveMode archive_mode ignoreFile ignore_file linkDepth link_depth dereference dereference recursionLevel recursion_level The collect mode is a special case. Just a C{mode} tag is accepted for backwards compatibility, but we prefer C{collect_mode} for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only C{mode} will be used. We also read groups of the following items, one list element per item:: absoluteExcludePaths exclude/abs_path relativeExcludePaths exclude/rel_path excludePatterns exclude/pattern The exclusions are parsed by L{_parseExclusions}. @param parentNode: Parent node to search beneath. @return: List of C{CollectDir} objects or C{None} if none are found. @raise ValueError: If some filled-in value is invalid. 
""" lst = [] for entry in readChildren(parentNode, "dir"): if isElement(entry): cdir = CollectDir() cdir.absolutePath = readString(entry, "abs_path") cdir.collectMode = readString(entry, "mode") if cdir.collectMode is None: cdir.collectMode = readString(entry, "collect_mode") cdir.archiveMode = readString(entry, "archive_mode") cdir.ignoreFile = readString(entry, "ignore_file") cdir.linkDepth = readInteger(entry, "link_depth") cdir.dereference = readBoolean(entry, "dereference") cdir.recursionLevel = readInteger(entry, "recursion_level") (cdir.absoluteExcludePaths, cdir.relativeExcludePaths, cdir.excludePatterns) = Config._parseExclusions(entry) lst.append(cdir) if lst == []: lst = None return lst @staticmethod def _parsePurgeDirs(parentNode): """ Reads a list of C{PurgeDir} objects from immediately beneath the parent. We read the following individual fields:: absolutePath /abs_path retainDays /retain_days @param parentNode: Parent node to search beneath. @return: List of C{PurgeDir} objects or C{None} if none are found. @raise ValueError: If the data at the location can't be read """ lst = [] for entry in readChildren(parentNode, "dir"): if isElement(entry): cdir = PurgeDir() cdir.absolutePath = readString(entry, "abs_path") cdir.retainDays = readInteger(entry, "retain_days") lst.append(cdir) if lst == []: lst = None return lst @staticmethod def _parsePeerList(parentNode): """ Reads remote and local peer data from immediately beneath the parent. We read the following individual fields for both remote and local peers:: name name collectDir collect_dir We also read the following individual fields for remote peers only:: remoteUser backup_user rcpCommand rcp_command rshCommand rsh_command cbackCommand cback_command managed managed managedActions managed_actions Additionally, the value in the C{type} field is used to determine whether this entry is a remote peer. If the type is C{"remote"}, it's a remote peer, and if the type is C{"local"}, it's a remote peer. 
If there are none of one type of peer (i.e. no local peers) then C{None} will be returned for that item in the tuple. @param parentNode: Parent node to search beneath. @return: Tuple of (local, remote) peer lists. @raise ValueError: If the data at the location can't be read """ localPeers = [] remotePeers = [] for entry in readChildren(parentNode, "peer"): if isElement(entry): peerType = readString(entry, "type") if peerType == "local": localPeer = LocalPeer() localPeer.name = readString(entry, "name") localPeer.collectDir = readString(entry, "collect_dir") localPeer.ignoreFailureMode = readString(entry, "ignore_failures") localPeers.append(localPeer) elif peerType == "remote": remotePeer = RemotePeer() remotePeer.name = readString(entry, "name") remotePeer.collectDir = readString(entry, "collect_dir") remotePeer.remoteUser = readString(entry, "backup_user") remotePeer.rcpCommand = readString(entry, "rcp_command") remotePeer.rshCommand = readString(entry, "rsh_command") remotePeer.cbackCommand = readString(entry, "cback_command") remotePeer.ignoreFailureMode = readString(entry, "ignore_failures") remotePeer.managed = readBoolean(entry, "managed") managedActions = readString(entry, "managed_actions") remotePeer.managedActions = parseCommaSeparatedString(managedActions) remotePeers.append(remotePeer) if localPeers == []: localPeers = None if remotePeers == []: remotePeers = None return (localPeers, remotePeers) @staticmethod def _parseDependencies(parentNode): """ Reads extended action dependency information from a parent node. We read the following individual fields:: runBefore depends/run_before runAfter depends/run_after Each of these fields is a comma-separated list of action names. The result is placed into an C{ActionDependencies} object. If the dependencies parent node does not exist, C{None} will be returned. Otherwise, an C{ActionDependencies} object will always be created, even if it does not contain any actual dependencies in it. 
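The comma-separated convention used for C{run_before}/C{run_after} can be sketched as below. C{parse_csv} is a hypothetical stand-in for C{parseCommaSeparatedString}, under the assumption that empty input maps to C{None}:

```python
# Sketch of parsing a comma-separated list of action names.
# parse_csv is a hypothetical stand-in, not this module's helper.
import re

def parse_csv(value):
    """Split a comma/whitespace separated string into names, or None."""
    if value is None:
        return None
    names = [field for field in re.split(r"[\s,]+", value.strip()) if field != ""]
    return names if names else None
```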
      @param parentNode: Parent node to search beneath.

      @return: C{ActionDependencies} object or C{None}.
      @raise ValueError: If the data at the location can't be read
      """
      sectionNode = readFirstChild(parentNode, "depends")
      if sectionNode is None:
         return None
      else:
         runBefore = readString(sectionNode, "run_before")
         runAfter = readString(sectionNode, "run_after")
         beforeList = parseCommaSeparatedString(runBefore)
         afterList = parseCommaSeparatedString(runAfter)
         return ActionDependencies(beforeList, afterList)

   @staticmethod
   def _parseBlankBehavior(parentNode):
      """
      Reads a single C{BlankBehavior} object from immediately beneath the parent.

      We read the following individual fields::

         blankMode     blank_behavior/mode
         blankFactor   blank_behavior/factor

      @param parentNode: Parent node to search beneath.

      @return: C{BlankBehavior} object or C{None} if the section is not found.
      @raise ValueError: If some filled-in value is invalid.
      """
      blankBehavior = None
      sectionNode = readFirstChild(parentNode, "blank_behavior")
      if sectionNode is not None:
         blankBehavior = BlankBehavior()
         blankBehavior.blankMode = readString(sectionNode, "mode")
         blankBehavior.blankFactor = readString(sectionNode, "factor")
      return blankBehavior


   ########################################
   # High-level methods for generating XML
   ########################################

   def _extractXml(self):
      """
      Internal method to extract configuration into an XML string.

      This method assumes that the internal L{validate} method has been called
      prior to extracting the XML, if the caller cares.  No validation will be
      done internally.

      As a general rule, fields that are set to C{None} will be extracted into
      the document as empty tags.  The same goes for container tags that are
      filled based on lists - if the list is empty or C{None}, the container
      tag will be empty.
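The empty-tag convention for C{None} fields can be sketched with plain minidom. Here C{add_string_node} only approximates C{addStringNode}; it is not the real helper:

```python
# Sketch of the output side: build a fresh DOM, append a container and
# two string nodes (one None), then serialize.  Plain minidom only.
from xml.dom.minidom import getDOMImplementation

def add_string_node(doc, parent, name, value):
    """Append <name>value</name>; a None value becomes an empty tag."""
    node = doc.createElement(name)
    if value is not None:
        node.appendChild(doc.createTextNode(value))
    parent.appendChild(node)

doc = getDOMImplementation().createDocument(None, "cb_config", None)
section = doc.createElement("reference")
doc.documentElement.appendChild(section)
add_string_node(doc, section, "author", "pronovic")
add_string_node(doc, section, "revision", None)  # -> empty tag
xml_data = doc.documentElement.toxml()
```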
      """
      (xmlDom, parentNode) = createOutputDom()
      Config._addReference(xmlDom, parentNode, self.reference)
      Config._addExtensions(xmlDom, parentNode, self.extensions)
      Config._addOptions(xmlDom, parentNode, self.options)
      Config._addPeers(xmlDom, parentNode, self.peers)
      Config._addCollect(xmlDom, parentNode, self.collect)
      Config._addStage(xmlDom, parentNode, self.stage)
      Config._addStore(xmlDom, parentNode, self.store)
      Config._addPurge(xmlDom, parentNode, self.purge)
      xmlData = serializeDom(xmlDom)
      xmlDom.unlink()
      return xmlData

   @staticmethod
   def _addReference(xmlDom, parentNode, referenceConfig):
      """
      Adds a configuration section as the next child of a parent.

      We add the following fields to the document::

         author         //cb_config/reference/author
         revision       //cb_config/reference/revision
         description    //cb_config/reference/description
         generator      //cb_config/reference/generator

      If C{referenceConfig} is C{None}, then no container will be added.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param referenceConfig: Reference configuration section to be added to the document.
      """
      if referenceConfig is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "reference")
         addStringNode(xmlDom, sectionNode, "author", referenceConfig.author)
         addStringNode(xmlDom, sectionNode, "revision", referenceConfig.revision)
         addStringNode(xmlDom, sectionNode, "description", referenceConfig.description)
         addStringNode(xmlDom, sectionNode, "generator", referenceConfig.generator)

   @staticmethod
   def _addExtensions(xmlDom, parentNode, extensionsConfig):
      """
      Adds a configuration section as the next child of a parent.

      We add the following fields to the document::

         order_mode     //cb_config/extensions/order_mode

      We also add groups of the following items, one list element per item::

         actions        //cb_config/extensions/action

      The extended action entries are added by L{_addExtendedAction}.
If C{extensionsConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param extensionsConfig: Extensions configuration section to be added to the document. """ if extensionsConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "extensions") addStringNode(xmlDom, sectionNode, "order_mode", extensionsConfig.orderMode) if extensionsConfig.actions is not None: for action in extensionsConfig.actions: Config._addExtendedAction(xmlDom, sectionNode, action) @staticmethod def _addOptions(xmlDom, parentNode, optionsConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: startingDay //cb_config/options/starting_day workingDir //cb_config/options/working_dir backupUser //cb_config/options/backup_user backupGroup //cb_config/options/backup_group rcpCommand //cb_config/options/rcp_command rshCommand //cb_config/options/rsh_command cbackCommand //cb_config/options/cback_command managedActions //cb_config/options/managed_actions We also add groups of the following items, one list element per item:: overrides //cb_config/options/override hooks //cb_config/options/pre_action_hook hooks //cb_config/options/post_action_hook The individual override items are added by L{_addOverride}. The individual hook items are added by L{_addHook}. If C{optionsConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param optionsConfig: Options configuration section to be added to the document. 
""" if optionsConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "options") addStringNode(xmlDom, sectionNode, "starting_day", optionsConfig.startingDay) addStringNode(xmlDom, sectionNode, "working_dir", optionsConfig.workingDir) addStringNode(xmlDom, sectionNode, "backup_user", optionsConfig.backupUser) addStringNode(xmlDom, sectionNode, "backup_group", optionsConfig.backupGroup) addStringNode(xmlDom, sectionNode, "rcp_command", optionsConfig.rcpCommand) addStringNode(xmlDom, sectionNode, "rsh_command", optionsConfig.rshCommand) addStringNode(xmlDom, sectionNode, "cback_command", optionsConfig.cbackCommand) managedActions = Config._buildCommaSeparatedString(optionsConfig.managedActions) addStringNode(xmlDom, sectionNode, "managed_actions", managedActions) if optionsConfig.overrides is not None: for override in optionsConfig.overrides: Config._addOverride(xmlDom, sectionNode, override) if optionsConfig.hooks is not None: for hook in optionsConfig.hooks: Config._addHook(xmlDom, sectionNode, hook) @staticmethod def _addPeers(xmlDom, parentNode, peersConfig): """ Adds a configuration section as the next child of a parent. We add groups of the following items, one list element per item:: localPeers //cb_config/peers/peer remotePeers //cb_config/peers/peer The individual local and remote peer entries are added by L{_addLocalPeer} and L{_addRemotePeer}, respectively. If C{peersConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param peersConfig: Peers configuration section to be added to the document. 
""" if peersConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "peers") if peersConfig.localPeers is not None: for localPeer in peersConfig.localPeers: Config._addLocalPeer(xmlDom, sectionNode, localPeer) if peersConfig.remotePeers is not None: for remotePeer in peersConfig.remotePeers: Config._addRemotePeer(xmlDom, sectionNode, remotePeer) @staticmethod def _addCollect(xmlDom, parentNode, collectConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: targetDir //cb_config/collect/collect_dir collectMode //cb_config/collect/collect_mode archiveMode //cb_config/collect/archive_mode ignoreFile //cb_config/collect/ignore_file We also add groups of the following items, one list element per item:: absoluteExcludePaths //cb_config/collect/exclude/abs_path excludePatterns //cb_config/collect/exclude/pattern collectFiles //cb_config/collect/file collectDirs //cb_config/collect/dir The individual collect files are added by L{_addCollectFile} and individual collect directories are added by L{_addCollectDir}. If C{collectConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectConfig: Collect configuration section to be added to the document. 
""" if collectConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "collect") addStringNode(xmlDom, sectionNode, "collect_dir", collectConfig.targetDir) addStringNode(xmlDom, sectionNode, "collect_mode", collectConfig.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectConfig.archiveMode) addStringNode(xmlDom, sectionNode, "ignore_file", collectConfig.ignoreFile) if ((collectConfig.absoluteExcludePaths is not None and collectConfig.absoluteExcludePaths != []) or (collectConfig.excludePatterns is not None and collectConfig.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if collectConfig.absoluteExcludePaths is not None: for absolutePath in collectConfig.absoluteExcludePaths: addStringNode(xmlDom, excludeNode, "abs_path", absolutePath) if collectConfig.excludePatterns is not None: for pattern in collectConfig.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) if collectConfig.collectFiles is not None: for collectFile in collectConfig.collectFiles: Config._addCollectFile(xmlDom, sectionNode, collectFile) if collectConfig.collectDirs is not None: for collectDir in collectConfig.collectDirs: Config._addCollectDir(xmlDom, sectionNode, collectDir) @staticmethod def _addStage(xmlDom, parentNode, stageConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: targetDir //cb_config/stage/staging_dir We also add groups of the following items, one list element per item:: localPeers //cb_config/stage/peer remotePeers //cb_config/stage/peer The individual local and remote peer entries are added by L{_addLocalPeer} and L{_addRemotePeer}, respectively. If C{stageConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param stageConfig: Stage configuration section to be added to the document. 
""" if stageConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "stage") addStringNode(xmlDom, sectionNode, "staging_dir", stageConfig.targetDir) if stageConfig.localPeers is not None: for localPeer in stageConfig.localPeers: Config._addLocalPeer(xmlDom, sectionNode, localPeer) if stageConfig.remotePeers is not None: for remotePeer in stageConfig.remotePeers: Config._addRemotePeer(xmlDom, sectionNode, remotePeer) @staticmethod def _addStore(xmlDom, parentNode, storeConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: sourceDir //cb_config/store/source_dir mediaType //cb_config/store/media_type deviceType //cb_config/store/device_type devicePath //cb_config/store/target_device deviceScsiId //cb_config/store/target_scsi_id driveSpeed //cb_config/store/drive_speed checkData //cb_config/store/check_data checkMedia //cb_config/store/check_media warnMidnite //cb_config/store/warn_midnite noEject //cb_config/store/no_eject refreshMediaDelay //cb_config/store/refresh_media_delay ejectDelay //cb_config/store/eject_delay Blanking behavior configuration is added by the L{_addBlankBehavior} method. If C{storeConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param storeConfig: Store configuration section to be added to the document. 
""" if storeConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "store") addStringNode(xmlDom, sectionNode, "source_dir", storeConfig.sourceDir) addStringNode(xmlDom, sectionNode, "media_type", storeConfig.mediaType) addStringNode(xmlDom, sectionNode, "device_type", storeConfig.deviceType) addStringNode(xmlDom, sectionNode, "target_device", storeConfig.devicePath) addStringNode(xmlDom, sectionNode, "target_scsi_id", storeConfig.deviceScsiId) addIntegerNode(xmlDom, sectionNode, "drive_speed", storeConfig.driveSpeed) addBooleanNode(xmlDom, sectionNode, "check_data", storeConfig.checkData) addBooleanNode(xmlDom, sectionNode, "check_media", storeConfig.checkMedia) addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite) addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject) addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay) addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay) Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior) @staticmethod def _addPurge(xmlDom, parentNode, purgeConfig): """ Adds a configuration section as the next child of a parent. We add the following fields to the document:: purgeDirs //cb_config/purge/dir The individual directory entries are added by L{_addPurgeDir}. If C{purgeConfig} is C{None}, then no container will be added. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param purgeConfig: Purge configuration section to be added to the document. """ if purgeConfig is not None: sectionNode = addContainerNode(xmlDom, parentNode, "purge") if purgeConfig.purgeDirs is not None: for purgeDir in purgeConfig.purgeDirs: Config._addPurgeDir(xmlDom, sectionNode, purgeDir) @staticmethod def _addExtendedAction(xmlDom, parentNode, action): """ Adds an extended action container as the next child of a parent. 
      We add the following fields to the document::

         name           action/name
         module         action/module
         function       action/function
         index          action/index
         dependencies   action/depends

      Dependencies are added by the L{_addDependencies} method.

      The node itself is created as the next child of the parent node.  This
      method only adds one action node.  The parent must loop for each action
      in the C{ExtensionsConfig} object.

      If C{action} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param action: Extended action to be added to the document.
      """
      if action is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "action")
         addStringNode(xmlDom, sectionNode, "name", action.name)
         addStringNode(xmlDom, sectionNode, "module", action.module)
         addStringNode(xmlDom, sectionNode, "function", action.function)
         addIntegerNode(xmlDom, sectionNode, "index", action.index)
         Config._addDependencies(xmlDom, sectionNode, action.dependencies)

   @staticmethod
   def _addOverride(xmlDom, parentNode, override):
      """
      Adds a command override container as the next child of a parent.

      We add the following fields to the document::

         command        override/command
         absolutePath   override/abs_path

      The node itself is created as the next child of the parent node.  This
      method only adds one override node.  The parent must loop for each
      override in the C{OptionsConfig} object.

      If C{override} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param override: Command override to be added to the document.
      """
      if override is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "override")
         addStringNode(xmlDom, sectionNode, "command", override.command)
         addStringNode(xmlDom, sectionNode, "abs_path", override.absolutePath)

   @staticmethod
   def _addHook(xmlDom, parentNode, hook):
      """
      Adds an action hook container as the next child of a parent.

      The behavior varies depending on the value of the C{before} and C{after}
      flags on the hook.  If the C{before} flag is set, it's a pre-action hook,
      and we'll add the following fields::

         action         pre_action_hook/action
         command        pre_action_hook/command

      If the C{after} flag is set, it's a post-action hook, and we'll add the
      following fields::

         action         post_action_hook/action
         command        post_action_hook/command

      The node itself is created as the next child of the parent node.  This
      method only adds one hook node.  The parent must loop for each hook in
      the C{OptionsConfig} object.

      If C{hook} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param hook: Command hook to be added to the document.
      """
      if hook is not None:
         if hook.before:
            sectionNode = addContainerNode(xmlDom, parentNode, "pre_action_hook")
         else:
            sectionNode = addContainerNode(xmlDom, parentNode, "post_action_hook")
         addStringNode(xmlDom, sectionNode, "action", hook.action)
         addStringNode(xmlDom, sectionNode, "command", hook.command)

   @staticmethod
   def _addCollectFile(xmlDom, parentNode, collectFile):
      """
      Adds a collect file container as the next child of a parent.

      We add the following fields to the document::

         absolutePath   file/abs_path
         collectMode    file/collect_mode
         archiveMode    file/archive_mode

      Note that for consistency with collect directory handling we'll only
      emit the preferred C{collect_mode} tag.

      The node itself is created as the next child of the parent node.  This
      method only adds one collect file node.
The parent must loop for each collect file in the C{CollectConfig} object. If C{collectFile} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectFile: Collect file to be added to the document. """ if collectFile is not None: sectionNode = addContainerNode(xmlDom, parentNode, "file") addStringNode(xmlDom, sectionNode, "abs_path", collectFile.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", collectFile.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectFile.archiveMode) @staticmethod def _addCollectDir(xmlDom, parentNode, collectDir): """ Adds a collect directory container as the next child of a parent. We add the following fields to the document:: absolutePath dir/abs_path collectMode dir/collect_mode archiveMode dir/archive_mode ignoreFile dir/ignore_file linkDepth dir/link_depth dereference dir/dereference recursionLevel dir/recursion_level Note that an original XML document might have listed the collect mode using the C{mode} tag, since we accept both C{collect_mode} and C{mode}. However, here we'll only emit the preferred C{collect_mode} tag. We also add groups of the following items, one list element per item:: absoluteExcludePaths dir/exclude/abs_path relativeExcludePaths dir/exclude/rel_path excludePatterns dir/exclude/pattern The node itself is created as the next child of the parent node. This method only adds one collect directory node. The parent must loop for each collect directory in the C{CollectConfig} object. If C{collectDir} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param collectDir: Collect directory to be added to the document. 
""" if collectDir is not None: sectionNode = addContainerNode(xmlDom, parentNode, "dir") addStringNode(xmlDom, sectionNode, "abs_path", collectDir.absolutePath) addStringNode(xmlDom, sectionNode, "collect_mode", collectDir.collectMode) addStringNode(xmlDom, sectionNode, "archive_mode", collectDir.archiveMode) addStringNode(xmlDom, sectionNode, "ignore_file", collectDir.ignoreFile) addIntegerNode(xmlDom, sectionNode, "link_depth", collectDir.linkDepth) addBooleanNode(xmlDom, sectionNode, "dereference", collectDir.dereference) addIntegerNode(xmlDom, sectionNode, "recursion_level", collectDir.recursionLevel) if ((collectDir.absoluteExcludePaths is not None and collectDir.absoluteExcludePaths != []) or (collectDir.relativeExcludePaths is not None and collectDir.relativeExcludePaths != []) or (collectDir.excludePatterns is not None and collectDir.excludePatterns != [])): excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") if collectDir.absoluteExcludePaths is not None: for absolutePath in collectDir.absoluteExcludePaths: addStringNode(xmlDom, excludeNode, "abs_path", absolutePath) if collectDir.relativeExcludePaths is not None: for relativePath in collectDir.relativeExcludePaths: addStringNode(xmlDom, excludeNode, "rel_path", relativePath) if collectDir.excludePatterns is not None: for pattern in collectDir.excludePatterns: addStringNode(xmlDom, excludeNode, "pattern", pattern) @staticmethod def _addLocalPeer(xmlDom, parentNode, localPeer): """ Adds a local peer container as the next child of a parent. We add the following fields to the document:: name peer/name collectDir peer/collect_dir ignoreFailureMode peer/ignore_failures Additionally, C{peer/type} is filled in with C{"local"}, since this is a local peer. The node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the C{StageConfig} object. If C{localPeer} is C{None}, this method call will be a no-op. 
      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param localPeer: Local peer to be added to the document.
      """
      if localPeer is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "peer")
         addStringNode(xmlDom, sectionNode, "name", localPeer.name)
         addStringNode(xmlDom, sectionNode, "type", "local")
         addStringNode(xmlDom, sectionNode, "collect_dir", localPeer.collectDir)
         addStringNode(xmlDom, sectionNode, "ignore_failures", localPeer.ignoreFailureMode)

   @staticmethod
   def _addRemotePeer(xmlDom, parentNode, remotePeer):
      """
      Adds a remote peer container as the next child of a parent.

      We add the following fields to the document::

         name               peer/name
         collectDir         peer/collect_dir
         remoteUser         peer/backup_user
         rcpCommand         peer/rcp_command
         rshCommand         peer/rsh_command
         cbackCommand       peer/cback_command
         ignoreFailureMode  peer/ignore_failures
         managed            peer/managed
         managedActions     peer/managed_actions

      Additionally, C{peer/type} is filled in with C{"remote"}, since this is
      a remote peer.

      The node itself is created as the next child of the parent node.  This
      method only adds one peer node.  The parent must loop for each peer in
      the C{StageConfig} object.

      If C{remotePeer} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param remotePeer: Remote peer to be added to the document.
      """
      if remotePeer is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "peer")
         addStringNode(xmlDom, sectionNode, "name", remotePeer.name)
         addStringNode(xmlDom, sectionNode, "type", "remote")
         addStringNode(xmlDom, sectionNode, "collect_dir", remotePeer.collectDir)
         addStringNode(xmlDom, sectionNode, "backup_user", remotePeer.remoteUser)
         addStringNode(xmlDom, sectionNode, "rcp_command", remotePeer.rcpCommand)
         addStringNode(xmlDom, sectionNode, "rsh_command", remotePeer.rshCommand)
         addStringNode(xmlDom, sectionNode, "cback_command", remotePeer.cbackCommand)
         addStringNode(xmlDom, sectionNode, "ignore_failures", remotePeer.ignoreFailureMode)
         addBooleanNode(xmlDom, sectionNode, "managed", remotePeer.managed)
         managedActions = Config._buildCommaSeparatedString(remotePeer.managedActions)
         addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)

   @staticmethod
   def _addPurgeDir(xmlDom, parentNode, purgeDir):
      """
      Adds a purge directory container as the next child of a parent.

      We add the following fields to the document::

         absolutePath   dir/abs_path
         retainDays     dir/retain_days

      The node itself is created as the next child of the parent node.  This
      method only adds one purge directory node.  The parent must loop for
      each purge directory in the C{PurgeConfig} object.

      If C{purgeDir} is C{None}, this method call will be a no-op.

      @param xmlDom: DOM tree as from L{createOutputDom}.
      @param parentNode: Parent that the section should be appended to.
      @param purgeDir: Purge directory to be added to the document.
      """
      if purgeDir is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "dir")
         addStringNode(xmlDom, sectionNode, "abs_path", purgeDir.absolutePath)
         addIntegerNode(xmlDom, sectionNode, "retain_days", purgeDir.retainDays)

   @staticmethod
   def _addDependencies(xmlDom, parentNode, dependencies):
      """
      Adds extended action dependencies to a parent node.
We add the following fields to the document:: runBefore depends/run_before runAfter depends/run_after If C{dependencies} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param dependencies: C{ActionDependencies} object to be added to the document """ if dependencies is not None: sectionNode = addContainerNode(xmlDom, parentNode, "depends") runBefore = Config._buildCommaSeparatedString(dependencies.beforeList) runAfter = Config._buildCommaSeparatedString(dependencies.afterList) addStringNode(xmlDom, sectionNode, "run_before", runBefore) addStringNode(xmlDom, sectionNode, "run_after", runAfter) @staticmethod def _buildCommaSeparatedString(valueList): """ Creates a comma-separated string from a list of values. As a special case, if C{valueList} is C{None}, then C{None} will be returned. @param valueList: List of values to be placed into a string @return: Values from valueList as a comma-separated string. """ if valueList is None: return None return ",".join(valueList) @staticmethod def _addBlankBehavior(xmlDom, parentNode, blankBehavior): """ Adds a blanking behavior container as the next child of a parent. We add the following fields to the document:: blankMode blank_behavior/mode blankFactor blank_behavior/factor The node itself is created as the next child of the parent node. If C{blankBehavior} is C{None}, this method call will be a no-op. @param xmlDom: DOM tree as from L{createOutputDom}. @param parentNode: Parent that the section should be appended to. @param blankBehavior: Blanking behavior to be added to the document. 
      """
      if blankBehavior is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "blank_behavior")
         addStringNode(xmlDom, sectionNode, "mode", blankBehavior.blankMode)
         addStringNode(xmlDom, sectionNode, "factor", blankBehavior.blankFactor)

   #################################################
   # High-level methods used for validating content
   #################################################

   def _validateContents(self):
      """
      Validates configuration contents per rules discussed in module documentation.

      This is the second pass at validation.  It ensures that any filled-in
      section contains valid data.  Any section which is not set to C{None}
      is validated per the rules for that section, laid out in the module
      documentation (above).

      @raise ValueError: If configuration is invalid.
      """
      self._validateReference()
      self._validateExtensions()
      self._validateOptions()
      self._validatePeers()
      self._validateCollect()
      self._validateStage()
      self._validateStore()
      self._validatePurge()

   def _validateReference(self):
      """
      Validates reference configuration.

      There are currently no reference-related validations.

      @raise ValueError: If reference configuration is invalid.
      """
      pass

   def _validateExtensions(self):
      """
      Validates extensions configuration.

      The list of actions may be either C{None} or an empty list C{[]} if
      desired.  Each extended action must include a name, a module, and a
      function.  Then, if the order mode is None or "index", an index is
      required; and if the order mode is "dependency", dependency information
      is required.

      @raise ValueError: If extensions configuration is invalid.
      """
      if self.extensions is not None:
         if self.extensions.actions is not None:
            names = []
            for action in self.extensions.actions:
               if action.name is None:
                  raise ValueError("Each extended action must set a name.")
               names.append(action.name)
               if action.module is None:
                  raise ValueError("Each extended action must set a module.")
               if action.function is None:
                  raise ValueError("Each extended action must set a function.")
               if self.extensions.orderMode is None or self.extensions.orderMode == "index":
                  if action.index is None:
                     raise ValueError("Each extended action must set an index, based on order mode.")
               elif self.extensions.orderMode == "dependency":
                  if action.dependencies is None:
                     raise ValueError("Each extended action must set dependency information, based on order mode.")
            checkUnique("Duplicate extension names exist:", names)

   def _validateOptions(self):
      """
      Validates options configuration.

      All fields must be filled in except the rsh command.  The rcp and rsh
      commands are used as default values for all remote peers.  Remote peers
      can also rely on the backup user as the default remote user name if they
      choose.

      @raise ValueError: If options configuration is invalid.
      """
      if self.options is not None:
         if self.options.startingDay is None:
            raise ValueError("Options section starting day must be filled in.")
         if self.options.workingDir is None:
            raise ValueError("Options section working directory must be filled in.")
         if self.options.backupUser is None:
            raise ValueError("Options section backup user must be filled in.")
         if self.options.backupGroup is None:
            raise ValueError("Options section backup group must be filled in.")
         if self.options.rcpCommand is None:
            raise ValueError("Options section remote copy command must be filled in.")

   def _validatePeers(self):
      """
      Validates peers configuration per rules in L{_validatePeerList}.

      @raise ValueError: If peers configuration is invalid.
""" if self.peers is not None: self._validatePeerList(self.peers.localPeers, self.peers.remotePeers) def _validateCollect(self): """ Validates collect configuration. The target directory must be filled in. The collect mode, archive mode, ignore file, and recursion level are all optional. The list of absolute paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent C{CollectConfig} object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either C{None} or an empty list C{[]} if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the C{CollectConfig} object to make the complete list for a given directory. @raise ValueError: If collect configuration is invalid. 
""" if self.collect is not None: if self.collect.targetDir is None: raise ValueError("Collect section target directory must be filled in.") if self.collect.collectFiles is not None: for collectFile in self.collect.collectFiles: if collectFile.absolutePath is None: raise ValueError("Each collect file must set an absolute path.") if self.collect.collectMode is None and collectFile.collectMode is None: raise ValueError("Collect mode must either be set in parent collect section or individual collect file.") if self.collect.archiveMode is None and collectFile.archiveMode is None: raise ValueError("Archive mode must either be set in parent collect section or individual collect file.") if self.collect.collectDirs is not None: for collectDir in self.collect.collectDirs: if collectDir.absolutePath is None: raise ValueError("Each collect directory must set an absolute path.") if self.collect.collectMode is None and collectDir.collectMode is None: raise ValueError("Collect mode must either be set in parent collect section or individual collect directory.") if self.collect.archiveMode is None and collectDir.archiveMode is None: raise ValueError("Archive mode must either be set in parent collect section or individual collect directory.") if self.collect.ignoreFile is None and collectDir.ignoreFile is None: raise ValueError("Ignore file must either be set in parent collect section or individual collect directory.") if (collectDir.linkDepth is None or collectDir.linkDepth < 1) and collectDir.dereference: raise ValueError("Dereference flag is only valid when a non-zero link depth is in use.") def _validateStage(self): """ Validates stage configuration. The target directory must be filled in, and the peers are also validated. Peers are only required in this section if the peers configuration section is not filled in. However, if any peers are filled in here, they override the peers configuration and must meet the validation criteria in L{_validatePeerList}. 
@raise ValueError: If stage configuration is invalid. """ if self.stage is not None: if self.stage.targetDir is None: raise ValueError("Stage section target directory must be filled in.") if self.peers is None: # In this case, stage configuration is our only configuration and must be valid. self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) else: # In this case, peers configuration is the default and stage configuration overrides. # Validation is only needed if it's stage configuration is actually filled in. if self.stage.hasPeers(): self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) def _validateStore(self): """ Validates store configuration. The device type, drive speed, and blanking behavior are optional. All other values are required. Missing booleans will be set to defaults. If blanking behavior is provided, then both a blanking mode and a blanking factor are required. The image writer functionality in the C{writer} module is supposed to be able to handle a device speed of C{None}. Any caller which needs a "real" (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. This is also where we make sure that the media type -- which is already a valid type -- matches up properly with the device type. @raise ValueError: If store configuration is invalid. 
""" if self.store is not None: if self.store.sourceDir is None: raise ValueError("Store section source directory must be filled in.") if self.store.mediaType is None: raise ValueError("Store section media type must be filled in.") if self.store.devicePath is None: raise ValueError("Store section device path must be filled in.") if self.store.deviceType is None or self.store.deviceType == "cdwriter": if self.store.mediaType not in VALID_CD_MEDIA_TYPES: raise ValueError("Media type must match device type.") elif self.store.deviceType == "dvdwriter": if self.store.mediaType not in VALID_DVD_MEDIA_TYPES: raise ValueError("Media type must match device type.") if self.store.blankBehavior is not None: if self.store.blankBehavior.blankMode is None and self.store.blankBehavior.blankFactor is None: raise ValueError("If blanking behavior is provided, all values must be filled in.") def _validatePurge(self): """ Validates purge configuration. The list of purge directories may be either C{None} or an empty list C{[]} if desired. All purge directories must contain a path and a retain days value. @raise ValueError: If purge configuration is invalid. """ if self.purge is not None: if self.purge.purgeDirs is not None: for purgeDir in self.purge.purgeDirs: if purgeDir.absolutePath is None: raise ValueError("Each purge directory must set an absolute path.") if purgeDir.retainDays is None: raise ValueError("Each purge directory must set a retain days value.") def _validatePeerList(self, localPeers, remotePeers): """ Validates the set of local and remote peers. Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section. 
@param localPeers: List of local peers @param remotePeers: List of remote peers @raise ValueError: If the peer list is invalid. """ if (localPeers is None or len(localPeers) < 1) and (remotePeers is None or len(remotePeers) < 1): raise ValueError("Peer list must contain at least one backup peer.") names = [] if localPeers is not None: for localPeer in localPeers: if localPeer.name is None: raise ValueError("Local peers must set a name.") names.append(localPeer.name) if localPeer.collectDir is None: raise ValueError("Local peers must set a collect directory.") if remotePeers is not None: for remotePeer in remotePeers: if remotePeer.name is None: raise ValueError("Remote peers must set a name.") names.append(remotePeer.name) if remotePeer.collectDir is None: raise ValueError("Remote peers must set a collect directory.") if (self.options is None or self.options.backupUser is None) and remotePeer.remoteUser is None: raise ValueError("Remote user must either be set in options section or individual remote peer.") if (self.options is None or self.options.rcpCommand is None) and remotePeer.rcpCommand is None: raise ValueError("Remote copy command must either be set in options section or individual remote peer.") if remotePeer.managed: if (self.options is None or self.options.rshCommand is None) and remotePeer.rshCommand is None: raise ValueError("Remote shell command must either be set in options section or individual remote peer.") if (self.options is None or self.options.cbackCommand is None) and remotePeer.cbackCommand is None: raise ValueError("Remote cback 
command must either be set in options section or individual remote peer.") if ((self.options is None or self.options.managedActions is None or len(self.options.managedActions) < 1) and (remotePeer.managedActions is None or len(remotePeer.managedActions) < 1)): raise ValueError("Managed actions list must be set in options section or individual remote peer.") checkUnique("Duplicate peer names exist:", names) ######################################################################## # General utility functions ######################################################################## def readByteQuantity(parent, name): """ Read a byte size value from an XML document. A byte size value is an interpreted string value. If the string value ends with "KB", "MB" or "GB", then the string before that is interpreted as kilobytes, megabytes or gigabytes. Otherwise, it is interpreted as bytes. @param parent: Parent node to search beneath. @param name: Name of node to search for. @return: ByteQuantity parsed from XML document """ data = readString(parent, name) if data is None: return None data = data.strip() if data.endswith("KB"): quantity = data[0:data.rfind("KB")].strip() units = UNIT_KBYTES elif data.endswith("MB"): quantity = data[0:data.rfind("MB")].strip() units = UNIT_MBYTES elif data.endswith("GB"): quantity = data[0:data.rfind("GB")].strip() units = UNIT_GBYTES else: quantity = data.strip() units = UNIT_BYTES return ByteQuantity(quantity, units) def addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity): """ Adds a text node as the next child of a parent, to contain a byte size. If the C{byteQuantity} is None, then the node will be created, but will be empty (i.e. will contain no text node child). The size in bytes will be normalized. If it is larger than 1.0 GB, it will be shown in GB ("1.0 GB"). If it is larger than 1.0 MB, it will be shown in MB ("1.0 MB"). Otherwise, it will be shown in bytes ("423413"). @param xmlDom: DOM tree as from C{impl.createDocument()}. 
@param parentNode: Parent node to create child for. @param nodeName: Name of the new container node. @param byteQuantity: ByteQuantity object to put into the XML document @return: Reference to the newly-created node. """ if byteQuantity is None: byteString = None elif byteQuantity.units == UNIT_KBYTES: byteString = "%s KB" % byteQuantity.quantity elif byteQuantity.units == UNIT_MBYTES: byteString = "%s MB" % byteQuantity.quantity elif byteQuantity.units == UNIT_GBYTES: byteString = "%s GB" % byteQuantity.quantity else: byteString = byteQuantity.quantity return addStringNode(xmlDom, parentNode, nodeName, byteString) CedarBackup2-2.26.5/CREDITS0000664000175000017500000002531012642035403016576 0ustar pronovicpronovic00000000000000# vim: set ft=text80: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Project : Cedar Backup, release 2 # Purpose : Credits for package # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ########## # Credits ########## Most of the source code in this project was written by Kenneth J. Pronovici. Some portions have been based on other pieces of open-source software, as indicated in the source code itself. Unless otherwise indicated, all Cedar Backup source code is Copyright (c) 2004-2011,2013-2016 Kenneth J. Pronovici and is released under the GNU General Public License, version 2. The contents of the GNU General Public License can be found in the LICENSE file, or can be downloaded from http://www.gnu.org/. Various patches have been contributed to the Cedar Backup codebase by Dmitry Rutsky. Major contributions include the initial implementation for the optimized media blanking strategy as well as improvements to the DVD writer implementation. 
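The suffix-handling rule implemented by readByteQuantity() above can be illustrated with a small standalone sketch. This version returns a plain float count of bytes rather than a ByteQuantity object, and the 1024-based multipliers are an assumption made here for illustration (the real code delegates unit handling to the ByteQuantity class and its UNIT_* constants).

```python
def parse_byte_quantity(data):
    """Parse "512 KB", "600 MB", "2.5 GB", or a bare number into bytes.

    Standalone sketch of the suffix rules used by readByteQuantity();
    the 1024-based multipliers here are an assumption for illustration.
    """
    if data is None:
        return None
    data = data.strip()
    # Check suffixes in the same order the original does: KB, MB, GB.
    for suffix, factor in (("KB", 1024.0), ("MB", 1024.0 ** 2), ("GB", 1024.0 ** 3)):
        if data.endswith(suffix):
            return float(data[:data.rfind(suffix)].strip()) * factor
    return float(data)  # no recognized suffix: interpret as bytes
```

For example, parse_byte_quantity("2.5 GB") yields 2684354560.0, matching the "2.5 GB" configuration syntax that the amazons3, capacity and split extensions accept.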
The PostgreSQL extension was contributed by Antoine Beaupre ("The Anarcat"), based on the existing MySQL extension. Lukasz K. Nowak helped debug the split functionality and also provided patches for parts of the documentation. Zoran Bosnjak contributed changes to collect.py to implement recursive collect behavior based on recursion level. Jan Medlock contributed patches to improve the manpage and to support recent versions of the /usr/bin/split command. Minor code snippets derived from newsgroup and mailing list postings are not generally attributed unless I used someone else's source code verbatim. Source code annotated as "(c) 2001, 2002 Python Software Foundation" was originally taken from or derived from code within the Python 2.3 codebase. This code was released under the Python 2.3 license, which is an MIT-style academic license. Items under this license include the function util.getFunctionReference(). Source code annotated as "(c) 2000-2004 CollabNet" was originally released under the CollabNet License, which is an Apache/BSD-style license. Items under this license include basic markup and stylesheets used in creating the user manual. The dblite.dtd and readme-dblite.html files are also assumed to be under the CollabNet License, since they were found as part of the Subversion source tree and did not specify an explicit copyright notice. Some of the PDF-specific graphics in the user manual (now obsolete and orphaned off in the doc/pdf directory) were either directly taken from or were derived from images distributed in Norman Walsh's Docbook XSL distribution. These graphics are (c) 1999, 2000, 2001 Norman Walsh and were originally released under a BSD-style license as documented below. Source code annotated as "(c) 2000 Fourthought Inc, USA" was taken from or derived from code within the PyXML distribution and was originally part of the 4DOM suite developed by Fourthought, Inc. Fourthought released the code under a BSD-like license. 
Items under this license include the XML pretty-printing functionality implemented in xmlutil.py. #################### # CollabNet License #################### /* ================================================================ * Copyright (c) 2000-2004 CollabNet. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * 3. The end-user documentation included with the redistribution, if * any, must include the following acknowledgment: "This product includes * software developed by CollabNet (http://www.Collab.Net/)." * Alternately, this acknowledgment may appear in the software itself, if * and wherever such third-party acknowledgments normally appear. * * 4. The hosted project names must not be used to endorse or promote * products derived from this software without prior written * permission. For written permission, please contact info@collab.net. * * 5. Products derived from this software may not use the "Tigris" name * nor may "Tigris" appear in their names without prior written * permission of CollabNet. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
* IN NO EVENT SHALL COLLABNET OR ITS CONTRIBUTORS BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE * GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER * IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * ==================================================================== * * This software consists of voluntary contributions made by many * individuals on behalf of CollabNet. */ ##################### # Python 2.3 License ##################### PSF LICENSE AGREEMENT FOR PYTHON 2.3 ------------------------------------ 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 2.3 software in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 2.3 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved" are retained in Python 2.3 alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python 2.3 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 2.3. 4. 
PSF is making Python 2.3 available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 2.3 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 2.3 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 2.3, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python 2.3, Licensee agrees to be bound by the terms and conditions of this License Agreement. ################## # Docbook License ################## Copyright --------- Copyright (C) 1999, 2000, 2001 Norman Walsh Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ``Software''), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
Except as contained in this notice, the names of individuals credited with contribution to this software shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization from the individuals in question. Any stylesheet derived from this Software that is publically distributed will be identified with a different name and the version strings in any derived Software will be changed so that no possibility of confusion between the derived package and this Software will exist. Warranty -------- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL NORMAN WALSH OR ANY OTHER CONTRIBUTOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###################### # Fourthought License ###################### Copyright (c) 2000 Fourthought Inc, USA All Rights Reserved Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of FourThought LLC not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. 
FOURTHOUGHT LLC DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL FOURTHOUGHT BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. CedarBackup2-2.26.5/LICENSE0000664000175000017500000004311712555716576016614 0ustar pronovicpronovic00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. 
These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. 
(Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19yy name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. CedarBackup2-2.26.5/manual/0002775000175000017500000000000012642035650017040 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/manual/Makefile0000664000175000017500000000765112555065474020521 0ustar pronovicpronovic00000000000000# vim: set ft=make: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Make # Project : Cedar Backup, release 2 # Purpose : Makefile used for building the Cedar Backup manual. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########
# Notes
########

# This Makefile was originally taken from the Subversion project's book
# (http://svnbook.red-bean.com/) and has been substantially modified (almost
# completely rewritten) for use with Cedar Backup.
#
# The original Makefile was (c) 2000-2004 CollabNet (see CREDITS).

########################
# Programs and commands
########################

CP       = cp
INSTALL  = install
MKDIR    = mkdir
RM       = rm
XSLTPROC = xsltproc
W3M      = w3m

############
# Locations
############

INSTALL_DIR         = ../doc/manual
XSL_DIR             = ../util/docbook
STYLES_CSS          = $(XSL_DIR)/styles.css
XSL_FO              = $(XSL_DIR)/fo-stylesheet.xsl
XSL_HTML            = $(XSL_DIR)/html-stylesheet.xsl
XSL_CHUNK           = $(XSL_DIR)/chunk-stylesheet.xsl
MANUAL_TOP          = .
MANUAL_DIR          = $(MANUAL_TOP)/src
MANUAL_CHUNK_DIR    = $(MANUAL_DIR)/chunk
MANUAL_HTML_TARGET  = $(MANUAL_DIR)/manual.html
MANUAL_CHUNK_TARGET = $(MANUAL_CHUNK_DIR)/index.html   # index.html is created last
MANUAL_TEXT_TARGET  = $(MANUAL_DIR)/manual.txt
MANUAL_XML_SOURCE   = $(MANUAL_DIR)/book.xml
MANUAL_ALL_SOURCE   = $(MANUAL_DIR)/*.xml
MANUAL_HTML_IMAGES  = $(MANUAL_DIR)/images/html/*.png

#############################################
# High-level targets and simple dependencies
#############################################

all: manual-html manual-chunk

install: install-manual-html install-manual-chunk install-manual-text

clean:
	-@$(RM) -f $(MANUAL_HTML_TARGET) $(MANUAL_FO_TARGET) $(MANUAL_TEXT_TARGET)
	-@$(RM) -rf $(MANUAL_CHUNK_DIR)

$(INSTALL_DIR):
	$(INSTALL) --mode=775 -d $(INSTALL_DIR)

###################
# HTML build rules
###################

manual-html: $(MANUAL_HTML_TARGET)

$(MANUAL_HTML_TARGET): $(MANUAL_ALL_SOURCE)
	$(XSLTPROC) --output $(MANUAL_HTML_TARGET) $(XSL_HTML) $(MANUAL_XML_SOURCE)

install-manual-html: $(MANUAL_HTML_TARGET) $(INSTALL_DIR)
	$(INSTALL) --mode=775 -d $(INSTALL_DIR)/images
	$(INSTALL) --mode=664 $(MANUAL_HTML_TARGET) $(INSTALL_DIR)
	$(INSTALL) --mode=664 $(STYLES_CSS) $(INSTALL_DIR)
	$(INSTALL) --mode=664 $(MANUAL_HTML_IMAGES) $(INSTALL_DIR)/images

###########################
# Chunked HTML build rules
###########################

manual-chunk: $(MANUAL_CHUNK_TARGET)

# The trailing slash in the $(XSLTPROC) command is essential, so that
# xsltproc will output pages to the dir
$(MANUAL_CHUNK_TARGET): $(MANUAL_ALL_SOURCE) $(STYLES_CSS) $(MANUAL_HTML_IMAGES)
	$(MKDIR) -p $(MANUAL_CHUNK_DIR)
	$(MKDIR) -p $(MANUAL_CHUNK_DIR)/images
	$(XSLTPROC) --output $(MANUAL_CHUNK_DIR)/ $(XSL_CHUNK) $(MANUAL_XML_SOURCE)
	$(CP) $(STYLES_CSS) $(MANUAL_CHUNK_DIR)
	$(CP) $(MANUAL_HTML_IMAGES) $(MANUAL_CHUNK_DIR)/images

install-manual-chunk: $(MANUAL_CHUNK_TARGET) $(INSTALL_DIR)
	$(INSTALL) --mode=775 -d $(INSTALL_DIR)/images
	$(INSTALL) --mode=664 $(MANUAL_CHUNK_DIR)/*.html $(INSTALL_DIR)
	$(INSTALL) --mode=664 $(STYLES_CSS) $(INSTALL_DIR)
	$(INSTALL) --mode=664 $(MANUAL_HTML_IMAGES) $(INSTALL_DIR)/images

###################
# Text build rules
###################

manual-text: manual-html $(MANUAL_TEXT_TARGET)

$(MANUAL_TEXT_TARGET):
	$(W3M) -dump -cols 80 $(MANUAL_HTML_TARGET) > $(MANUAL_TEXT_TARGET)

install-manual-text: $(MANUAL_TEXT_TARGET) $(INSTALL_DIR)
	$(INSTALL) --mode=664 $(MANUAL_TEXT_TARGET) $(INSTALL_DIR)

CedarBackup2-2.26.5/manual/src/book.xml

Cedar Backup 2 Software Manual
First
Kenneth J. Pronovici
Juliana E. Pronovici
2005-2008,2013-2015 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e.
if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA &preface; &intro; &basic; &install; &commandline; &config; &extensions; &extenspec; &depends; &recovering; &securingssh; ©right; CedarBackup2-2.26.5/manual/src/config.xml0000664000175000017500000061160412555742707021637 0ustar pronovicpronovic00000000000000 Configuration Overview Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy. First, familiarize yourself with the concepts in . In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in . Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over (in ) to become familiar with the command line interface. Then, look over (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. 
Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location. After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done. Which Platform? Cedar Backup has been designed for use on all UNIX-like systems. However, since it was developed on a Debian GNU/Linux system, and because I am a Debian developer, the packaging is prettier and the setup is somewhat simpler on a Debian system than on a system where you install from source. The configuration instructions below have been generalized so they should work well regardless of what platform you are running (i.e. RedHat, Gentoo, FreeBSD, etc.). If instructions vary for a particular platform, you will find a note related to that platform. I am always open to adding more platform-specific hints and notes, so write me if you find problems with these instructions. Configuration File Format Cedar Backup is configured through an XML See for a basic introduction to XML. configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions. All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. See , in . The extensions section is always optional and can be omitted unless extensions are in use. 
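Because the configuration file is plain XML, a quick well-formedness check can catch syntax mistakes before the first backup run. The sketch below uses only Python's standard library and is purely illustrative; it verifies XML syntax, not Cedar Backup's semantic rules (missing required sections, invalid values, and so on).

```python
import xml.etree.ElementTree as ET

def is_well_formed(path):
    """Return True if the file at 'path' parses as XML.

    This catches syntax errors such as unclosed tags or bad nesting,
    but not Cedar Backup semantic problems like a missing reference
    or options section.
    """
    try:
        ET.parse(path)
        return True
    except ET.ParseError:
        return False
```

A check like this is a useful first step when a backup fails immediately after a configuration change, since it separates "the XML is broken" from "the configuration is invalid".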
Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset. Sample Configuration File Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample. This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections. <?xml version="1.0"?> <cb_config> <reference> <author>Kenneth J. Pronovici</author> <revision>1.3</revision> <description>Sample</description> </reference> <options> <starting_day>tuesday</starting_day> <working_dir>/opt/backup/tmp</working_dir> <backup_user>backup</backup_user> <backup_group>group</backup_group> <rcp_command>/usr/bin/scp -B</rcp_command> </options> <peers> <peer> <name>debian</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> </peers> <collect> <collect_dir>/opt/backup/collect</collect_dir> <collect_mode>daily</collect_mode> <archive_mode>targz</archive_mode> <ignore_file>.cbignore</ignore_file> <dir> <abs_path>/etc</abs_path> <collect_mode>incr</collect_mode> </dir> <file> <abs_path>/home/root/.profile</abs_path> <collect_mode>weekly</collect_mode> </file> </collect> <stage> <staging_dir>/opt/backup/staging</staging_dir> </stage> <store> <source_dir>/opt/backup/staging</source_dir> <media_type>cdrw-74</media_type> <device_type>cdwriter</device_type> <target_device>/dev/cdrw</target_device> 
<target_scsi_id>0,0,0</target_scsi_id> <drive_speed>4</drive_speed> <check_data>Y</check_data> <check_media>Y</check_media> <warn_midnite>Y</warn_midnite> </store> <purge> <dir> <abs_path>/opt/backup/staging</abs_path> <retain_days>7</retain_days> </dir> <dir> <abs_path>/opt/backup/collect</abs_path> <retain_days>0</retain_days> </dir> </purge> </cb_config> Reference Configuration The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired. This is an example reference configuration section: <reference> <author>Kenneth J. Pronovici</author> <revision>Revision 1.3</revision> <description>Sample</description> <generator>Yet to be Written Config Tool (tm)</generator> </reference> The following elements are part of the reference configuration section: author Author of the configuration file. Restrictions: None revision Revision of the configuration file. Restrictions: None description Description of the configuration file. Restrictions: None generator Tool that generated the configuration file, if any. Restrictions: None Options Configuration The options configuration section contains configuration options that are not specific to any one action.
This is an example options configuration section: <options> <starting_day>tuesday</starting_day> <working_dir>/opt/backup/tmp</working_dir> <backup_user>backup</backup_user> <backup_group>backup</backup_group> <rcp_command>/usr/bin/scp -B</rcp_command> <rsh_command>/usr/bin/ssh</rsh_command> <cback_command>/usr/bin/cback</cback_command> <managed_actions>collect, purge</managed_actions> <override> <command>cdrecord</command> <abs_path>/opt/local/bin/cdrecord</abs_path> </override> <override> <command>mkisofs</command> <abs_path>/opt/local/bin/mkisofs</abs_path> </override> <pre_action_hook> <action>collect</action> <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command> </pre_action_hook> <post_action_hook> <action>collect</action> <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command> </post_action_hook> </options> The following elements are part of the options configuration section: starting_day Day that starts the week. Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared. Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive. working_dir Working (temporary) directory to use for backups. This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups. The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master). Restrictions: Must be an absolute path backup_user Effective user that backups should run as. This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced). This value is also used as the default remote backup user for remote peers. 
Restrictions: Must be non-empty backup_group Effective group that backups should run as. This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced). Restrictions: Must be non-empty rcp_command Default rcp-compatible copy command for staging. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway. Restrictions: Must be non-empty rsh_command Default rsh-compatible command to use for remote shells. The rsh command should be the exact command used for remote shells, including any required options. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty cback_command Default cback-compatible command to use on managed remote clients. The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.
Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Default set of actions that are managed on remote clients. This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly. Restrictions: Must be non-empty. override Command to override with a customized path. This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: command Name of the command to be overridden, i.e. cdrecord. Restrictions: Must be a non-empty string. abs_path The absolute path where the overridden command can be found. Restrictions: Must be an absolute path. pre_action_hook Hook configuring a command to be executed before an action. This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. 
This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. post_action_hook Hook configuring a command to be executed after an action. This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary. This subsection must contain the following two fields: action Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists. Restrictions: Must be a non-empty string. command Name of the command to be executed. 
This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command. Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands. Restrictions: Must be a non-empty string. Peers Configuration The peers configuration section contains a list of the peers managed by a master. This section is only required on a master. This is an example peers configuration section: <peers> <peer> <name>machine1</name> <type>local</type> <collect_dir>/opt/backup/collect</collect_dir> </peer> <peer> <name>machine2</name> <type>remote</type> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> <ignore_failures>all</ignore_failures> </peer> <peer> <name>machine3</name> <type>remote</type> <managed>Y</managed> <backup_user>backup</backup_user> <collect_dir>/opt/backup/collect</collect_dir> <rcp_command>/usr/bin/scp</rcp_command> <rsh_command>/usr/bin/ssh</rsh_command> <cback_command>/usr/bin/cback</cback_command> <managed_actions>collect, purge</managed_actions> </peer> </peers> The following elements are part of the peers configuration section: peer (local version) Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer managed by a master. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. 
The local peer subsection must contain the following fields: name Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local. collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer The ignore failure mode indicates whether not ready to be staged errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". peer (remote version) Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. The remote peer subsection must contain the following fields: name Hostname of the peer. 
For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers. type Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote. managed Indicates whether this peer is managed. A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). collect_dir Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command). Restrictions: Must be an absolute path. ignore_failures Ignore failure mode for this peer The ignore failure mode indicates whether not ready to be staged errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup. Restrictions: If set, must be one of "none", "all", "daily", or "weekly". backup_user Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty.
rcp_command The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty. rsh_command The rsh-compatible command for this peer. The rsh command should be the exact command used for remote shells, including any required options. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section. Restrictions: Must be non-empty cback_command The cback-compatible command for this peer. The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default cback command from the options section. Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty managed_actions Set of actions that are managed for this peer. This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section. Restrictions: Must be non-empty.
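To make the relationship between these peer fields concrete, here is a sketch of how a master might assemble a staging copy command from the configuration above. The function is purely illustrative (it is not Cedar Backup's actual staging code), but it shows how the name, backup_user and rcp_command fields combine.

```python
def build_staging_copy(rcp_command, backup_user, peer_name, remote_file, staging_dir):
    """Assemble an rcp-style command line to stage one file from a peer.

    Illustrative only: Cedar Backup's real staging logic is more involved,
    but the configured fields combine in essentially this shape.
    """
    return "%s %s@%s:%s %s" % (rcp_command, backup_user, peer_name,
                               remote_file, staging_dir)

# For the "machine2" peer from the example peers section:
build_staging_copy("/usr/bin/scp -B", "backup", "machine2",
                   "/opt/backup/collect/etc.tar.gz",
                   "/opt/backup/staging/machine2")
```

This also shows why the rcp command must be "the exact command used for remote copies, including any required options": it is used verbatim as the front of the copy command line.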
Collect Configuration The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up. Using a Link Farm Sometimes, it's not very convenient to list directories one by one in the Cedar Backup configuration file. For instance, when backing up your home directory, you often exclude as many directories as you include. The ignore file mechanism can be of some help, but it still isn't very convenient if there are a lot of directories to ignore (or if new directories pop up all of the time). In this situation, one option is to use a link farm rather than listing all of the directories in configuration. A link farm is a directory that contains nothing but a set of soft links to other files and directories. Normally, Cedar Backup does not follow soft links, but you can override this behavior for individual directories using the link_depth and dereference options (see below). When using a link farm, you still have to deal with each backed-up directory individually, but you don't have to modify configuration. Some users find that this works better for them. In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.
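The link farm approach described in the sidebar above can be set up with a few lines of scripting. This sketch uses Python's standard library; the function name and layout are illustrative, not part of Cedar Backup.

```python
import os

def build_link_farm(farm_dir, targets):
    """Create a link farm: a directory containing nothing but soft
    links to the real directories that should be backed up.

    Illustrative only; any naming collisions between targets would
    need to be handled by the caller.
    """
    os.makedirs(farm_dir, exist_ok=True)
    for target in targets:
        link = os.path.join(farm_dir, os.path.basename(target))
        if not os.path.islink(link):
            os.symlink(target, link)
```

You would then list the farm directory as a single collect directory, using the link_depth and dereference options (described below) so that Cedar Backup follows the links. Adding or removing a link changes what gets backed up without touching configuration.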
This is an example collect configuration section: <collect> <collect_dir>/opt/backup/collect</collect_dir> <collect_mode>daily</collect_mode> <archive_mode>targz</archive_mode> <ignore_file>.cbignore</ignore_file> <exclude> <abs_path>/etc</abs_path> <pattern>.*\.conf</pattern> </exclude> <file> <abs_path>/home/root/.profile</abs_path> </file> <dir> <abs_path>/etc</abs_path> </dir> <dir> <abs_path>/var/log</abs_path> <collect_mode>incr</collect_mode> </dir> <dir> <abs_path>/opt</abs_path> <collect_mode>weekly</collect_mode> <exclude> <abs_path>/opt/large</abs_path> <rel_path>backup</rel_path> <pattern>.*tmp</pattern> </exclude> </dir> </collect> The following elements are part of the collect configuration section: collect_dir Directory to collect files into. On a client, this is the directory that tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory. This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form. Restrictions: Must be an absolute path collect_mode Default collect mode. The collect mode describes how frequently a directory is backed up. See (in ) for more information. This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. archive_mode Default archive mode for collect files. The archive mode maps to the way that a backup file is stored.
A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

Restrictions: Must be one of tar, targz or tarbz2.

ignore_file

Default ignore file name. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

Restrictions: Must be non-empty.

recursion_level

Recursion level to use when collecting directories. This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory. Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory. The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead you want one archive file per home directory, you can set a recursion level of 1.
Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc. Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high.

This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

Restrictions: Must be an integer.

exclude

List of paths or patterns to exclude from the backup. This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

This section is optional, and if it exists can also be empty.

The exclude subsection can contain one or more of each of the following fields:

abs_path

An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

This field can be repeated as many times as is necessary.

Restrictions: Must be an absolute path.

pattern

A pattern to be recursively excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory.
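The anchored-matching rule described above can be sketched in Python (a simplified illustration only, not Cedar Backup's actual implementation):

```python
import re

def pattern_excludes(pattern, path):
    """Return True if an exclusion pattern matches a path.

    The pattern is treated as if anchored with ^ and $, so it
    must match the entire path, not just a substring of it.
    """
    return re.fullmatch(pattern, path) is not None
```

So pattern_excludes(r".*apache.*", "/var/log/apache") is True, while pattern_excludes(r"apache", "/var/log/apache") is False, because the unanchored text does not cover the whole path.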
This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

This field can be repeated as many times as is necessary.

Restrictions: Must be non-empty.

file

A file to be collected. This is a subsection which contains information about a specific file to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

The collect file subsection contains the following fields:

abs_path

Absolute path of the file to collect.

Restrictions: Must be an absolute path.

collect_mode

Collect mode for this file. The collect mode describes how frequently a file is backed up. See the discussion of collect modes elsewhere in this manual for more information.

This field is optional. If it doesn't exist, the backup will use the default collect mode.

Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

Restrictions: Must be one of daily, weekly or incr.

archive_mode

Archive mode for this file. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

This field is optional. If it doesn't exist, the backup will use the default archive mode.

Restrictions: Must be one of tar, targz or tarbz2.

dir

A directory to be collected. This is a subsection which contains information about a specific directory to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

The collect directory subsection contains the following fields:

abs_path

Absolute path of the directory to collect. The path may be either a directory, a soft link to a directory, or a hard link to a directory.
All three are treated the same at this level. The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc. Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

Restrictions: Must be an absolute path.

collect_mode

Collect mode for this directory. The collect mode describes how frequently a directory is backed up. See the discussion of collect modes elsewhere in this manual for more information.

This field is optional. If it doesn't exist, the backup will use the default collect mode.

Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

Restrictions: Must be one of daily, weekly or incr.

archive_mode

Archive mode for this directory. The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

This field is optional. If it doesn't exist, the backup will use the default archive mode.

Restrictions: Must be one of tar, targz or tarbz2.

ignore_file

Ignore file name for this directory. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

This field is optional. If it doesn't exist, the backup will use the default ignore file name.
Restrictions: Must be non-empty.

link_depth

Link depth value to use for this directory. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.

This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

Restrictions: If set, must be an integer ≥ 0.

dereference

Whether to dereference soft links. If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well. This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

Restrictions: Must be a boolean (Y or N).

exclude

List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

This section is entirely optional, and if it exists can also be empty.

The exclude subsection can contain one or more of each of the following fields:

abs_path

An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.
This field can be repeated as many times as is necessary.

Restrictions: Must be an absolute path.

rel_path

A relative path to be recursively excluded from the backup. The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web, a configured relative path of something/else would exclude the path /opt/web/something/else. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

This field can be repeated as many times as is necessary.

Restrictions: Must be non-empty.

pattern

A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

This field can be repeated as many times as is necessary.

Restrictions: Must be non-empty.

Stage Configuration

The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to. This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.
This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

<stage>
   <staging_dir>/opt/backup/stage</staging_dir>
</stage>

This is an example stage configuration section that overrides the default list of peers:

<stage>
   <staging_dir>/opt/backup/stage</staging_dir>
   <peer>
      <name>machine1</name>
      <type>local</type>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
   <peer>
      <name>machine2</name>
      <type>remote</type>
      <backup_user>backup</backup_user>
      <collect_dir>/opt/backup/collect</collect_dir>
   </peer>
</stage>

The following elements are part of the stage configuration section:

staging_dir

Directory to stage files into. This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

Restrictions: Must be an absolute path.

peer (local version)

Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
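The date-based staging layout described above can be sketched in Python (an illustration only; the exact directory format Cedar Backup uses internally is an assumption here):

```python
import datetime
import os

def staging_path(staging_dir, peer_name, day):
    """Compute the date-based staging directory for a peer,
    e.g. 2005/02/19/daystrom relative to the staging directory."""
    return os.path.join(staging_dir, day.strftime("%Y/%m/%d"), peer_name)
```

For example, staging_path("/opt/backup/stage", "daystrom", datetime.date(2005, 2, 19)) yields /opt/backup/stage/2005/02/19/daystrom.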
Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

The local peer subsection must contain the following fields:

name

Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

Restrictions: Must be non-empty, and unique among all peers.

type

Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local.

Restrictions: Must be local.

collect_dir

Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

Restrictions: Must be an absolute path.

peer (remote version)

Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

The remote peer subsection must contain the following fields:

name

Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

Restrictions: Must be non-empty, and unique among all peers.

type

Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote.

Restrictions: Must be remote.

collect_dir

Collect directory to stage from for this peer.
The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

Restrictions: Must be an absolute path.

backup_user

Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection.

This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

Restrictions: Must be non-empty.

rcp_command

The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

Restrictions: Must be non-empty.

Store Configuration

The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

This is an example store configuration section:

<store>
   <source_dir>/opt/backup/stage</source_dir>
   <media_type>cdrw-74</media_type>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
   <target_scsi_id>0,0,0</target_scsi_id>
   <drive_speed>4</drive_speed>
   <check_data>Y</check_data>
   <check_media>Y</check_media>
   <warn_midnite>Y</warn_midnite>
   <no_eject>N</no_eject>
   <refresh_media_delay>15</refresh_media_delay>
   <eject_delay>2</eject_delay>
   <blank_behavior>
      <mode>weekly</mode>
      <factor>1.3</factor>
   </blank_behavior>
</store>

The following elements are part of the store configuration section:

source_dir

Directory whose contents should be written to media.
This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

Restrictions: Must be an absolute path.

device_type

Type of the device used to write the media. This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

This field is optional. If it doesn't exist, the cdwriter device type is assumed.

Restrictions: If set, must be either cdwriter or dvdwriter.

media_type

Type of the media in the device. Unless you want to throw away a backup disc every week, you are probably best off using rewritable media. You must choose a media type that is appropriate for the device type you chose above. See the discussion of media types elsewhere in this manual for more information.

Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

target_device

Filesystem device name for writer device. This value is required for both CD writers and DVD writers. This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw. In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified. Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

Restrictions: Must be an absolute path.

target_scsi_id

SCSI id for the writer device.
This value is optional for CD writers and is ignored for DVD writers. If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord. Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun. An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord). See the section on writer devices for more information on how they are configured.

Restrictions: If set, must be a valid SCSI identifier.

drive_speed

Speed of the drive, i.e. 2 for a 2x device.

This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed. For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

Restrictions: If set, must be an integer ≥ 1.

check_data

Whether the media should be validated. This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.
Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

check_media

Whether the media should be checked before writing to it. By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.) If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

warn_midnite

Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day. Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

no_eject

Indicates that the writer device should not be ejected. Under some circumstances, Cedar Backup ejects (opens and closes) the writer device.
This is done because some writer devices need to re-load the media before noticing a media state change (like a new session). For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer.

Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

This field is optional. If it doesn't exist, then N will be assumed.

Restrictions: Must be a boolean (Y or N).

refresh_media_delay

Number of seconds to delay after refreshing media.

This field is optional. If it doesn't exist, no delay will occur. Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

Restrictions: If set, must be an integer ≥ 1.

eject_delay

Number of seconds to delay after ejecting the tray.

This field is optional. If it doesn't exist, no delay will occur. If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

Restrictions: If set, must be an integer ≥ 1.

blank_behavior

Optimized blanking strategy. More information about Cedar Backup's optimized blanking strategy is available elsewhere in this manual.

This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

blank_mode

Blanking mode.

Restrictions: Must be one of "daily" or "weekly".
blank_factor

Blanking factor.

Restrictions: Must be a floating point number ≥ 0.

Purge Configuration

The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0). If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

This is an example purge configuration section:

<purge>
   <dir>
      <abs_path>/opt/backup/stage</abs_path>
      <retain_days>7</retain_days>
   </dir>
   <dir>
      <abs_path>/opt/backup/collect</abs_path>
      <retain_days>0</retain_days>
   </dir>
</purge>

The following elements are part of the purge configuration section:

dir

A directory to purge within. This is a subsection which contains information about a specific directory to purge within. This section can be repeated as many times as is necessary. At least one purge directory must be configured.

The purge directory subsection contains the following fields:

abs_path

Absolute path of the directory to purge within. The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed.
The purge directory itself will never be removed. The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

Restrictions: Must be an absolute path.

retain_days

Number of days to retain old files. Once it has been more than this many days since a file was last modified, it is a candidate for removal.

Restrictions: Must be an integer ≥ 0.

Extensions Configuration

The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.
If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action. To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

This is how the hypothetical action would be configured:

<extensions>
   <action>
      <name>database</name>
      <module>foo</module>
      <function>bar</function>
      <index>99</index>
   </action>
</extensions>

The following elements are part of the extensions configuration section:

action

This is a subsection that contains configuration related to a single extended action. This section can be repeated as many times as is necessary.

The action subsection contains the following fields:

name

Name of the extended action.

Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

module

Name of the Python module associated with the extension function.

Restrictions: Must be a non-empty string and a valid Python identifier.

function

Name of the Python extension function within the module.

Restrictions: Must be a non-empty string and a valid Python identifier.

index

Index of action, for execution ordering.

Restrictions: Must be an integer ≥ 0.
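The index-based interleaving can be sketched in Python (an illustration of the ordering rule only, not Cedar Backup's actual scheduler):

```python
# Predefined indexes for the standard actions, as described above.
STANDARD_INDEXES = {"collect": 100, "stage": 200, "store": 300, "purge": 400}

def execution_order(requested, extended_indexes):
    """Order the requested actions by index; extended_indexes maps
    each configured extended action name to its configured index."""
    indexes = dict(STANDARD_INDEXES, **extended_indexes)
    return sorted(requested, key=lambda name: indexes[name])
```

For example, execution_order(["purge", "database", "collect"], {"database": 99}) yields ["database", "collect", "purge"], so the hypothetical database action runs immediately before collect.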
Setting up a Pool of One

Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week.
If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag. Step 2: Make sure email works. Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user that is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors. Step 3: Configure your writer device. Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation. Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See for more information on writer devices and how they are configured. There is no need to set up your CD/DVD device if you have decided not to execute the store action. 
Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers. Step 4: Configure your backup user. Choose a user to be used for backups. Some platforms may come with a ready made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one. Step 5: Create your backup tree. Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this: /opt/ backup/ collect/ stage/ tmp/ If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700. You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. 
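The recommended layout and permissions from Step 5 can be set up with a few commands. This is a sketch using a scratch path for illustration; on a real system you would use /opt/backup (or whatever root you chose) and give ownership of the tree to your backup user:

```shell
# Create the pool-of-one backup tree: collect, stage and tmp directories.
# Scratch root for illustration; substitute /opt/backup on a real system.
backup_root=/tmp/cback-demo-tree
mkdir -p "$backup_root/collect" "$backup_root/stage" "$backup_root/tmp"

# Restrict access, since collected data may include sensitive files.
chmod 700 "$backup_root" \
          "$backup_root/collect" "$backup_root/stage" "$backup_root/tmp"

# On a real system, also give the tree to the backup user (requires root):
#   chown -R backup:backup /opt/backup
```
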
If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp. Step 6: Create the Cedar Backup configuration file. Following the instructions in (above), create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root). Step 7: Validate the Cedar Backup configuration file. Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately. Step 8: Test your backup. Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read. 
If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. See . To be safe, always enable the consistency check option in the store configuration section. Step 9: Modify the backup cron jobs. Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file: 30 00 * * * root cback all Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory: #!/bin/sh cback all You should consider adding the --output or --debug switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to. Setting up a Client Peer Node Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. 
Note: all of these configuration steps should be run as the root user, unless otherwise indicated. See for some important notes on how to optionally further secure password-less SSH connections to your clients. Step 1: Decide when you will run your backup. There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc. Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag. Step 2: Make sure email works. Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user that is running the Cedar Backup cron jobs (typically root). 
Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors. Step 3: Configure the master in your backup pool. You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client. To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub: user@machine> cat ~/.ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69 uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine Step 4: Configure your backup user. Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one. Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa: user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa Generating public/private rsa key pair. Created directory '/home/user/.ssh'. Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: 11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine The default permissions for this directory should be fine. 
However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600. If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required. Step 5: Create your backup tree. Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night). You should create a collect directory and a working (temporary) directory. One recommended layout is this: /opt/ backup/ collect/ tmp/ If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700. You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. 
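The SSH file permissions called out above can be applied mechanically. This is a sketch against a scratch directory standing in for the backup user's ~/.ssh:

```shell
# Apply the recommended SSH permissions: 700 on the directory, 600 on the
# private key and authorized_keys, 644 on the public key.
# Scratch directory for illustration, standing in for ~/.ssh.
sshdir=/tmp/cback-demo-ssh
mkdir -p "$sshdir"
touch "$sshdir/id_rsa" "$sshdir/id_rsa.pub" "$sshdir/authorized_keys"
chmod 700 "$sshdir"
chmod 600 "$sshdir/id_rsa" "$sshdir/authorized_keys"
chmod 644 "$sshdir/id_rsa.pub"
```

If any of these files are more permissive than this, SSH itself may refuse to use them, so password-less logins from the master would fail.
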
If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp. Step 6: Create the Cedar Backup configuration file. Following the instructions in (above), create a configuration file for your machine. Since you are working with a client, you must configure the action-specific sections for the collect and purge actions. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root). Step 7: Validate the Cedar Backup configuration file. Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool. Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately. Step 8: Test your backup. Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors. Step 9: Modify the backup cron jobs. 
Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file: 30 00 * * * root cback collect 30 06 * * * root cback purge You should consider adding the --output or --debug switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to. Setting up a Master Peer Node Cedar Backup has been designed to back up entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked. 
Note: all of these configuration steps should be run as the root user, unless otherwise indicated. This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual. Step 1: Decide when you will run your backup. There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc. Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag. Step 2: Make sure email works. Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. 
Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user that is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors. Step 3: Configure your writer device. Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation. Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See for more information on writer devices and how they are configured. There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers. Step 4: Configure your backup user. Choose a user to be used for backups. Some platforms may come with a ready made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. 
See your distribution's documentation for information on how to add a user. Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one. Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa: user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa Generating public/private rsa key pair. Created directory '/home/user/.ssh'. Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: 11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required. Step 5: Create your backup tree. Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. 
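As a rough worked example of the sizing rule above (twice the pool's nightly total, plus the master's own collected data), with hypothetical sizes:

```shell
# Hypothetical nightly sizes, in megabytes (made-up numbers).
pool_nightly_mb=2000      # total collected from all clients each night
master_collect_mb=500     # collected on the master itself

# Twice the pool total (staged data plus the ISO image built from it),
# plus the master's own collect data.
required_mb=$(( 2 * pool_nightly_mb + master_collect_mb ))
echo "$required_mb"       # prints 4500
```

The point of the doubling is that staged data and the ISO filesystem image built from it exist on disk at the same time; skipping the nightly purge raises the requirement further.
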
One recommended layout is this: /opt/ backup/ collect/ stage/ tmp/ If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700. You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp. Step 6: Create the Cedar Backup configuration file. Following the instructions in (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge. Note that the master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master. Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to use your master machine as a consolidation point that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option). Configuration files should always be writable only by root (or by the file owner, if the owner is not root). 
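Tightening the configuration file permissions described here looks like this. The path below is a scratch file for illustration; the real file is /etc/cback.conf, typically owned by root:

```shell
# Make the Cedar Backup config file readable and writable by its owner
# only, which matters if it contains passwords for extensions.
# Scratch path for illustration; the real file is /etc/cback.conf.
conf=/tmp/cback-demo.conf
touch "$conf"
chmod 600 "$conf"
stat -c '%a' "$conf"    # prints 600
```

Mode 600 satisfies both recommendations at once: the file is writable only by its owner, and it is not readable by anyone else.
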
If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root). Step 7: Validate the Cedar Backup configuration file. Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to. Note: the most common cause of configuration problems is not closing XML tags properly. Any XML tag that is opened must be closed appropriately. Step 8: Test connectivity to client machines. This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client. Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of the backup user on the client machine, and machine is the name of the client machine. If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients. Step 9: Test your backup. Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.) When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. 
You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read. You may also want to run cback purge on the master and each client once you have finished validating that everything worked. If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. To be safe, always enable the consistency check option in the store configuration section. Step 10: Modify the backup cron jobs. Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file: 30 00 * * * root cback collect 30 02 * * * root cback stage 30 04 * * * root cback store 30 06 * * * root cback purge You should consider adding the --output or --debug switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to. Configuring your Writer Device Device Types In order to execute the store action, you need to know how to identify your writer device. 
Cedar Backup supports two device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware. Devices identified by device name For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check. Devices identified by SCSI id Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type. In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations. A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system. On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device. 
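Putting the two identifiers together, the store configuration for an IDE writer with an emulated SCSI id might contain elements like these (illustrative values taken from the Linux examples in this chapter; the other store options are omitted here):

```xml
<target_device>/dev/cdrom</target_device>
<target_scsi_id>ATA:1,0,0</target_scsi_id>
```

For a device addressed purely by name, you would keep <target_device> and leave <target_scsi_id> blank or remove it entirely.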
You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1). Linux Notes On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later). Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values. However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation. Finding your Linux CD Writer Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path: cdrecord -prcap dev=/dev/cdrom Running this command on my hardware gives output that looks like this (just the top few lines): Device type : Removable CD-ROM Version : 0 Response Format: 2 Capabilities : Vendor_info : 'LITE-ON ' Identification : 'DVDRW SOHW-1673S' Revision : 'JS02' Device seems to be: Generic mmc2 DVD-R/DVD-RW. Drive capabilities, per MMC-3 page 2A: If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank. 
If this doesn't work, you should try to find an ATA or ATAPI device: cdrecord -scanbus dev=ATA cdrecord -scanbus dev=ATAPI On my development system, I get a result that looks something like this for ATA: scsibus1: 1,0,0 100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM 1,1,0 101) * 1,2,0 102) * 1,3,0 103) * 1,4,0 104) * 1,5,0 105) * 1,6,0 106) * 1,7,0 107) * Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>. Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO or the ATA RAID HOWTO for more information. Mac OS X Notes On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, e.g. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l. (Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information.) Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware.
The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully. If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution. Optimized Blanking Strategy When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period. Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often. If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked. This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected). There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration; otherwise, you will risk losing data.
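To get a feel for choosing a weekly blanking factor, you can estimate the growth ratio yourself: the total size of a week's backups divided by the size of the full backup at the start of that week. The sketch below is purely illustrative (Cedar Backup does not ship such a helper); the sizes are the example staging-directory figures, in kilobytes, used elsewhere in this section.

```python
def estimate_blanking_factor(full_size, incremental_sizes):
    """Estimate a weekly blanking factor: total weekly backup / full backup size."""
    return (full_size + sum(incremental_sizes)) / float(full_size)

# Example figures (KB) for one week of staging data: a full backup
# followed by six incremental backups.
full = 6812
incrementals = [3044, 3152, 3056, 3060, 3056, 4776]
factor = estimate_blanking_factor(full, incrementals)
print(round(factor, 4))   # → 3.9571
```

Since the estimated ratio here is just under 4, configuring a factor of 5.0 would leave some headroom against mid-week growth.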
If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup. If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true: bytes available / (1 + bytes required) ≤ blanking factor Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate: Total size of weekly backup / Full backup size at the start of the week This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week: /opt/backup/staging# du -s 2007/03/* 3040 2007/03/01 3044 2007/03/02 6812 2007/03/03 3044 2007/03/04 3152 2007/03/05 3056 2007/03/06 3060 2007/03/07 3056 2007/03/08 4776 2007/03/09 6812 2007/03/10 11824 2007/03/11 In this case, the ratio is approximately 4: (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571 To be safe, you might choose to configure a factor of 5.0. Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary. If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used. Installation Background There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.
If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself. Non-Linux Platforms Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 2, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems. To run a Cedar Backup client, you really just need a working Python 2 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided further on in this chapter. Installing on a Debian System The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude. If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian etch release is the first release to contain Cedar Backup 2.) Otherwise, you need to install from the Cedar Solutions APT data source. To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. After you have configured the proper APT data source, install Cedar Backup using this set of commands: $ apt-get update $ apt-get install cedar-backup2 cedar-backup2-doc Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute.
The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them. If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. In either case, once the package has been installed, you can proceed to configuration as described in the configuration chapter. The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package. Installing from Source On platforms other than Debian, Cedar Backup is installed from a Python source distribution, as described below. You will have to manage dependencies on your own. Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out the dependencies appendix. That appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms. Installing Dependencies Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met. Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it. You must install Python 2 on every peer node in a pool (master or client).
Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines. Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action: mkisofs eject mount umount volname Then, you need this utility if you are writing CD media: cdrecord or this utility if you are writing DVD media: growisofs All of these utilities are common and are easy to find for almost any UNIX-like operating system. Installing the Source Package Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py. Once you have downloaded the source package from the Cedar Solutions website, untar it: $ zcat CedarBackup2-2.0.0.tar.gz | tar xvf - This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename. If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps: $ cd CedarBackup2-2.0.0 $ python setup.py install Make sure that you are using Python 2.7 or better to execute setup.py. You may also wish to run the unit tests before actually installing anything. Run them like so: python util/test.py If any unit test reports a failure on your system, please email me the output from the unit test (support@cedar-solutions.com), so I can fix the problem. This is particularly important for non-Linux platforms where I do not have a test system available to me. Some users might want to choose a different install location or change other install parameters.
To get more information about how setup.py works, use the --help option: $ python setup.py --help $ python setup.py install --help In any case, once the package has been installed, you can proceed to configuration as described in the configuration chapter. Data Recovery Finding your Data The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is, if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.) Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name. This is the root directory of my example disc: root:/mnt/cdrw# ls -l total 4 drwxr-x--- 3 backup backup 4096 Sep 01 06:30 2005/ In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006). Within each year directory is one subdirectory for each month represented in the backup. root:/mnt/cdrw/2005# ls -l total 2 dr-xr-xr-x 6 root root 2048 Sep 11 05:30 09/ In this example, the backup represents data entirely from the month of September, 2005.
If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005). Within each month directory is one subdirectory for each day represented in the backup. root:/mnt/cdrw/2005/09# ls -l total 8 dr-xr-xr-x 5 root root 2048 Sep 7 05:30 07/ dr-xr-xr-x 5 root root 2048 Sep 8 05:30 08/ dr-xr-xr-x 5 root root 2048 Sep 9 05:30 09/ dr-xr-xr-x 5 root root 2048 Sep 11 05:30 11/ Depending on how far into the week you are, you might have as few as one daily directory in here, or as many as seven. Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup: root:/mnt/cdrw/2005/09/07# ls -l total 10 dr-xr-xr-x 2 root root 2048 Sep 7 02:31 host1/ -r--r--r-- 1 root root 0 Sep 7 03:27 cback.stage dr-xr-xr-x 2 root root 4096 Sep 7 02:30 host2/ dr-xr-xr-x 2 root root 4096 Sep 7 03:23 host3/ In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27. Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.
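Because the layout is a plain year/month/day hierarchy, finding the most recent daily directory can be automated. The helper below is a hypothetical sketch (it is not part of Cedar Backup) and assumes the YYYY/MM/DD layout just described.

```python
import os

def newest_daily_dir(root):
    """Walk year, month, then day under a backup root and return the newest daily directory."""
    path = root
    for _ in range(3):  # descend year, then month, then day
        # Dated directories are purely numeric; this skips files like cback.stage.
        entries = sorted(e for e in os.listdir(path) if e.isdigit())
        if not entries:
            raise IOError("no dated subdirectories under %s" % path)
        path = os.path.join(path, entries[-1])
    return path
```

For the example disc above, newest_daily_dir("/mnt/cdrw") would return /mnt/cdrw/2005/09/11, and the peer directories to restore from sit directly beneath that.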
root:/mnt/cdrw/2005/09/07/host1# ls -l total 157976 -r--r--r-- 1 root root 11206159 Sep 7 02:30 boot.tar.bz2 -r--r--r-- 1 root root 0 Sep 7 02:30 cback.collect -r--r--r-- 1 root root 3199 Sep 7 02:30 dpkg-selections.txt.bz2 -r--r--r-- 1 root root 908325 Sep 7 02:30 etc.tar.bz2 -r--r--r-- 1 root root 389 Sep 7 02:30 fdisk-l.txt.bz2 -r--r--r-- 1 root root 1003100 Sep 7 02:30 ls-laR.txt.bz2 -r--r--r-- 1 root root 19800 Sep 7 02:30 mysqldump.txt.bz2 -r--r--r-- 1 root root 4133372 Sep 7 02:30 opt-local.tar.bz2 -r--r--r-- 1 root root 44794124 Sep 8 23:34 opt-public.tar.bz2 -r--r--r-- 1 root root 30028057 Sep 7 02:30 root.tar.bz2 -r--r--r-- 1 root root 4747070 Sep 7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2 -r--r--r-- 1 root root 603863 Sep 7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2 -r--r--r-- 1 root root 113484 Sep 7 02:30 var-lib-jspwiki.tar.bz2 -r--r--r-- 1 root root 19556660 Sep 7 02:30 var-log.tar.bz2 -r--r--r-- 1 root root 14753855 Sep 7 02:30 var-mail.tar.bz2 As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent. Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki. The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension. The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).
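The collect naming convention can be inverted mechanically. The helper below is purely illustrative (Cedar Backup does not ship it), and note one caveat: a hyphen in a real path component, say /var/lib/my-app, is indistinguishable from a directory separator in the archive name, so the result is only a starting point.

```python
def guess_source_path(filename):
    """Guess the backed-up directory from a collect archive name (illustrative only)."""
    for ext in (".tar.bz2", ".tar.gz", ".tar"):
        if filename.endswith(ext):
            stem = filename[:-len(ext)]
            break
    else:
        raise ValueError("not a collect archive: %s" % filename)
    if stem == "-":          # special case: the root directory itself
        return "/"
    return "/" + stem.replace("-", "/")

print(guess_source_path("boot.tar.bz2"))              # → /boot
print(guess_source_path("var-lib-jspwiki.tar.bz2"))   # → /var/lib/jspwiki
```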
Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc. Recovering Filesystem Data Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar), represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration. If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week. Where to extract your backup If you are restoring a home directory or some other non-system directory as part of a full restore, it is probably fine to extract the backup directly into the filesystem. 
If you are restoring a system directory like /etc as part of a full restore, extracting directly into the filesystem is likely to break things, especially if you re-installed a newer version of your operating system than the one you originally backed up. It's better to extract directories like this to a temporary location and pick out only the files you find you need. When doing a partial restore, I suggest always extracting to a temporary location. Doing it this way gives you more control over what you restore, and helps you avoid compounding your original problem with another one (like overwriting the wrong file, oops). Full Restore To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.) All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location. For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/): root:/# bzcat boot.tar.bz2 | tar xvf - Of course, use zcat or just cat, depending on what kind of compression is in use. If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /. root:/tmp# bzcat boot.tar.bz2 | tar xvf - Again, use zcat or just cat as appropriate. For more information, you might want to check out the manpage or GNU info documentation for the tar command. Partial Restore Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it.
Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things). The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Whereas with a full restore you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup. Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place. Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup: root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file Of course, use zcat or just cat, depending on what kind of compression is in use. The t option tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternately, you can omit the path/to/file and search through the output using more or less. If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there. Once you have found your file, extract it using the x option: root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file Again, use zcat or just cat as appropriate.
Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file. For more information, you might want to check out the manpage or GNU info documentation for the tar command. Recovering MySQL Data MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup. I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it! MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure. First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute: daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root Of course, use zcat or just cat, depending on what kind of compression is in use. Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.
If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above: daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore: daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database Again, use zcat or just cat as appropriate. For more information on using MySQL, see the documentation on the MySQL web site, or the manpages for the mysql and mysqldump commands. Recovering Subversion Data Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
Next, if you still have the old Subversion repository around, you might want to just move it aside (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show. Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic. root:/tmp# svnadmin create --fs-type=fsfs testrepo Next, load the full backup into the repository: root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo Of course, use zcat or just cat, depending on what kind of compression is in use. Follow that with loads for each of the incremental backups: root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo Again, use zcat or just cat as appropriate. When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800). Note: don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both old and new repositories, the results are identical. This means that the repositories do contain the same content. For more information on using Subversion, see the book Version Control with Subversion or the Subversion FAQ. Recovering Mailbox Data Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.
Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration. There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date. Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any). Here is an example for a single backed-up file: root:/tmp# rm restore.mbox # make sure it's not left over root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox root:/tmp# grepmail -a -u restore.mbox > nodups.mbox At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist. Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat. 
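grepmail -a -u is the documented way to strip duplicate messages. If grepmail is not available, the same idea (keep the first copy of each message, keyed here on the Message-ID header) can be sketched in a few lines of Python. This is an illustrative assumption-laden helper, not part of Cedar Backup: it trusts that each message carries a unique Message-ID header and that no body line happens to look like one.

```python
def dedupe_mbox(text):
    """Keep the first copy of each message in mbox-format text, keyed on Message-ID."""
    # Split on the mbox "From " separator lines that begin each message.
    messages, current = [], []
    for line in text.splitlines(True):
        if line.startswith("From ") and current:
            messages.append("".join(current))
            current = []
        current.append(line)
    if current:
        messages.append("".join(current))
    seen, unique = set(), []
    for msg in messages:
        mid = None
        for header in msg.splitlines():
            if header.lower().startswith("message-id:"):
                mid = header.split(":", 1)[1].strip()
                break
        if mid is not None and mid in seen:
            continue             # duplicate from an overlapping backup file
        if mid is not None:
            seen.add(mid)
        unique.append(msg)
    return "".join(unique)
```

You would run this over the concatenated restore.mbox content and write the result out, much as grepmail -a -u produces nodups.mbox in the example above.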
If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case. Recovering Data Split by the Split Extension The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command. The split-up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together. root:/tmp# rm usr-src-software.tar.gz # make sure it's not there root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz Then, use the resulting file like usual. Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include). Extension Architecture Interface The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension. You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.
There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

<extensions>
   <action>
      <name>database</name>
      <module>foo</module>
      <function>bar</function>
      <index>101</index>
   </action>
</extensions>

In this case, the action database has been mapped to the extension function foo.bar(). Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

- Extensions may not write to stdout or stderr using functions such as print or sys.write. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

- Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

- Extensions may not return any value. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

- Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance.
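As a concrete illustration of these rules, a minimal extension function might look like the following sketch. The action name, the logger topic suffix, and the hard-coded repository list are hypothetical; a real extension would pull its repositories from its own configuration section and run any utilities via CedarBackup2.util.executeCommand.

```python
import logging

# Flow-of-control logging happens on the CedarBackup2.log topic, per the rules above.
# The ".extend.database" suffix is an illustrative naming choice, not a requirement.
logger = logging.getLogger("CedarBackup2.log.extend.database")

def backup_database(configPath, options, config):
    """Hypothetical extension function for a 'database' extended action.

    Follows the interface rules: nothing is written to stdout or stderr,
    nothing is returned, and failure is signaled by raising an exception.
    """
    repositories = ["/path/to/repo1", "/path/to/repo2"]  # stand-in for parsed config
    if not repositories:
        raise ValueError("No repositories configured for database extension.")
    for repository in repositories:
        logger.info("Backing up repository [%s]", repository)
    logger.debug("Completed database extended action.")
```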
However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration. Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

def function(configPath, options, config):
   """Sample extension function."""
   pass

This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed. The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3). If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions. For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up.
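One way to take the "parse it yourself" approach for such a hypothetical database section is sketched below with the standard library XML parser. The <database> section name, the <repository> elements, and the function name are illustrative assumptions; it also assumes the extension section sits directly under the configuration document's root element, alongside the standard sections.

```python
import xml.etree.ElementTree as ET

def parse_database_config(configPath):
    """Return the <repository> paths from a hypothetical <database> section.

    Assumes the extension section is a direct child of the configuration
    document's root element, alongside the standard sections.
    """
    tree = ET.parse(configPath)
    section = tree.getroot().find("database")
    if section is None:
        return []  # extension section not present
    return [node.text for node in section.findall("repository")]
```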
This information might go into a section something like this:

<database>
   <repository>/path/to/repo1</repository>
   <repository>/path/to/repo2</repository>
</database>

In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.

CedarBackup2-2.26.5/manual/src/basic.xml

Basic Concepts

General Architecture

Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality. The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user. The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See for more information on how Cedar Backup is configured. You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer.
If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also . Data Recovery Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in ) can handle the task of restoring their own system, using the standard system tools at hand. If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category. My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need. Cedar Backup Pools There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines. Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way. 
The Backup Process The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control. This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See (later in this chapter) for more information on this subject. A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge. In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order. The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below. See for more information on how a backup run is configured. Flexibility Cedar Backup was designed to be flexible. It allows you to decide for yourself which backup steps you care about executing (and when you execute them), based on your own situation and your own priorities. As an example, I always back up every machine I own. I typically keep 7-10 days of staging directories around, but switch CD/DVD media mostly every week. That way, I can periodically take a disc off-site in case the machine gets stolen or damaged. If you're not worried about these risks, then there's no need to write to disc. 
In fact, some users prefer to use their master machine as a simple consolidation point. They don't back up any data on the master, and don't write to disc at all. They just use Cedar Backup to handle the mechanics of moving backed-up data to a central location. This isn't quite what Cedar Backup was written to do, but it is flexible enough to meet their needs.

The Collect Action

The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2). There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up. Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file (analogous to .cvsignore in CVS) or specify absolute paths or filename patterns (in terms of Python regular expressions) to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration. This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there.
If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action). The Stage Action The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name. For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer. Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh. If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running. Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc. Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged. The Store Action The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. 
After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful. If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs. This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine. The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

Current Staging Directory

The store action tries to be smart about finding the current staging directory. It first checks the current day's staging directory. If that directory exists, and it has not yet been written to disc (i.e. there is no store indicator), then it will be used. Otherwise, the store action will look for an unused staging directory for either the previous day or the next day, in that order. A warning will be written to the log under these circumstances (controlled by the <warn_midnite> configuration value). This behavior varies slightly when the --full option is in effect. Under these circumstances, any existing store indicator will be ignored. Also, the store action will always attempt to use the current day's staging directory, ignoring any staging directories for the previous day or the next day.
This way, running a full store action more than once concurrently will always produce the same results. (You might imagine a use case where a person wants to make several copies of the same full backup.) The Purge Action The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged. Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration. The All Action The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line. Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works. The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions. The Validate Action The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line. 
The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.). The Initialize Action The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device. However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized. Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP). Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label). The Rebuild Action The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line. The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session. 
The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.

Coordination between Master and Clients

Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just take care of it for them. Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

Managed Backups

Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available. When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell. To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients. Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time. However, please keep in mind that this feature depends on a stable network.
If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

Media and Device Types

Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVD±RW drive. When using a new enough backup device, a new multisession ISO image (an ISO image is the standard way of creating a filesystem to be copied to a CD or DVD; it is essentially a filesystem-within-a-file, and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs) is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data). Cedar Backup currently supports four different kinds of CD media:

cdr-74 74-minute non-rewritable CD media
cdrw-74 74-minute rewritable CD media
cdr-80 80-minute non-rewritable CD media
cdrw-80 80-minute rewritable CD media

I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S.
as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable. Cedar Backup also supports two kinds of DVD media:

dvd+r Single-layer non-rewritable DVD+R media
dvd+rw Single-layer rewritable DVD+RW media

The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

Incremental Backups

Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis. In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value (the checksum is actually an SHA cryptographic hash) for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.
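The file/checksum comparison described above can be sketched as follows. This is a simplification, not Cedar Backup's actual implementation: the real code persists its pairs in .sha files in the working directory, and SHA-256 here is just an illustrative choice of SHA hash.

```python
import hashlib

def files_needing_backup(candidates, saved_checksums):
    """Decide which files to include in an incremental backup.

    saved_checksums is a mapping of path -> hex digest from the previous
    run. A file is backed up when it is new or its contents changed; in
    either case its saved checksum is updated for the next run.
    """
    to_backup = []
    for path in candidates:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if saved_checksums.get(path) != digest:
            to_backup.append(path)
            saved_checksums[path] = digest  # remember for the next run
    return to_backup
```

Resetting the mapping (as happens at the start of the week, or on a full backup) makes every candidate look new again, which is exactly how a full backup behaves.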
Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.

Extensions

Imagine that there is a third-party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step. Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration. Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured. Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action. Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase. Users should see for more information on how extensions are configured, and for details on all of the officially-supported extensions. Developers may be interested in .
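When extended actions are mixed with standard actions on the cback command line, everything is executed in a "sensible order", with extensions slotted in by their configured <index> values. A sketch of that ordering follows; the numeric indices assigned to the standard actions here are illustrative assumptions, not a documented contract.

```python
def schedule_actions(requested, extension_indices=None):
    """Order requested actions 'sensibly', the way cback does.

    Standard actions run as collect, stage, store, purge; extended actions
    are placed according to their configured index. The numbers chosen for
    the standard actions are assumptions for this sketch.
    """
    standard = {"collect": 100, "stage": 200, "store": 300, "purge": 400}
    extension_indices = extension_indices or {}

    def index(action):
        if action in standard:
            return standard[action]
        if action in extension_indices:
            return extension_indices[action]
        raise ValueError("Unknown action: %s" % action)

    return sorted(requested, key=index)
```

With a hypothetical database extension configured at index 101, it would run between collect and stage regardless of the order given on the command line.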
CedarBackup2-2.26.5/manual/src/commandline.xml

Command Line Tools

Overview

Cedar Backup comes with three command-line programs: cback, cback-amazons3-sync, and cback-span. The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need. The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. Users who have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback-span tool to split their data between multiple discs.

The <command>cback</command> command

Introduction

Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.

Syntax

The cback command has the following syntax:

Usage: cback [switches] action(s)

The following switches are accepted:

-h, --help Display this usage/help listing
-V, --version Display version information
-b, --verbose Print verbose output as well as logging to disk
-q, --quiet Run quietly (display no output to the screen)
-c, --config Path to config file (default: /etc/cback.conf)
-f, --full Perform a full backup, regardless of configuration
-M, --managed Include managed clients when executing actions
-N, --managed-only Include ONLY managed clients when executing actions
-l, --logfile Path to logfile (default: /var/log/cback.log)
-o, --owner Logfile ownership, user:group (default: root:adm)
-m, --mode Octal logfile permissions mode (default: 640)
-O, --output Record some sub-command (i.e.
cdrecord) output to the log
-d, --debug Write debugging information to the log (implies --output)
-s, --stack Dump a Python stack trace instead of swallowing exceptions
-D, --diagnostics Print runtime diagnostics to the screen and exit

The following actions may be specified:

all Take all normal actions (collect, stage, store, purge)
collect Take the collect action
stage Take the stage action
store Take the store action
purge Take the purge action
rebuild Rebuild "this week's" disc if possible
validate Validate configuration only
initialize Initialize media for use with Cedar Backup

You may also specify extended actions that have been defined in configuration. You must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions and/or extended actions may be specified in any arbitrary order; they will be executed in a sensible order. The "all", "rebuild", "validate", and "initialize" actions may not be combined with other actions.

Note that the all action only executes the standard four actions. It never executes any of the configured extensions. Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.

Switches

-h, --help: Display usage/help listing.

-V, --version: Display version information.

-b, --verbose: Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet: Run quietly (display no output to the screen).

-c, --config: Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

-f, --full: Perform a full backup, regardless of configuration.
For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

-M, --managed: Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

-N, --managed-only: Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

-l, --logfile: Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner: Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode: Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

-O, --output: Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug: Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack: Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface.
Under some circumstances, this is useful information to include along with a bug report. , Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. Actions You can find more information about the various actions in (in ). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions). If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however. The <command>cback-amazons3-sync</command> command Introduction The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions. The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazons's setup guide. The aws command will be executed as the same user that is executing the cback-amazons3-sync command, so make sure you configure it as the proper user. 
(This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)

Syntax

The cback-amazons3-sync command has the following syntax:

 Usage: cback-amazons3-sync [switches] sourceDir s3BucketUrl

 Cedar Backup Amazon S3 sync tool.

 This Cedar Backup utility synchronizes a local directory to an Amazon S3
 bucket.  After the sync is complete, a validation step is taken.  An
 error is reported if the contents of the bucket do not match the
 source directory, or if the indicated size for any file differs.
 This tool is a wrapper over the AWS CLI command-line tool.

 The following arguments are required:

   sourceDir            The local source directory on disk (must exist)
   s3BucketUrl          The URL to the target Amazon S3 bucket

 The following switches are accepted:

   -h, --help           Display this usage/help listing
   -V, --version        Display version information
   -b, --verbose        Print verbose output as well as logging to disk
   -q, --quiet          Run quietly (display no output to the screen)
   -l, --logfile        Path to logfile (default: /var/log/cback.log)
   -o, --owner          Logfile ownership, user:group (default: root:adm)
   -m, --mode           Octal logfile permissions mode (default: 640)
   -O, --output         Record some sub-command (i.e. aws) output to the log
   -d, --debug          Write debugging information to the log (implies --output)
   -s, --stack          Dump Python stack trace instead of swallowing exceptions
   -D, --diagnostics    Print runtime diagnostics to the screen and exit
   -v, --verifyOnly     Only verify the S3 bucket contents, do not make changes
   -w, --ignoreWarnings Ignore warnings about problematic filename encodings

 Typical usage would be something like:

   cback-amazons3-sync /home/myuser s3://example.com-backup/myuser

 This will sync the contents of /home/myuser into the indicated bucket.

Switches

-h, --help: Display usage/help listing.

-V, --version: Display version information.

-b, --verbose: Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet: Run quietly (display no output to the screen).

-l, --logfile: Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner: Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode: Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode.

-O, --output: Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug: Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack: Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics: Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

-v, --verifyOnly: Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date. Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

-w, --ignoreWarnings: The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, filenames like this will cause the sync process to abort without transferring all files as expected. To avoid confusion, the cback-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

If problematic files are found, then you have basically two options: either correct your locale (i.e. if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.

The cback-span command

Introduction

Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data. However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback-span tool was written to meet those needs.

If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs.

cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run.
All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be arbitrarily split up so that space is utilized most efficiently.

Syntax

The cback-span command has the following syntax:

 Usage: cback-span [switches]

 Cedar Backup 'span' tool.

 This Cedar Backup utility spans staged data between multiple discs.
 It is a utility, not an extension, and requires user interaction.

 The following switches are accepted, mostly to set up underlying
 Cedar Backup functionality:

   -h, --help     Display this usage/help listing
   -V, --version  Display version information
   -b, --verbose  Print verbose output as well as logging to disk
   -c, --config   Path to config file (default: /etc/cback.conf)
   -l, --logfile  Path to logfile (default: /var/log/cback.log)
   -o, --owner    Logfile ownership, user:group (default: root:adm)
   -m, --mode     Octal logfile permissions mode (default: 640)
   -O, --output   Record some sub-command (i.e. cdrecord) output to the log
   -d, --debug    Write debugging information to the log (implies --output)
   -s, --stack    Dump a Python stack trace instead of swallowing exceptions

Switches

-h, --help: Display usage/help listing.

-V, --version: Display version information.

-b, --verbose: Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-c, --config: Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

-l, --logfile: Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner: Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode: Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

-O, --output: Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

-d, --debug: Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack: Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

Using cback-span

As discussed above, cback-span is an interactive command. It cannot be run from cron.

You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.
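To make these two knobs concrete, here is a small sketch in Python (a simplified illustration, not the actual cback-span implementation): the cushion reduces usable capacity proportionally, and a fit algorithm such as worst-fit decides which items land on each disc.

```python
def effective_capacity(media_mb, cushion_pct):
    """Reduce media capacity by a cushion percentage; a 1.5% cushion leaves 98.5%."""
    return media_mb * (1.0 - cushion_pct / 100.0)

def worst_fit(items, capacity):
    """Simplified worst-fit pass: take items smallest-first, discarding any
    item that would push the total past capacity.

    items is a list of (name, size) tuples; returns (chosen names, used size).
    """
    chosen, used = [], 0.0
    for name, size in sorted(items, key=lambda item: item[1]):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen, used

# e.g. a 1.5% cushion on a 650 MB disc leaves roughly 640 MB usable
print(effective_capacity(650.0, 1.5))
print(worst_fit([("a", 300), ("b", 200), ("c", 150)], 500))
```

This is only meant to show the shape of the calculation; the real tool works against staging directories and exact byte counts.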
The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc cannot actually hold a full 650 MB of data; it's usually more like 627 MB. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.

The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm. The four available fit algorithms are:

worst: The worst-fit algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

best: The best-fit algorithm. The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

first: The first-fit algorithm.
The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

alternate: A hybrid algorithm that I call alternate-fit. This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

Sample run

Below is a log showing a sample cback-span run.

 ================================================
            Cedar Backup 'span' tool
 ================================================

 This the Cedar Backup span tool.  It is used to split up staging
 data when that staging data does not fit onto a single disc.

 This utility operates using Cedar Backup configuration.  Configuration
 specifies which staging directory to look at and which writer device
 and media type to use.

 Continue? [Y/n]:
 ===

 Cedar Backup store configuration looks like this:

    Source Directory...: /tmp/staging
    Media Type.........: cdrw-74
    Device Type........: cdwriter
    Device Path........: /dev/cdrom
    Device SCSI ID.....: None
    Drive Speed........: None
    Check Data Flag....: True
    No Eject Flag......: False

 Is this OK? [Y/n]:
 ===

 Please wait, indexing the source directory (this may take a while)...
 ===

 The following daily staging directories have not yet been written to disc:

    /tmp/staging/2007/02/07
    /tmp/staging/2007/02/08
    /tmp/staging/2007/02/09
    /tmp/staging/2007/02/10
    /tmp/staging/2007/02/11
    /tmp/staging/2007/02/12
    /tmp/staging/2007/02/13
    /tmp/staging/2007/02/14

 The total size of the data in these directories is 1.00 GB.

 Continue? [Y/n]:
 ===

 Based on configuration, the capacity of your media is 650.00 MB.

 Since estimates are not perfect and there is some uncertainly in media
 capacity calculations, it is good to have a "cushion", a percentage of
 capacity to set aside.  The cushion reduces the capacity of your media,
 so a 1.5% cushion leaves 98.5% remaining.

 What cushion percentage? [4.00]:
 ===

 The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
 It will take at least 2 disc(s) to store your 1.00 GB of data.

 Continue? [Y/n]:
 ===

 Which algorithm do you want to use to span your data across
 multiple discs?

 The following algorithms are available:

    first....: The "first-fit" algorithm
    best.....: The "best-fit" algorithm
    worst....: The "worst-fit" algorithm
    alternate: The "alternate-fit" algorithm

 If you don't like the results you will have a chance to try a
 different one later.

 Which algorithm? [worst]:
 ===

 Please wait, generating file lists (this may take a while)...
 ===

 Using the "worst-fit" algorithm, Cedar Backup can split your data
 into 2 discs.

 Disc 1: 246 files, 615.97 MB, 98.20% utilization
 Disc 2: 8 files, 412.96 MB, 65.84% utilization

 Accept this solution? [Y/n]: n
 ===

 Which algorithm do you want to use to span your data across
 multiple discs?

 The following algorithms are available:

    first....: The "first-fit" algorithm
    best.....: The "best-fit" algorithm
    worst....: The "worst-fit" algorithm
    alternate: The "alternate-fit" algorithm

 If you don't like the results you will have a chance to try a
 different one later.

 Which algorithm? [worst]: alternate
 ===

 Please wait, generating file lists (this may take a while)...
 ===

 Using the "alternate-fit" algorithm, Cedar Backup can split your data
 into 2 discs.

 Disc 1: 73 files, 627.25 MB, 100.00% utilization
 Disc 2: 181 files, 401.68 MB, 64.04% utilization

 Accept this solution? [Y/n]: y
 ===

 Please place the first disc in your backup device.
 Press return when ready.
 ===

 Initializing image...
 Writing image to disc...

Securing Password-less SSH Connections

Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections. With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers.
We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups. So, what are these users to do?

Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

 command="command"
    Specifies that the command is executed whenever this key is used for
    authentication.  The command supplied by the user (if any) is ignored.
    The command is run on a pty if the client requests a pty; otherwise it
    is run without a tty.  If an 8-bit clean channel is required, one must
    not request a pty or should specify no-pty.  A quote may be included in
    the command by quoting it with a backslash.  This option might be useful
    to restrict certain public keys to perform just a specific operation.
    An example might be a key that permits remote backups but nothing else.
    Note that the client may specify TCP and/or X11 forwarding unless they
    are explicitly prohibited.  Note that this option applies to shell,
    command or subsystem execution.

Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

So, let's imagine that we have two hosts: master mickey, and peer minnie.
Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
 =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
 1-2341=-a0sd=-sa0=1z= backup@mickey

This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

To put the filter in place, we add a command option to the key, like this:

 command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
 3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
 tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

A very basic validate-backup script might look something like this:

 #!/bin/bash
 if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
    ${SSH_ORIGINAL_COMMAND}
 else
    echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
    exit 1
 fi

This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).
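The same filter idea can be sketched in Python when a whitelist of several commands is needed. This is only an illustration: the allowed patterns below are hypothetical placeholders, and you would substitute the exact commands your own setup requires.

```python
#!/usr/bin/env python
# Sketch of a validate-backup filter with a pattern whitelist.
# The patterns here are hypothetical examples, not a recommended policy.
import os
import re
import subprocess
import sys

ALLOWED = [
    r"^ls -l$",
    r"^scp -f /path/to/collect/[^ ]+$",   # copy from the peer to the master
    r"^scp -t /path/to/collect/[^ ]+$",   # copy to the peer from the master
]

def is_allowed(command):
    """Return True if the command matches one of the whitelist patterns."""
    return any(re.match(pattern, command) for pattern in ALLOWED)

if __name__ == "__main__" and "SSH_ORIGINAL_COMMAND" in os.environ:
    original = os.environ["SSH_ORIGINAL_COMMAND"]
    if is_allowed(original):
        sys.exit(subprocess.call(original, shell=True))
    sys.stderr.write("Security policy does not allow command [%s].\n" % original)
    sys.exit(1)
```

As with the bash version, anything that does not match the whitelist is rejected with a readable error message and a non-zero exit status.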
If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

 Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
 OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
 debug1: Reading configuration data /home/backup/.ssh/config
 debug1: Applying options for daystrom
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: Applying options for *
 debug2: ssh_connect: needpriv 0

Omit the -v and you have your command: scp -f .profile.

For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

 scp -f /path/to/collect/cback.collect
 scp -f /path/to/collect/*
 scp -t /path/to/collect/cback.stage

If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

 /usr/bin/cback --full collect
 /usr/bin/cback collect

Of course, you would have to list the actual path to the cback executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

Dependencies

Python 2.7

Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language.
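The cback scripts check the interpreter version at startup and refuse to run on anything older. A minimal sketch of that kind of version guard (the error text is illustrative, not the exact message the scripts print):

```python
import sys

def check_interpreter(version_info=sys.version_info):
    """Raise if the running interpreter is older than Python 2.7."""
    if version_info < (2, 7):
        raise RuntimeError("Python 2.7 or greater is required.")
    return True

check_interpreter()  # passes on any Python 2.7+ interpreter
```

Tuple comparison makes this concise: (2, 6, 5) sorts before (2, 7), so older interpreters fail the check.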
Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it.

Source URL: upstream, Debian, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

RSH Server and Client

Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client. The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

Source URL: upstream, Debian, RPM

If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

mkisofs

The mkisofs command is used to create ISO filesystem images that can later be written to backup media. On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

Source URL: upstream, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

cdrecord

The cdrecord command is used to write ISO images to CD media in a backup device. On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you.

Source URL: upstream, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

dvd+rw-tools

The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

Source URL: upstream, Debian, RPM

If you can't find a package for your system, install from the package source, using the upstream link.
eject and volname

The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc. The volname command is used to determine the volume name of media in a backup device.

Source URL: upstream, Debian, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

mount and umount

The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

Source URL: upstream, Debian, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

grepmail

The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

Source URL: upstream, Debian, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

gpg

The gpg command is used by the encrypt extension to encrypt files.

Source URL: upstream, Debian, RPM

If you can't find a package for your system, install from the package source, using the upstream link.

split

The split command is used by the split extension to split up large files. This command is typically part of the core operating system install and is not distributed in a separate package.

AWS CLI

AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage. After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide.

Source URL: upstream, Debian

The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version.
On these platforms, the easiest way to install it is via pip: apt-get install python-pip, and then pip install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

Chardet

The cback-amazons3-sync command relies on the Chardet Python package to check filename encoding. You only need this package if you are going to use the sync tool.

Source URL: upstream, Debian

Introduction
Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it. — Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.
What is Cedar Backup?

Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 2 programming language.

There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time.

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 2, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

To run a Cedar Backup client, you really just need a working Python 2 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in the Dependencies section of this manual.

Migrating from Version 2 to Version 3

The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible.

A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc. So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc.
You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup. How to Get Support Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see. If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. See . When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it. If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write support@cedar-solutions.com. That mail will go directly to me. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency. Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. See Simon Tatham's excellent bug reporting tutorial: . In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. 
It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them. Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well. History Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain. In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead. Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. See . At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. 
From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (primarily, I feel that Python code often ends up being much more readable than Perl code). Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato; Debian's stable releases are named after characters in the Toy Story movie), and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release. Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code. In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc (a Python code documentation tool; see .), and updated the code to use the newly-released Python logging package (see .) after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with JUnit in my Java code. So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. 
The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. Tests are implemented using Python's unit test framework. See . The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.
Preface Purpose This software manual has been written to document version 2 of Cedar Backup, originally released in early 2005. Audience This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces. Conventions Used in This Book This section covers the various conventions used in this manual. Typographic Conventions Term Used for first use of important terms. Command Used for commands, command output, and switches. Replaceable Used for replaceable items in code and text. Filenames Used for file and directory names. Icons This icon designates a note relating to the surrounding text. This icon designates a helpful tip relating to the surrounding text. This icon designates a warning relating to the surrounding text. Organization of This Manual Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3. Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual. Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package. Discusses the various Cedar Backup command-line tools, including the primary cback command. Provides detailed information about how to configure Cedar Backup. Describes each of the officially-supported Cedar Backup extensions. Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup. 
Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems. Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from. Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised. Acknowledgments The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license. Copyright Copyright (c) 2004-2011,2013-2015 Kenneth J. Pronovici This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. 
You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA ==================================================================== GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. 
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. 
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS ====================================================================
Official Extensions

System Information Extension

The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action. This extension saves off the following information to the configured Cedar Backup collect directory. Saved-off data is always compressed using bzip2.

Currently-installed Debian packages, via dpkg --get-selections
Disk partition information, via fdisk -l
System-wide mounted filesystem contents, via ls -laR

The Debian-specific information is only collected on systems where /usr/bin/dpkg exists. To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>sysinfo</name>
      <module>CedarBackup2.extend.sysinfo</module>
      <function>executeAction</function>
      <index>99</index>
   </action>
</extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

Amazon S3 Extension

The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action. The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. 
So, make sure you configure the AWS CLI tools as the backup user and not root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.)

When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly large backup that the administrator is not aware of. Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (e.g. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size.

You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user. For instance, you can use something like this with GPG:

/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}

The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is to use your system random number generator, e.g.:

dd if=/dev/urandom count=20 bs=1 | xxd -ps

(See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. 
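To make the substitution concrete, here is a small Python sketch (not taken from the Cedar Backup source; the helper name and file paths are hypothetical) of how a configured encrypt command with ${input} and ${output} placeholders can be resolved into a real command line:

```python
def resolve_encrypt_command(template, input_path, output_path):
    """Substitute real file names for the ${input}/${output} placeholders."""
    return template.replace("${input}", input_path).replace("${output}", output_path)

# Hypothetical example using the GPG command line shown above.
template = ("/usr/bin/gpg -c --no-use-agent --batch --yes "
            "--passphrase-file /home/backup/.passphrase -o ${output} ${input}")
command = resolve_encrypt_command(template,
                                  "/tmp/staging/daily.tar.gz",
                                  "/tmp/staging/daily.tar.gz.gpg")
print(command)
```

Any encryption mechanism works the same way: Cedar Backup only needs the fully-expanded command to produce the output file from the input file.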
And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user. To enable this extension, add the following section to the Cedar Backup configuration file:

<extensions>
   <action>
      <name>amazons3</name>
      <module>CedarBackup2.extend.amazons3</module>
      <function>executeAction</function>
      <index>201</index> <!-- just after stage -->
   </action>
</extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section. This is an example configuration section with encryption disabled:

<amazons3>
   <s3_bucket>example.com-backup/staging</s3_bucket>
</amazons3>

The following elements are part of the Amazon S3 configuration section:

warn_midnite
Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day. Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

s3_bucket
The name of the Amazon S3 bucket that data will be written to. This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging. Restrictions: Must be non-empty. 
encrypt Command used to encrypt backup data before upload to S3. If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user. Restrictions: If provided, must be non-empty. full_size_limit Maximum size of a full backup. If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a value as described above, greater than zero. incr_size_limit Maximum size of an incremental backup. If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a value as described above, greater than zero. Subversion Extension The Subversion Extension is a Cedar Backup extension used to back up Subversion version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. 
Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2. There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode. It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>subversion</name> <module>CedarBackup2.extend.subversion</module> <function>executeAction</function> <index>99</index> </action> </extensions> This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section: <subversion> <collect_mode>incr</collect_mode> <compress_mode>bzip2</compress_mode> <repository> <abs_path>/opt/public/svn/docs</abs_path> </repository> <repository> <abs_path>/opt/public/svn/web</abs_path> <compress_mode>gzip</compress_mode> </repository> <repository_dir> <abs_path>/opt/private/svn</abs_path> <collect_mode>daily</collect_mode> </repository_dir> </subversion> The following elements are part of the Subversion configuration section: collect_mode Default collect mode. The collect mode describes how frequently a Subversion repository is backed up. 
The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see ). This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. compress_mode Default compress mode. Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2. repository A Subversion repository to be collected. This is a subsection which contains information about a specific Subversion repository to be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured. The repository subsection contains the following fields: collect_mode Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the Subversion repository to back up. Restrictions: Must be an absolute path. 
repository_dir A Subversion parent repository directory to be collected. This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured. The repository_dir subsection contains the following fields: collect_mode Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the Subversion parent repository directory to back up. Restrictions: Must be an absolute path. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this Subversion parent directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: rel_path A relative path to be excluded from the backup. The path is assumed to be relative to the Subversion parent directory itself. For instance, if the configured Subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. 
Restrictions: Must be non-empty. MySQL Extension The MySQL Extension is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another. The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice. The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via command-line switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf: [mysqldump] user = root password = <secret> Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead. As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server: [mysqldump] host = remote.host For this to work, you will also need to grant privileges properly for the user which is executing the backup. 
See your MySQL documentation for more information about how this can be done. Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>mysql</name> <module>CedarBackup2.extend.mysql</module> <function>executeAction</function> <index>99</index> </action> </extensions> This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section: <mysql> <compress_mode>bzip2</compress_mode> <all>Y</all> </mysql> If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration: <mysql> <user>root</user> <password>password</password> <compress_mode>bzip2</compress_mode> <all>Y</all> </mysql> The following elements are part of the MySQL configuration section: user Database user. The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user). This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty. password Password associated with the database user. This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty. compress_mode Compress mode. 
MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. PostgreSQL Extension Community-contributed Extension This is a community-contributed extension provided by Antoine Beaupre ("The Anarcat"). I have added regression tests around the configuration parsing code and I will maintain this section in the user manual based on his source code documentation. Unfortunately, I don't have any PostgreSQL databases with which to test the functional code. While I have code-reviewed the code and it looks both sensible and safe, I have to rely on the author to ensure that it works properly. The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. 
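As a rough sketch of how those two commands map onto configuration, the extension backs up either everything at once (pg_dumpall) or one configured database at a time (pg_dump). This is a hypothetical illustration with made-up helper names, not the extension's actual code.

```python
def build_dump_command(user, all_databases, database=None):
    """Illustrative only: choose pg_dumpall when all databases are
    backed up at once, or a per-database pg_dump invocation otherwise
    (one such command per configured database)."""
    if all_databases:
        return ["pg_dumpall", "-U", user]  # one big dump file
    if database is None:
        raise ValueError("a database name is required when not dumping all")
    return ["pg_dump", "-U", user, database]  # one dump file per database
```

The -U switch names the database user; as noted below, the password itself should come from PostgreSQL configuration rather than the command line.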
Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file. This extension always produces a full backup. There is currently no facility for making incremental backups. Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>postgresql</name> <module>CedarBackup2.extend.postgresql</module> <function>executeAction</function> <index>99</index> </action> </extensions> This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section: <postgresql> <compress_mode>bzip2</compress_mode> <user>username</user> <all>Y</all> </postgresql> If you decide to back up specific databases, then you would list them individually, like this: <postgresql> <compress_mode>bzip2</compress_mode> <user>username</user> <all>N</all> <database>db1</database> <database>db2</database> </postgresql> The following elements are part of the PostgreSQL configuration section: user Database user. The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user. This value is optional. 
Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf. Restrictions: If provided, must be non-empty. compress_mode Compress mode. PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. Mbox Extension The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders. 
What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space. Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>mbox</name> <module>CedarBackup2.extend.mbox</module> <function>executeAction</function> <index>99</index> </action> </extensions> This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section: <mbox> <collect_mode>incr</collect_mode> <compress_mode>gzip</compress_mode> <file> <abs_path>/home/user1/mail/greylist</abs_path> <collect_mode>daily</collect_mode> </file> <dir> <abs_path>/home/user2/mail</abs_path> </dir> <dir> <abs_path>/home/user3/mail</abs_path> <exclude> <rel_path>spam</rel_path> <pattern>.*debian.*</pattern> </exclude> </dir> </mbox> Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively. Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns. The following elements are part of the mbox configuration section: collect_mode Default collect mode. The collect mode describes how frequently an mbox file or directory is backed up. 
The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see ). This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. compress_mode Default compress mode. Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2. file An individual mbox file to be collected. This is a subsection which contains information about an individual mbox file to be backed up. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The file subsection contains the following fields: collect_mode Collect mode for this file. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this file. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox file to back up. Restrictions: Must be an absolute path. dir An mbox directory to be collected. 
This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively. Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The dir subsection contains the following fields: collect_mode Collect mode for this directory. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this directory. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the mbox directory to back up. Restrictions: Must be an absolute path. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields: rel_path A relative path to be excluded from the backup. The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. 
Restrictions: Must be non-empty. Encrypt Extension The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action; otherwise, unencrypted data will be written to disc. There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced. Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL. If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless. I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc. Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.) 
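The parenthetical sanity check above is easy to script. This sketch only builds the command line; running it requires gpg and the imported, lsigned public key, so the subprocess call is shown as a comment. The --batch flag is my assumption, added so that a misconfigured key fails outright instead of prompting.

```python
def gpg_check_command(recipient, path):
    """Build the sanity-check command: encrypt one file to the
    configured recipient with no user interaction allowed."""
    return ["gpg", "--batch", "-e", "-r", recipient, path]

# To actually run the check (gpg and the imported public key required):
#   import subprocess
#   subprocess.run(gpg_check_command("Recipient Name", "file.txt"), check=True)
```

If the real gpg run completes without prompting, the same recipient name can be used as the encrypt_target value below.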
An encrypted backup has the same file structure as a normal backup, so all of the standard recovery instructions apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual. Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook and gain an understanding of how encryption can help you or hurt you. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>encrypt</name> <module>CedarBackup2.extend.encrypt</module> <function>executeAction</function> <index>301</index> </action> </extensions> This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section: <encrypt> <encrypt_mode>gpg</encrypt_mode> <encrypt_target>Backup User</encrypt_target> </encrypt> The following elements are part of the Encrypt configuration section: encrypt_mode Encryption mode. This value specifies which encryption mechanism will be used by the extension. Currently, only the GPG public-key encryption mechanism is supported. Restrictions: Must be gpg. encrypt_target Encryption target. The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r. 
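The .gpg naming convention described above is simple to model. These helpers are hypothetical, for illustration only:

```python
def encrypted_name(path):
    """Staged file name after encryption: the original name plus .gpg."""
    return path + ".gpg"

def original_name(path):
    """Recover the original file name by stripping the .gpg extension."""
    if not path.endswith(".gpg"):
        raise ValueError("not an encrypted backup file: %s" % path)
    return path[:-len(".gpg")]
```

Decrypting file.tar.gz.gpg back to file.tar.gz restores the normal backup layout, after which the usual recovery steps apply.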
Split Extension The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc. You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span. The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files on byte boundaries. It has no knowledge of file formats. Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback-span might put an individual file on any disc in a set — the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>split</name> <module>CedarBackup2.extend.split</module> <function>executeAction</function> <index>299</index> </action> </extensions> This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section: <split> <size_limit>250 MB</size_limit> <split_size>100 MB</split_size> </split> The following elements are part of the Split configuration section: size_limit Size limit. Files with a size strictly larger than this limit will be split by the extension. You can enter this value in two different forms. 
It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a size as described above. split_size Split size. This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. Restrictions: Must be a size as described above. Capacity Extension The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused. This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced. To enable this extension, add the following section to the Cedar Backup configuration file: <extensions> <action> <name>capacity</name> <module>CedarBackup2.extend.capacity</module> <function>executeAction</function> <index>299</index> </action> </extensions> This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. 
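The two capacity limits described above can be sketched as follows. The quantity parser accepts the same forms used throughout this chapter (10240, 250 MB, 1.1 GB); the assumption that units are 1024-based, and the helper names themselves, are mine rather than the extension's actual API.

```python
UNITS = {"KB": 1024.0, "MB": 1024.0 ** 2, "GB": 1024.0 ** 3}

def parse_quantity(value):
    """Parse a quantity like '10240', '250 MB' or '1.1 GB' into bytes.
    A bare number is taken as bytes; units are assumed to be 1024-based."""
    parts = str(value).split()
    if len(parts) == 1:
        return float(parts[0])
    number, unit = parts
    return float(number) * UNITS[unit]

def should_warn(capacity_bytes, used_bytes, max_percentage=None, min_bytes=None):
    """Warn when utilization exceeds max_percentage, or when free space
    falls below min_bytes; exactly one of the two limits is configured."""
    if max_percentage is not None:
        return 100.0 * used_bytes / capacity_bytes > max_percentage
    return capacity_bytes - used_bytes < parse_quantity(min_bytes)
```

For example, with max_percentage set to 95.5, media that is 96% full triggers the warning, while media that is 95% full does not.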
This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full: <capacity> <max_percentage>95.5</max_percentage> </capacity> This example configures the extension to warn if the media has fewer than 16 MB free: <capacity> <min_bytes>16 MB</min_bytes> </capacity> The following elements are part of the Capacity configuration section: max_percentage Maximum percentage of the media that may be utilized. You must provide either this value or the min_bytes value. Restrictions: Must be a floating point number between 0.0 and 100.0 min_bytes Minimum number of free bytes that must be available. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are 10240, 250 MB or 1.1 GB. You must provide either this value or the max_percentage value. Restrictions: Must be a byte quantity as described above. CedarBackup2-2.26.5/setup.py0000775000175000017500000000534412560016766017312 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Python distutils setup script # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # pylint: disable=C0111,E0611,F0401 ######################################################################## # Imported modules ######################################################################## from distutils.core import setup from CedarBackup2.release import AUTHOR, EMAIL, VERSION, COPYRIGHT, URL ######################################################################## # Setup configuration ######################################################################## LONG_DESCRIPTION = """ Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media. Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. """ setup ( name = 'CedarBackup2', version = VERSION, description = 'Implements local and remote backups to CD/DVD media.', long_description = LONG_DESCRIPTION, keywords = ('local', 'remote', 'backup', 'scp', 'CD-R', 'CD-RW', 'DVD+R', 'DVD+RW',), author = AUTHOR, author_email = EMAIL, url = URL, license = "Copyright (c) %s %s. Licensed under the GNU GPL." 
% (COPYRIGHT, AUTHOR), platforms = ('Any',), packages = ['CedarBackup2', 'CedarBackup2.actions', 'CedarBackup2.extend', 'CedarBackup2.tools', 'CedarBackup2.writers', ], scripts = ['cback', 'util/cback-span', 'util/cback-amazons3-sync', ], ) CedarBackup2-2.26.5/util/0002775000175000017500000000000012642035650016540 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/util/docbook/0002775000175000017500000000000012642035650020160 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/util/docbook/chunk-stylesheet.xsl0000664000175000017500000000423312555065535024217 0ustar pronovicpronovic00000000000000 styles.css 3 0 CedarBackup2-2.26.5/util/docbook/styles.css0000664000175000017500000000664712555065537022242 0ustar pronovicpronovic00000000000000/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * C E D A R * S O L U T I O N S "Software done right." * S O F T W A R E * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Author : Kenneth J. Pronovici * Language : XSLT * Project : Cedar Backup, release 2 * Purpose : Custom stylesheet applied to user manual in HTML form. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ /* This stylesheet was originally taken from the Subversion project's book (http://svnbook.red-bean.com/). I have not made any modifications to the sheet for use with Cedar Backup. The original stylesheet was (c) 2000-2004 CollabNet (see CREDITS). 
*/ BODY { background: white; margin: 0.5in; font-family: arial,helvetica,sans-serif; } H1.title { font-size: 250%; font-style: normal; font-weight: bold; color: black; } H2.subtitle { font-size: 150%; font-style: italic; color: black; } H2.title { font-size: 150%; font-style: normal; font-weight: bold; color: black; } H3.title { font-size: 125%; font-style: normal; font-weight: bold; color: black; } H4.title { font-size: 100%; font-style: normal; font-weight: bold; color: black; } .toc B { font-size: 125%; font-style: normal; font-weight: bold; color: black; } P,LI,UL,OL,DD,DT { font-style: normal; font-weight: normal; color: black; } TT,PRE { font-family: courier new,courier,fixed; } .command, .screen, .programlisting { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; } .filename { font-family: arial,helvetica,sans-serif; font-style: italic; } A { color: blue; text-decoration: underline; } A:hover { background: rgb(75%,75%,100%); color: blue; text-decoration: underline; } A:visited { color: purple; text-decoration: underline; } IMG { border: none; } .figure, .example, .table { margin: 0.125in 0.5in; } .table TABLE { border: 1px rgb(180,180,200) solid; border-spacing: 0px; } .table TD { border: 1px rgb(180,180,200) solid; } .table TH { background: rgb(180,180,200); border: 1px rgb(180,180,200) solid; } .table P.title, .figure P.title, .example P.title { text-align: left !important; font-size: 100% !important; } .author { font-size: 100%; font-style: italic; font-weight: normal; color: black; } .sidebar { border: 2px black solid; background: rgb(230,230,235); padding: 0.12in; margin: 0 0.5in; } .sidebar P.title { text-align: center; font-size: 125%; } .tip { border: black solid 1px; background: url(./images/info.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .warning { border: black solid 1px; background: url(./images/warning.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .note { border: black solid 1px; background: 
url(./images/note.png) no-repeat; margin: 0.12in 0; padding: 0 55px; } .programlisting, .screen { font-family: courier new,courier,fixed; font-style: normal; font-weight: normal; font-size: 90%; color: black; margin: 0 0.5in; } .navheader, .navfooter { border: black solid 1px; background: rgb(180,180,200); } .navheader HR, .navfooter HR { display: none; } CedarBackup2-2.26.5/util/docbook/dblite.dtd0000664000175000017500000005060312555065542022130 0ustar pronovicpronovic00000000000000 %db; CedarBackup2-2.26.5/util/docbook/html-stylesheet.xsl0000664000175000017500000000424312555065544024054 0ustar pronovicpronovic00000000000000 styles.css 3 0 CedarBackup2-2.26.5/util/cback-span0000775000175000017500000000151312556156051020471 0ustar pronovicpronovic00000000000000#!/usr/bin/python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Revision : $Id: cback 605 2005-02-25 00:51:07Z pronovic $ # Purpose : Implements Cedar Backup cback-span script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback-span script. @author: Kenneth J. Pronovici """ import sys from CedarBackup2.tools.span import cli result = cli() sys.exit(result) CedarBackup2-2.26.5/util/knapsackdemo.py0000775000175000017500000001352512560016766021567 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
# S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Demo the knapsack functionality in knapsack.py # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Demo the knapsack functionality in knapsack.py. This is a little test program that shows how the various knapsack algorithms work. Use 'python knapsackdemo.py' to run the program. The usage is:: Usage: knapsackdemo.py dir capacity Tests various knapsack (fit) algorithms on dir, using capacity (in MB) as the target fill point. You'll get a good feel for how it works using something like this:: python knapsackdemo.py /usr/bin 35 The output should look fine on an 80-column display. On my Duron 850 with 784MB of RAM (Linux 2.6, Python 2.3), this runs in 0.360 seconds of elapsed time (neglecting the time required to build the list of files to fit). A bigger, nastier test is to build a 650 MB list out of / or /usr. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules and constants ######################################################################## import sys import os import time from CedarBackup2.filesystem import BackupFileList from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit BYTES_PER_KBYTE = 1024.0 KBYTES_PER_MBYTE = 1024.0 BYTES_PER_MBYTE = BYTES_PER_KBYTE * KBYTES_PER_MBYTE ################## # main() function ################## def main(): """Main routine.""" # Check arguments if len(sys.argv) != 3: print "Usage: %s dir capacity" % sys.argv[0] print "Tests various knapsack (fit) algorithms on dir, using" print "capacity (in MB) as the target fill point." sys.exit(1) searchDir = sys.argv[1] capacity = float(sys.argv[2]) # Print a starting banner print "" print "==============================================================" print "KNAPSACK TEST PROGRAM" print "==============================================================" print "" print "This program tests various knapsack (fit) algorithms using" print "a list of files gathered from a directory. The algorithms" print "attempt to fit the files into a finite sized \"disc\"." print "" print "Each algorithm runs on a list with the same contents, although" print "the actual function calls are provided with a copy of the" print "original list, so they may use their list destructively." 
print "" print "==============================================================" print "" # Get information about the search directory start = time.time() files = BackupFileList() files.addDirContents(searchDir) size = files.totalSize() size /= BYTES_PER_MBYTE end = time.time() # Generate a table mapping file to size as needed by the knapsack algorithms table = { } for entry in files: if os.path.islink(entry): table[entry] = (entry, 0.0) elif os.path.isfile(entry): table[entry] = (entry, float(os.stat(entry).st_size)) # Print some status information about what we're doing print "Note: desired capacity is %.2f MB." % capacity print "The search path, %s, contains about %.2f MB in %d files." % (searchDir, size, len(files)) print "Gathering this information took about %.3f seconds." % (end - start) print "" # Define the list of tests # (These are function pointers, essentially.) tests = { 'FIRST FIT': firstFit, ' BEST FIT': bestFit, 'WORST FIT': worstFit, ' ALT FIT': alternateFit } # Run each test totalElapsed = 0.0 for key in tests.keys(): # Run and time the test start = time.time() (items, used) = tests[key](table.copy(), capacity*BYTES_PER_MBYTE) end = time.time() count = len(items) # Calculate derived values countPercent = (float(count)/float(len(files))) * 100.0 usedPercent = (float(used)/(float(capacity)*BYTES_PER_MBYTE)) * 100.0 elapsed = end - start totalElapsed += elapsed # Display the results print "%s: %5d files (%6.2f%%), %6.2f MB (%6.2f%%), elapsed: %8.5f sec" % ( key, count, countPercent, used/BYTES_PER_MBYTE, usedPercent, elapsed) # And, print the total elapsed time print "\nTotal elapsed processing time was about %.3f seconds." 
% totalElapsed ######################################################################## # Module entry point ######################################################################## # Run the main routine if the module is executed rather than sourced if __name__ == '__main__': main() CedarBackup2-2.26.5/util/test.py0000775000175000017500000002444612560016766020112 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2014 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Run all of the unit tests for the project. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Notes ######################################################################## """ Run the CedarBackup2 unit tests. This script runs all of the unit tests at once so we can get one big success or failure result, rather than 20 different smaller results that we somehow have to aggregate together to get the "big picture". 
This is done by creating and running one big unit test suite based on the suites in the individual unit test modules. The composite suite is always run using the TextTestRunner at verbosity level 1, which prints one dot (".") on the screen for each test run. This output is the same as one would get when using unittest.main() in an individual test. Generally, I'm trying to keep all of the "special" validation logic (i.e. did we find the right Python, did we find the right libraries, etc.) in this code rather than in the individual unit tests so they're more focused on what to test than how their environment should be configured. We want to make sure the tests use the modules in the current source tree, not any versions previously-installed elsewhere, if possible. We don't actually import the modules here, but we warn if the wrong ones would be found. We also want to make sure we are running the correct 'test' package - not one found elsewhere on the user's path - since 'test' could be a relatively common name for a package. Most people will want to run the script with no arguments. This will result in a "reduced feature set" test suite that covers all of the available test suites, but executes only those tests with no surprising system, kernel or network dependencies. If "full" is specified as one of the command-line arguments, then all of the unit tests will be run, including those that require a specialized environment. For instance, some tests require remote connectivity, a loopback filesystem, etc. Other arguments on the command line are assumed to be named tests, so for instance passing "config" runs only the tests for config.py. Any number of individual tests may be listed on the command line, and unknown values will simply be ignored. @author: Kenneth J. 
Pronovici """ ######################################################################## # Imported modules ######################################################################## import sys import os import logging import unittest ################## # main() function ################## def main(): """ Main routine for program. @return: Integer 0 upon success, integer 1 upon failure. """ # Check the Python version. We require 2.7 or greater. try: if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]: print "Python 2 version 2.7 or greater required, sorry." return 1 except: # sys.version_info isn't available before 2.0 print "Python 2 version 2.7 or greater required, sorry." return 1 # Check for the correct CedarBackup2 location and import utilities try: if os.path.exists(os.path.join(".", "CedarBackup2", "filesystem.py")): sys.path.insert(0, ".") elif os.path.basename(os.getcwd()) == "testcase" and os.path.exists(os.path.join("..", "CedarBackup2", "filesystem.py")): sys.path.insert(0, "..") else: print "WARNING: CedarBackup2 modules were not found in the expected" print "location. If the import succeeds, you may be using an" print "unexpected version of CedarBackup2." print "" from CedarBackup2.util import nullDevice, Diagnostics except ImportError, e: print "Failed to import CedarBackup2 util module: %s" % e print "You must either run the unit tests from the CedarBackup2 source" print "tree, or properly set the PYTHONPATH environment variable." return 1 # Set up platform-specific command overrides from CedarBackup2.testutil import setupOverrides setupOverrides() # Import the unit test modules try: if os.path.exists(os.path.join(".", "testcase", "filesystemtests.py")): sys.path.insert(0, ".") elif os.path.basename(os.getcwd()) == "testcase" and os.path.exists(os.path.join("..", "testcase", "filesystemtests.py")): sys.path.insert(0, "..") else: print "WARNING: CedarBackup2 unit test modules were not found in" print "the expected location. 
If the import succeeds, you may be" print "using an unexpected version of the test suite." print "" from testcase import utiltests from testcase import knapsacktests from testcase import filesystemtests from testcase import peertests from testcase import actionsutiltests from testcase import writersutiltests from testcase import cdwritertests from testcase import dvdwritertests from testcase import configtests from testcase import clitests from testcase import mysqltests from testcase import postgresqltests from testcase import subversiontests from testcase import mboxtests from testcase import encrypttests from testcase import amazons3tests from testcase import splittests from testcase import spantests from testcase import synctests from testcase import capacitytests from testcase import customizetests except ImportError, e: print "Failed to import CedarBackup2 unit test module: %s" % e print "You must either run the unit tests from the CedarBackup2 source" print "tree, or properly set the PYTHONPATH environment variable." return 1 # Set up logging to discard everything devnull = nullDevice() handler = logging.FileHandler(filename=devnull) handler.setLevel(logging.NOTSET) logger = logging.getLogger("CedarBackup2") logger.setLevel(logging.NOTSET) logger.addHandler(handler) # Get a list of program arguments args = sys.argv[1:] # Set flags in the environment to control tests if "full" in args: full = True os.environ["PEERTESTS_FULL"] = "Y" os.environ["WRITERSUTILTESTS_FULL"] = "Y" os.environ["ENCRYPTTESTS_FULL"] = "Y" os.environ["SPLITTESTS_FULL"] = "Y" args.remove("full") # remainder of list will be specific tests to run, if any else: full = False os.environ["PEERTESTS_FULL"] = "N" os.environ["WRITERSUTILTESTS_FULL"] = "N" os.environ["ENCRYPTTESTS_FULL"] = "N" os.environ["SPLITTESTS_FULL"] = "N" # Print a starting banner print "\n*** Running CedarBackup2 unit tests." if not full: print "*** Using reduced feature set suite with minimum system requirements." 
# Make a list of tests to run unittests = { } if args == [] or "util" in args: unittests["util"] = utiltests.suite() if args == [] or "knapsack" in args: unittests["knapsack"] = knapsacktests.suite() if args == [] or "filesystem" in args: unittests["filesystem"] = filesystemtests.suite() if args == [] or "peer" in args: unittests["peer"] = peertests.suite() if args == [] or "actionsutil" in args: unittests["actionsutil"] = actionsutiltests.suite() if args == [] or "writersutil" in args: unittests["writersutil"] = writersutiltests.suite() if args == [] or "cdwriter" in args: unittests["cdwriter"] = cdwritertests.suite() if args == [] or "dvdwriter" in args: unittests["dvdwriter"] = dvdwritertests.suite() if args == [] or "config" in args: unittests["config"] = configtests.suite() if args == [] or "cli" in args: unittests["cli"] = clitests.suite() if args == [] or "mysql" in args: unittests["mysql"] = mysqltests.suite() if args == [] or "postgresql" in args: unittests["postgresql"] = postgresqltests.suite() if args == [] or "subversion" in args: unittests["subversion"] = subversiontests.suite() if args == [] or "mbox" in args: unittests["mbox"] = mboxtests.suite() if args == [] or "split" in args: unittests["split"] = splittests.suite() if args == [] or "encrypt" in args: unittests["encrypt"] = encrypttests.suite() if args == [] or "amazons3" in args: unittests["amazons3"] = amazons3tests.suite() if args == [] or "span" in args: unittests["span"] = spantests.suite() if args == [] or "sync" in args: unittests["sync"] = synctests.suite() if args == [] or "capacity" in args: unittests["capacity"] = capacitytests.suite() if args == [] or "customize" in args: unittests["customize"] = customizetests.suite() if args != []: print "*** Executing specific tests: %s" % unittests.keys() # Print some diagnostic information print "" Diagnostics().printDiagnostics(prefix="*** ") # Create and run the test suite print "" suite = unittest.TestSuite(unittests.values()) suiteResult = 
unittest.TextTestRunner(verbosity=1).run(suite) print "" if not suiteResult.wasSuccessful(): return 1 else: return 0 ######################################################################## # Module entry point ######################################################################## # Run the main routine if the module is executed rather than sourced if __name__ == '__main__': result = main() sys.exit(result) CedarBackup2-2.26.5/util/cback-amazons3-sync0000775000175000017500000000144512556156051022241 0ustar pronovicpronovic00000000000000#!/usr/bin/python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements Cedar Backup cback-amazons3-sync script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback-amazons3-sync script. @author: Kenneth J. Pronovici """ import sys from CedarBackup2.tools.amazons3 import cli result = cli() sys.exit(result) CedarBackup2-2.26.5/cback0000775000175000017500000000163212556156051016557 0ustar pronovicpronovic00000000000000#!/usr/bin/python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Implements Cedar Backup cback script. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # """ Implements Cedar Backup cback script. @author: Kenneth J. 
Pronovici """ try: import sys from CedarBackup2.cli import cli except ImportError, e: print "Failed to import Python modules: %s" % e print "Are you running a proper version of Python?" sys.exit(1) result = cli() sys.exit(result) CedarBackup2-2.26.5/testcase/0002775000175000017500000000000012642035650017376 5ustar pronovicpronovic00000000000000CedarBackup2-2.26.5/testcase/mboxtests.py0000664000175000017500000024013112560016766022005 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests mbox extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/mbox.py. 
Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in extend/mbox.py. There are also tests for several of the private methods. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a MBOXTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author: Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup2.testutil import findResources, failUnlessAssignRaises from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.mbox import LocalConfig, MboxConfig, MboxFile, MboxDir ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "mbox.conf.1", "mbox.conf.2", "mbox.conf.3", "mbox.conf.4", ] ####################################################################### # Test Case Classes ####################################################################### ##################### # TestMboxFile class ##################### class TestMboxFile(unittest.TestCase): """Tests for the MboxFile class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxFile() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) self.failUnlessEqual(None, mboxFile.collectMode) self.failUnlessEqual(None, mboxFile.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ mboxFile = MboxFile("/path/to/it", "daily", "gzip") self.failUnlessEqual("/path/to/it", mboxFile.absolutePath) self.failUnlessEqual("daily", mboxFile.collectMode) self.failUnlessEqual("gzip", mboxFile.compressMode) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ mboxFile = MboxFile(absolutePath="/path/to/something") self.failUnlessEqual("/path/to/something", mboxFile.absolutePath) mboxFile.absolutePath = None self.failUnlessEqual(None, mboxFile.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) mboxFile.absolutePath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", mboxFile.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) self.failUnlessAssignRaises(ValueError, mboxFile, "absolutePath", "") self.failUnlessEqual(None, mboxFile.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (not absolute). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.absolutePath) self.failUnlessAssignRaises(ValueError, mboxFile, "absolutePath", "relative/path") self.failUnlessEqual(None, mboxFile.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. 
""" mboxFile = MboxFile(collectMode="daily") self.failUnlessEqual("daily", mboxFile.collectMode) mboxFile.collectMode = None self.failUnlessEqual(None, mboxFile.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.collectMode) mboxFile.collectMode = "daily" self.failUnlessEqual("daily", mboxFile.collectMode) mboxFile.collectMode = "weekly" self.failUnlessEqual("weekly", mboxFile.collectMode) mboxFile.collectMode = "incr" self.failUnlessEqual("incr", mboxFile.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.collectMode) self.failUnlessAssignRaises(ValueError, mboxFile, "collectMode", "") self.failUnlessEqual(None, mboxFile.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.collectMode) self.failUnlessAssignRaises(ValueError, mboxFile, "collectMode", "monthly") self.failUnlessEqual(None, mboxFile.collectMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, None value. """ mboxFile = MboxFile(compressMode="gzip") self.failUnlessEqual("gzip", mboxFile.compressMode) mboxFile.compressMode = None self.failUnlessEqual(None, mboxFile.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, valid value. """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.compressMode) mboxFile.compressMode = "none" self.failUnlessEqual("none", mboxFile.compressMode) mboxFile.compressMode = "bzip2" self.failUnlessEqual("bzip2", mboxFile.compressMode) mboxFile.compressMode = "gzip" self.failUnlessEqual("gzip", mboxFile.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, invalid value (empty). 
""" mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.compressMode) self.failUnlessAssignRaises(ValueError, mboxFile, "compressMode", "") self.failUnlessEqual(None, mboxFile.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mboxFile = MboxFile() self.failUnlessEqual(None, mboxFile.compressMode) self.failUnlessAssignRaises(ValueError, mboxFile, "compressMode", "compress") self.failUnlessEqual(None, mboxFile.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mboxFile1 = MboxFile() mboxFile2 = MboxFile() self.failUnlessEqual(mboxFile1, mboxFile2) self.failUnless(mboxFile1 == mboxFile2) self.failUnless(not mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(mboxFile1 >= mboxFile2) self.failUnless(not mboxFile1 != mboxFile2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/path", "daily", "gzip") self.failUnlessEqual(mboxFile1, mboxFile2) self.failUnless(mboxFile1 == mboxFile2) self.failUnless(not mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(mboxFile1 >= mboxFile2) self.failUnless(not mboxFile1 != mboxFile2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). 
""" mboxFile1 = MboxFile() mboxFile2 = MboxFile(absolutePath="/zippy") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/zippy", "daily", "gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mboxFile1 = MboxFile() mboxFile2 = MboxFile(collectMode="incr") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mboxFile1 = MboxFile("/path", "daily", "gzip") mboxFile2 = MboxFile("/path", "incr", "gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mboxFile1 = MboxFile() mboxFile2 = MboxFile(compressMode="gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ mboxFile1 = MboxFile("/path", "daily", "bzip2") mboxFile2 = MboxFile("/path", "daily", "gzip") self.failIfEqual(mboxFile1, mboxFile2) self.failUnless(not mboxFile1 == mboxFile2) self.failUnless(mboxFile1 < mboxFile2) self.failUnless(mboxFile1 <= mboxFile2) self.failUnless(not mboxFile1 > mboxFile2) self.failUnless(not mboxFile1 >= mboxFile2) self.failUnless(mboxFile1 != mboxFile2) ##################### # TestMboxDir class ##################### class TestMboxDir(unittest.TestCase): """Tests for the MboxDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) self.failUnlessEqual(None, mboxDir.collectMode) self.failUnlessEqual(None, mboxDir.compressMode) self.failUnlessEqual(None, mboxDir.relativeExcludePaths) self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in. """ mboxDir = MboxDir("/path/to/it", "daily", "gzip", [ "whatever", ], [ ".*SPAM.*", ] ) self.failUnlessEqual("/path/to/it", mboxDir.absolutePath) self.failUnlessEqual("daily", mboxDir.collectMode) self.failUnlessEqual("gzip", mboxDir.compressMode) self.failUnlessEqual([ "whatever", ], mboxDir.relativeExcludePaths) self.failUnlessEqual([ ".*SPAM.*", ], mboxDir.excludePatterns) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ mboxDir = MboxDir(absolutePath="/path/to/something") self.failUnlessEqual("/path/to/something", mboxDir.absolutePath) mboxDir.absolutePath = None self.failUnlessEqual(None, mboxDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) mboxDir.absolutePath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", mboxDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) self.failUnlessAssignRaises(ValueError, mboxDir, "absolutePath", "") self.failUnlessEqual(None, mboxDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (not absolute). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.absolutePath) self.failUnlessAssignRaises(ValueError, mboxDir, "absolutePath", "relative/path") self.failUnlessEqual(None, mboxDir.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. 
""" mboxDir = MboxDir(collectMode="daily") self.failUnlessEqual("daily", mboxDir.collectMode) mboxDir.collectMode = None self.failUnlessEqual(None, mboxDir.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.collectMode) mboxDir.collectMode = "daily" self.failUnlessEqual("daily", mboxDir.collectMode) mboxDir.collectMode = "weekly" self.failUnlessEqual("weekly", mboxDir.collectMode) mboxDir.collectMode = "incr" self.failUnlessEqual("incr", mboxDir.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.collectMode) self.failUnlessAssignRaises(ValueError, mboxDir, "collectMode", "") self.failUnlessEqual(None, mboxDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.collectMode) self.failUnlessAssignRaises(ValueError, mboxDir, "collectMode", "monthly") self.failUnlessEqual(None, mboxDir.collectMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, None value. """ mboxDir = MboxDir(compressMode="gzip") self.failUnlessEqual("gzip", mboxDir.compressMode) mboxDir.compressMode = None self.failUnlessEqual(None, mboxDir.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, valid value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.compressMode) mboxDir.compressMode = "none" self.failUnlessEqual("none", mboxDir.compressMode) mboxDir.compressMode = "bzip2" self.failUnlessEqual("bzip2", mboxDir.compressMode) mboxDir.compressMode = "gzip" self.failUnlessEqual("gzip", mboxDir.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, invalid value (empty). 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.compressMode) self.failUnlessAssignRaises(ValueError, mboxDir, "compressMode", "") self.failUnlessEqual(None, mboxDir.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.compressMode) self.failUnlessAssignRaises(ValueError, mboxDir, "compressMode", "compress") self.failUnlessEqual(None, mboxDir.compressMode) def testConstructor_015(self): """ Test assignment of relativeExcludePaths attribute, None value. """ mboxDir = MboxDir(relativeExcludePaths=[]) self.failUnlessEqual([], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = None self.failUnlessEqual(None, mboxDir.relativeExcludePaths) def testConstructor_016(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = [] self.failUnlessEqual([], mboxDir.relativeExcludePaths) def testConstructor_017(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = ["stuff", ] self.failUnlessEqual(["stuff", ], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths.insert(0, "bogus") self.failUnlessEqual(["bogus", "stuff", ], mboxDir.relativeExcludePaths) def testConstructor_018(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths = ["bogus", "stuff", ] self.failUnlessEqual(["bogus", "stuff", ], mboxDir.relativeExcludePaths) mboxDir.relativeExcludePaths.append("more") self.failUnlessEqual(["bogus", "stuff", "more", ], mboxDir.relativeExcludePaths) def testConstructor_019(self): """ Test assignment of excludePatterns attribute, None value. """ mboxDir = MboxDir(excludePatterns=[]) self.failUnlessEqual([], mboxDir.excludePatterns) mboxDir.excludePatterns = None self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_020(self): """ Test assignment of excludePatterns attribute, [] value. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = [] self.failUnlessEqual([], mboxDir.excludePatterns) def testConstructor_021(self): """ Test assignment of excludePatterns attribute, single valid entry. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = ["valid", ] self.failUnlessEqual(["valid", ], mboxDir.excludePatterns) mboxDir.excludePatterns.append("more") self.failUnlessEqual(["valid", "more", ], mboxDir.excludePatterns) def testConstructor_022(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) mboxDir.excludePatterns = ["valid", "more", ] self.failUnlessEqual(["valid", "more", ], mboxDir.excludePatterns) mboxDir.excludePatterns.insert(1, "bogus") self.failUnlessEqual(["valid", "bogus", "more", ], mboxDir.excludePatterns) def testConstructor_023(self): """ Test assignment of excludePatterns attribute, single invalid entry. 
""" mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_024(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", "*" ]) self.failUnlessEqual(None, mboxDir.excludePatterns) def testConstructor_025(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ mboxDir = MboxDir() self.failUnlessEqual(None, mboxDir.excludePatterns) self.failUnlessAssignRaises(ValueError, mboxDir, "excludePatterns", ["*.jpg", "valid" ]) self.failUnlessEqual(None, mboxDir.excludePatterns) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mboxDir1 = MboxDir() mboxDir2 = MboxDir() self.failUnlessEqual(mboxDir1, mboxDir2) self.failUnless(mboxDir1 == mboxDir2) self.failUnless(not mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(mboxDir1 >= mboxDir2) self.failUnless(not mboxDir1 != mboxDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/path", "daily", "gzip") self.failUnlessEqual(mboxDir1, mboxDir2) self.failUnless(mboxDir1 == mboxDir2) self.failUnless(not mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(mboxDir1 >= mboxDir2) self.failUnless(not mboxDir1 != mboxDir2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(absolutePath="/zippy") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/zippy", "daily", "gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(collectMode="incr") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mboxDir1 = MboxDir("/path", "daily", "gzip") mboxDir2 = MboxDir("/path", "incr", "gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(compressMode="gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ mboxDir1 = MboxDir("/path", "daily", "bzip2") mboxDir2 = MboxDir("/path", "daily", "gzip") self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_009(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(relativeExcludePaths=[]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_010(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one not empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(relativeExcludePaths=["stuff", "other", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_011(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one empty, one not empty). 
""" mboxDir1 = MboxDir("/etc/whatever", "incr", "none", ["one", ], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], []) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(not mboxDir1 < mboxDir2) self.failUnless(not mboxDir1 <= mboxDir2) self.failUnless(mboxDir1 > mboxDir2) self.failUnless(mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_012(self): """ Test comparison of two differing objects, relativeExcludePaths differs (both not empty). """ mboxDir1 = MboxDir("/etc/whatever", "incr", "none", ["one", ], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", ["two", ], []) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_013(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ mboxDir1 = MboxDir() mboxDir2 = MboxDir(excludePatterns=[]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_014(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" mboxDir1 = MboxDir() mboxDir2 = MboxDir(excludePatterns=["one", "two", "three", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_015(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ mboxDir1 = MboxDir("/etc/whatever", "incr", "none", [], []) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], ["pattern", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) def testComparison_016(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). 
""" mboxDir1 = MboxDir("/etc/whatever", "incr", "none", [], ["p1", ]) mboxDir2 = MboxDir("/etc/whatever", "incr", "none", [], ["p2", ]) self.failIfEqual(mboxDir1, mboxDir2) self.failUnless(not mboxDir1 == mboxDir2) self.failUnless(mboxDir1 < mboxDir2) self.failUnless(mboxDir1 <= mboxDir2) self.failUnless(not mboxDir1 > mboxDir2) self.failUnless(not mboxDir1 >= mboxDir2) self.failUnless(mboxDir1 != mboxDir2) ####################### # TestMboxConfig class ####################### class TestMboxConfig(unittest.TestCase): """Tests for the MboxConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = MboxConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.collectMode) self.failUnlessEqual(None, mbox.compressMode) self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, mboxFiles=None and mboxDirs=None. """ mbox = MboxConfig("daily", "gzip", None, None) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no mboxFiles, no mboxDirs. 
""" mbox = MboxConfig("daily", "gzip", [], []) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual([], mbox.mboxFiles) self.failUnlessEqual([], mbox.mboxDirs) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one mboxFile, no mboxDirs. """ mboxFiles = [ MboxFile(), ] mbox = MboxConfig("daily", "gzip", mboxFiles, []) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual(mboxFiles, mbox.mboxFiles) self.failUnlessEqual([], mbox.mboxDirs) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with no mboxFiles, one mboxDir. """ mboxDirs = [ MboxDir(), ] mbox = MboxConfig("daily", "gzip", [], mboxDirs) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual([], mbox.mboxFiles) self.failUnlessEqual(mboxDirs, mbox.mboxDirs) def testConstructor_006(self): """ Test constructor with all values filled in, with valid values, with multiple mboxFiles and mboxDirs. """ mboxFiles = [ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ] mboxDirs = [ MboxDir(collectMode="weekly"), MboxDir(collectMode="incr"), ] mbox = MboxConfig("daily", "gzip", mboxFiles=mboxFiles, mboxDirs=mboxDirs) self.failUnlessEqual("daily", mbox.collectMode) self.failUnlessEqual("gzip", mbox.compressMode) self.failUnlessEqual(mboxFiles, mbox.mboxFiles) self.failUnlessEqual(mboxDirs, mbox.mboxDirs) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ mbox = MboxConfig(collectMode="daily") self.failUnlessEqual("daily", mbox.collectMode) mbox.collectMode = None self.failUnlessEqual(None, mbox.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. 
""" mbox = MboxConfig() self.failUnlessEqual(None, mbox.collectMode) mbox.collectMode = "weekly" self.failUnlessEqual("weekly", mbox.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.collectMode) self.failUnlessAssignRaises(ValueError, mbox, "collectMode", "") self.failUnlessEqual(None, mbox.collectMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, None value. """ mbox = MboxConfig(compressMode="gzip") self.failUnlessEqual("gzip", mbox.compressMode) mbox.compressMode = None self.failUnlessEqual(None, mbox.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, valid value. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.compressMode) mbox.compressMode = "bzip2" self.failUnlessEqual("bzip2", mbox.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.compressMode) self.failUnlessAssignRaises(ValueError, mbox, "compressMode", "") self.failUnlessEqual(None, mbox.compressMode) def testConstructor_013(self): """ Test assignment of mboxFiles attribute, None value. """ mbox = MboxConfig(mboxFiles=[]) self.failUnlessEqual([], mbox.mboxFiles) mbox.mboxFiles = None self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_014(self): """ Test assignment of mboxFiles attribute, [] value. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) mbox.mboxFiles = [] self.failUnlessEqual([], mbox.mboxFiles) def testConstructor_015(self): """ Test assignment of mboxFiles attribute, single valid entry. 
""" mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) mbox.mboxFiles = [ MboxFile(), ] self.failUnlessEqual([ MboxFile(), ], mbox.mboxFiles) mbox.mboxFiles.append(MboxFile(collectMode="daily")) self.failUnlessEqual([ MboxFile(), MboxFile(collectMode="daily"), ], mbox.mboxFiles) def testConstructor_016(self): """ Test assignment of mboxFiles attribute, multiple valid entries. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) mbox.mboxFiles = [ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ] self.failUnlessEqual([ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), ], mbox.mboxFiles) mbox.mboxFiles.append(MboxFile(collectMode="incr")) self.failUnlessEqual([ MboxFile(collectMode="daily"), MboxFile(collectMode="weekly"), MboxFile(collectMode="incr"), ], mbox.mboxFiles) def testConstructor_017(self): """ Test assignment of mboxFiles attribute, single invalid entry (None). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [None, ]) self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_018(self): """ Test assignment of mboxFiles attribute, single invalid entry (wrong type). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [MboxDir(), ]) self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_019(self): """ Test assignment of mboxFiles attribute, mixed valid and invalid entries. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxFiles) self.failUnlessAssignRaises(ValueError, mbox, "mboxFiles", [MboxFile(), MboxDir(), ]) self.failUnlessEqual(None, mbox.mboxFiles) def testConstructor_020(self): """ Test assignment of mboxDirs attribute, None value. 
""" mbox = MboxConfig(mboxDirs=[]) self.failUnlessEqual([], mbox.mboxDirs) mbox.mboxDirs = None self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_021(self): """ Test assignment of mboxDirs attribute, [] value. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) mbox.mboxDirs = [] self.failUnlessEqual([], mbox.mboxDirs) def testConstructor_022(self): """ Test assignment of mboxDirs attribute, single valid entry. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) mbox.mboxDirs = [ MboxDir(), ] self.failUnlessEqual([ MboxDir(), ], mbox.mboxDirs) mbox.mboxDirs.append(MboxDir(collectMode="daily")) self.failUnlessEqual([ MboxDir(), MboxDir(collectMode="daily"), ], mbox.mboxDirs) def testConstructor_023(self): """ Test assignment of mboxDirs attribute, multiple valid entries. """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) mbox.mboxDirs = [ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), ] self.failUnlessEqual([ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), ], mbox.mboxDirs) mbox.mboxDirs.append(MboxDir(collectMode="incr")) self.failUnlessEqual([ MboxDir(collectMode="daily"), MboxDir(collectMode="weekly"), MboxDir(collectMode="incr"), ], mbox.mboxDirs) def testConstructor_024(self): """ Test assignment of mboxDirs attribute, single invalid entry (None). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [None, ]) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_025(self): """ Test assignment of mboxDirs attribute, single invalid entry (wrong type). """ mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [MboxFile(), ]) self.failUnlessEqual(None, mbox.mboxDirs) def testConstructor_026(self): """ Test assignment of mboxDirs attribute, mixed valid and invalid entries. 
""" mbox = MboxConfig() self.failUnlessEqual(None, mbox.mboxDirs) self.failUnlessAssignRaises(ValueError, mbox, "mboxDirs", [MboxDir(), MboxFile(), ]) self.failUnlessEqual(None, mbox.mboxDirs) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ mbox1 = MboxConfig() mbox2 = MboxConfig() self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, lists None. """ mbox1 = MboxConfig("daily", "gzip", None, None) mbox2 = MboxConfig("daily", "gzip", None, None) self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, lists empty. """ mbox1 = MboxConfig("daily", "gzip", [], []) mbox2 = MboxConfig("daily", "gzip", [], []) self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, lists non-empty. 
""" mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ], [MboxDir(), ]) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ], [MboxDir(), ]) self.failUnlessEqual(mbox1, mbox2) self.failUnless(mbox1 == mbox2) self.failUnless(not mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(mbox1 >= mbox2) self.failUnless(not mbox1 != mbox2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ mbox1 = MboxConfig() mbox2 = MboxConfig(collectMode="daily") self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ]) mbox2 = MboxConfig("weekly", "gzip", [ MboxFile(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ mbox1 = MboxConfig() mbox2 = MboxConfig(compressMode="bzip2") self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" mbox1 = MboxConfig("daily", "bzip2", [ MboxFile(), ]) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_009(self): """ Test comparison of two differing objects, mboxFiles differs (one None, one empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxFiles=[]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_010(self): """ Test comparison of two differing objects, mboxFiles differs (one None, one not empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxFiles=[MboxFile(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_011(self): """ Test comparison of two differing objects, mboxFiles differs (one empty, one not empty). """ mbox1 = MboxConfig("daily", "gzip", [ ], None) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), ], None) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_012(self): """ Test comparison of two differing objects, mboxFiles differs (both not empty). 
""" mbox1 = MboxConfig("daily", "gzip", [ MboxFile(), ], None) mbox2 = MboxConfig("daily", "gzip", [ MboxFile(), MboxFile(), ], None) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_013(self): """ Test comparison of two differing objects, mboxDirs differs (one None, one empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxDirs=[]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_014(self): """ Test comparison of two differing objects, mboxDirs differs (one None, one not empty). """ mbox1 = MboxConfig() mbox2 = MboxConfig(mboxDirs=[MboxDir(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_015(self): """ Test comparison of two differing objects, mboxDirs differs (one empty, one not empty). """ mbox1 = MboxConfig("daily", "gzip", None, [ ]) mbox2 = MboxConfig("daily", "gzip", None, [ MboxDir(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) def testComparison_016(self): """ Test comparison of two differing objects, mboxDirs differs (both not empty). 
""" mbox1 = MboxConfig("daily", "gzip", None, [ MboxDir(), ]) mbox2 = MboxConfig("daily", "gzip", None, [ MboxDir(), MboxDir(), ]) self.failIfEqual(mbox1, mbox2) self.failUnless(not mbox1 == mbox2) self.failUnless(mbox1 < mbox2) self.failUnless(mbox1 <= mbox2) self.failUnless(not mbox1 > mbox2) self.failUnless(not mbox1 >= mbox2) self.failUnless(mbox1 != mbox2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the mbox configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.mbox) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.mbox) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["mbox.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of mbox attribute, None value. """ config = LocalConfig() config.mbox = None self.failUnlessEqual(None, config.mbox) def testConstructor_005(self): """ Test assignment of mbox attribute, valid value. """ config = LocalConfig() config.mbox = MboxConfig() self.failUnlessEqual(MboxConfig(), config.mbox) def testConstructor_006(self): """ Test assignment of mbox attribute, invalid value (not MboxConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "mbox", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.mbox = MboxConfig() config2 = LocalConfig() config2.mbox = MboxConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, mbox differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.mbox = MboxConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, mbox differs. """ config1 = LocalConfig() config1.mbox = MboxConfig(collectMode="daily") config2 = LocalConfig() config2.mbox = MboxConfig(collectMode="weekly") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None mbox section. """ config = LocalConfig() config.mbox = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty mbox section. """ config = LocalConfig() config.mbox = MboxConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty mbox section, mboxFiles=None and mboxDirs=None. 
""" config = LocalConfig() config.mbox = MboxConfig("weekly", "gzip", None, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty mbox section, mboxFiles=[] and mboxDirs=[]. """ config = LocalConfig() config.mbox = MboxConfig("weekly", "gzip", [], []) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, defaults set, no values on files. """ mboxFiles = [ MboxFile(absolutePath="/one"), MboxFile(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_006(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, defaults set, no values on directories. """ mboxDirs = [ MboxDir(absolutePath="/one"), MboxDir(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_007(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, no defaults set, no values on files. """ mboxFiles = [ MboxFile(absolutePath="/one"), MboxFile(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None self.failUnlessRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, no defaults set, no values on directories. 
""" mboxDirs = [ MboxDir(absolutePath="/one"), MboxDir(absolutePath="/two") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs self.failUnlessRaises(ValueError, config.validate) def testValidate_009(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, no defaults set, both values on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_010(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, no defaults set, both values on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_011(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode only on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="weekly") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_012(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode only on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="weekly") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_013(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, compressMode only on files. 
""" mboxFiles = [ MboxFile(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "weekly" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_014(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, compressMode only on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "weekly" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_015(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, compressMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_016(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, compressMode default and on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_017(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="daily") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_018(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode default and on directories. 
""" mboxDirs = [ MboxDir(absolutePath="/two", collectMode="daily") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() def testValidate_019(self): """ Test validate on a non-empty mbox section, non-empty mboxFiles, collectMode and compressMode default and on files. """ mboxFiles = [ MboxFile(absolutePath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = mboxFiles config.mbox.mboxDirs = None config.validate() def testValidate_020(self): """ Test validate on a non-empty mbox section, non-empty mboxDirs, collectMode and compressMode default and on directories. """ mboxDirs = [ MboxDir(absolutePath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.mbox = MboxConfig() config.mbox.collectMode = "daily" config.mbox.compressMode = "gzip" config.mbox.mboxFiles = None config.mbox.mboxDirs = mboxDirs config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["mbox.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.mbox) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.mbox) def testParse_002(self): """ Parse config document with default modes, one collect file and one collect dir. 
""" mboxFiles = [ MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users"), ] mboxDirs = [ MboxDir(absolutePath="/home/billiejoe/mail"), ] path = self.resources["mbox.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("daily", config.mbox.collectMode) self.failUnlessEqual("gzip", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("daily", config.mbox.collectMode) self.failUnlessEqual("gzip", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) def testParse_003(self): """ Parse config document with no default modes, one collect file and one collect dir. """ mboxFiles = [ MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users", collectMode="daily", compressMode="gzip"), ] mboxDirs = [ MboxDir(absolutePath="/home/billiejoe/mail", collectMode="weekly", compressMode="bzip2"), ] path = self.resources["mbox.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual(None, config.mbox.collectMode) self.failUnlessEqual(None, config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual(None, config.mbox.collectMode) self.failUnlessEqual(None, config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) def testParse_004(self): """ Parse config document with default modes, several files with various overrides and exclusions. 
""" mboxFiles = [] mboxFile = MboxFile(absolutePath="/home/jimbo/mail/cedar-backup-users") mboxFiles.append(mboxFile) mboxFile = MboxFile(absolutePath="/home/joebob/mail/cedar-backup-users", collectMode="daily", compressMode="gzip") mboxFiles.append(mboxFile) mboxDirs = [] mboxDir = MboxDir(absolutePath="/home/frank/mail/cedar-backup-users") mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/jimbob/mail", compressMode="bzip2", relativeExcludePaths=["logomachy-devel"]) mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/billiejoe/mail", collectMode="weekly", compressMode="bzip2", excludePatterns=[".*SPAM.*"]) mboxDirs.append(mboxDir) mboxDir = MboxDir(absolutePath="/home/billybob/mail", relativeExcludePaths=["debian-devel", "debian-python", ], excludePatterns=[".*SPAM.*", ".*JUNK.*", ]) mboxDirs.append(mboxDir) path = self.resources["mbox.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("incr", config.mbox.collectMode) self.failUnlessEqual("none", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mbox) self.failUnlessEqual("incr", config.mbox.collectMode) self.failUnlessEqual("none", config.mbox.compressMode) self.failUnlessEqual(mboxFiles, config.mbox.mboxFiles) self.failUnlessEqual(mboxDirs, config.mbox.mboxDirs) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ mbox = MboxConfig() config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_002(self): """ Test with defaults set, single mbox file with no optional values. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_003(self): """ Test with defaults set, single mbox directory with no optional values. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_004(self): """ Test with defaults set, single mbox file with collectMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="incr")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_005(self): """ Test with defaults set, single mbox directory with collectMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="incr")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_006(self): """ Test with defaults set, single mbox file with compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_007(self): """ Test with defaults set, single mbox directory with compressMode set. 
""" mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_008(self): """ Test with defaults set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_009(self): """ Test with defaults set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_010(self): """ Test with no defaults set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_011(self): """ Test with no defaults set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="bzip2")) mbox = MboxConfig(mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_012(self): """ Test with compressMode set, single mbox file with collectMode set. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly")) mbox = MboxConfig(compressMode="gzip", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_013(self): """ Test with compressMode set, single mbox directory with collectMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly")) mbox = MboxConfig(compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_014(self): """ Test with collectMode set, single mbox file with compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", compressMode="gzip")) mbox = MboxConfig(collectMode="weekly", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_015(self): """ Test with collectMode set, single mbox directory with compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", compressMode="gzip")) mbox = MboxConfig(collectMode="weekly", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_016(self): """ Test with compressMode set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="incr", compressMode="gzip")) mbox = MboxConfig(compressMode="bzip2", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_017(self): """ Test with compressMode set, single mbox directory with collectMode and compressMode set. 
""" mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="incr", compressMode="gzip")) mbox = MboxConfig(compressMode="bzip2", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_018(self): """ Test with collectMode set, single mbox file with collectMode and compressMode set. """ mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path", collectMode="weekly", compressMode="gzip")) mbox = MboxConfig(collectMode="incr", mboxFiles=mboxFiles) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_019(self): """ Test with collectMode set, single mbox directory with collectMode and compressMode set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", collectMode="weekly", compressMode="gzip")) mbox = MboxConfig(collectMode="incr", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_020(self): """ Test with defaults set, single mbox directory with relativeExcludePaths set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", relativeExcludePaths=["one", "two", ])) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_021(self): """ Test with defaults set, single mbox directory with excludePatterns set. """ mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path", excludePatterns=["one", "two", ])) mbox = MboxConfig(collectMode="daily", compressMode="gzip", mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) def testAddConfig_022(self): """ Test with defaults set, multiple mbox files and directories with collectMode and compressMode set. 
""" mboxFiles = [] mboxFiles.append(MboxFile(absolutePath="/path1", collectMode="daily", compressMode="gzip")) mboxFiles.append(MboxFile(absolutePath="/path2", collectMode="weekly", compressMode="gzip")) mboxFiles.append(MboxFile(absolutePath="/path3", collectMode="incr", compressMode="gzip")) mboxDirs = [] mboxDirs.append(MboxDir(absolutePath="/path1", collectMode="daily", compressMode="bzip2")) mboxDirs.append(MboxDir(absolutePath="/path2", collectMode="weekly", compressMode="bzip2")) mboxDirs.append(MboxDir(absolutePath="/path3", collectMode="incr", compressMode="bzip2")) mbox = MboxConfig(collectMode="incr", compressMode="bzip2", mboxFiles=mboxFiles, mboxDirs=mboxDirs) config = LocalConfig() config.mbox = mbox self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMboxFile, 'test'), unittest.makeSuite(TestMboxDir, 'test'), unittest.makeSuite(TestMboxConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/cdwritertests.py0000664000175000017500000023603012560016766022666 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. 
# Pronovici. All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests CD writer functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/writers/cdwriter.py.

This code was consolidated from writertests.py and imagetests.py at the
same time cdwriter.py was created.

Code Coverage
=============

This module contains individual tests for the public classes implemented
in cdwriter.py.

Unfortunately, it's rather difficult to test this code in an automated
fashion, even if you have access to a physical CD writer drive.  It's
even more difficult to test it if you are running on some build daemon
(think of a Debian autobuilder) which can't be expected to have any
hardware or any media that you could write to.

Because of this, there aren't any tests below that actually cause CD
media to be written to.

As a compromise, much of the implementation is in terms of private
static methods that have well-defined behaviors.
Normally, I prefer to test only the public interface of a class, but in
this case, testing the private methods will help give us some reasonable
confidence in the code, even if we can't write a physical disc or can't
run all of the tests.  This isn't perfect, but it's better than nothing.

Naming Conventions
==================

I prefer to avoid large unit tests which validate more than one piece of
functionality, and I prefer to avoid using overly descriptive (read:
long) test names, as well.  Instead, I use lots of very small tests that
each validate one specific thing.  These small tests are then named with
an index number, yielding something like C{testAddDir_001} or
C{testValidate_010}.  Each method has a docstring describing what it's
supposed to accomplish.  I feel that this makes it easier to judge how
important a given failure is, and also makes it somewhat easier to
diagnose and fix individual problems.

Full vs. Reduced Tests
======================

Some Cedar Backup regression tests require a specialized environment in
order to run successfully.  This environment won't necessarily be
available on every build system out there (for instance, on a Debian
autobuilder).  Because of this, the default behavior is to run a
"reduced feature set" test suite that has no surprising system, kernel
or network requirements.  There are no special dependencies for these
tests.

I used to try to run tests against an actual device, to make sure that
this worked.  However, those tests ended up being kind of bogus, because
my main development environment doesn't have a writer, and even if it
had one, any device with the same name on another user's system wouldn't
necessarily return sensible results.  That's just pointless.  We'll just
have to rely on the other tests to make sure that things seem sensible.

@author Kenneth J.
Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest

from CedarBackup2.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter
from CedarBackup2.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80


#######################################################################
# Module-wide configuration and constants
#######################################################################

MB650 = (650.0*1024.0*1024.0)    # 650 MB
MB700 = (700.0*1024.0*1024.0)    # 700 MB
ILEAD = (11400.0*2048.0)         # Initial lead-in
SLEAD = (6900.0*2048.0)          # Session lead-in

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree9.tar.gz", ]

SUDO_CMD = [ "sudo", ]
HDIUTIL_CMD = [ "hdiutil", ]

INVALID_FILE = "bogus"           # This file name should never exist


#######################################################################
# Test Case Classes
#######################################################################

############################
# TestMediaDefinition class
############################

class TestMediaDefinition(unittest.TestCase):

    """Tests for the MediaDefinition class."""

    def testConstructor_001(self):
        """
        Test the constructor with an invalid media type.
        """
        self.failUnlessRaises(ValueError, MediaDefinition, 100)

    def testConstructor_002(self):
        """
        Test the constructor with the C{MEDIA_CDR_74} media type.
        """
        media = MediaDefinition(MEDIA_CDR_74)
        self.failUnlessEqual(MEDIA_CDR_74, media.mediaType)
        self.failUnlessEqual(False, media.rewritable)
        self.failIfEqual(0, media.initialLeadIn)    # just care that it's set, not what its value is
        self.failIfEqual(0, media.leadIn)           # just care that it's set, not what its value is
        self.failUnlessEqual(332800, media.capacity)
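The 332800- and 358400-sector capacities asserted by these tests follow directly from sector arithmetic: media size in bytes divided by the 2048-byte CD data sector size. A quick illustrative sketch of that relationship (the `sectors` helper is mine, not part of the module):

```python
SECTOR_SIZE = 2048.0  # bytes per CD-ROM mode 1 data sector

def sectors(size_bytes):
    """Convert a byte quantity into CD data sectors (illustrative helper)."""
    return size_bytes / SECTOR_SIZE

# 650 MB media holds 332800 sectors and 700 MB media holds 358400 sectors,
# matching the media.capacity values asserted in the tests.
print(sectors(650.0 * 1024.0 * 1024.0))  # 332800.0
print(sectors(700.0 * 1024.0 * 1024.0))  # 358400.0
```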
    def testConstructor_003(self):
        """
        Test the constructor with the C{MEDIA_CDRW_74} media type.
        """
        media = MediaDefinition(MEDIA_CDRW_74)
        self.failUnlessEqual(MEDIA_CDRW_74, media.mediaType)
        self.failUnlessEqual(True, media.rewritable)
        self.failIfEqual(0, media.initialLeadIn)    # just care that it's set, not what its value is
        self.failIfEqual(0, media.leadIn)           # just care that it's set, not what its value is
        self.failUnlessEqual(332800, media.capacity)

    def testConstructor_004(self):
        """
        Test the constructor with the C{MEDIA_CDR_80} media type.
        """
        media = MediaDefinition(MEDIA_CDR_80)
        self.failUnlessEqual(MEDIA_CDR_80, media.mediaType)
        self.failUnlessEqual(False, media.rewritable)
        self.failIfEqual(0, media.initialLeadIn)    # just care that it's set, not what its value is
        self.failIfEqual(0, media.leadIn)           # just care that it's set, not what its value is
        self.failUnlessEqual(358400, media.capacity)

    def testConstructor_005(self):
        """
        Test the constructor with the C{MEDIA_CDRW_80} media type.
        """
        media = MediaDefinition(MEDIA_CDRW_80)
        self.failUnlessEqual(MEDIA_CDRW_80, media.mediaType)
        self.failUnlessEqual(True, media.rewritable)
        self.failIfEqual(0, media.initialLeadIn)    # just care that it's set, not what its value is
        self.failIfEqual(0, media.leadIn)           # just care that it's set, not what its value is
        self.failUnlessEqual(358400, media.capacity)


############################
# TestMediaCapacity class
############################

class TestMediaCapacity(unittest.TestCase):

    """Tests for the MediaCapacity class."""

    def testConstructor_001(self):
        """
        Test the constructor.
        """
        capacity = MediaCapacity(100, 200, (300, 400))
        self.failUnlessEqual(100, capacity.bytesUsed)
        self.failUnlessEqual(200, capacity.bytesAvailable)
        self.failUnlessEqual((300, 400), capacity.boundaries)


#####################
# TestCdWriter class
#####################

class TestCdWriter(unittest.TestCase):

    """Tests for the CdWriter class."""

    ################
    # Setup methods
    ################

    def setUp(self):
        pass

    def tearDown(self):
        pass

    ###################
    # Test constructor
    ###################

    def testConstructor_001(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid non-ATA SCSI id and defaults for the remaining
        arguments.  Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_002(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid ATA SCSI id and defaults for the remaining
        arguments.  Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="ATA:0,0,0", unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("ATA:0,0,0", writer.scsiId)
        self.failUnlessEqual("ATA:0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_003(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid ATAPI SCSI id and defaults for the remaining
        arguments.  Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="ATAPI:0,0,0", unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("ATAPI:0,0,0", writer.scsiId)
        self.failUnlessEqual("ATAPI:0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_004(self):
        """
        Test the constructor with device C{/dev/null} (which is writable and
        exists).  Use an invalid SCSI id and defaults for the remaining
        arguments.  Make sure that C{unittest=False}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="blech", unittest=False)

    def testConstructor_005(self):
        """
        Test the constructor with device C{/dev/null} (which is writable and
        exists).  Use an invalid SCSI id and defaults for the remaining
        arguments.  Make sure that C{unittest=True}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="blech", unittest=True)

    def testConstructor_006(self):
        """
        Test the constructor with a non-absolute device path.  Use a valid
        SCSI id and defaults for the remaining arguments.  Make sure that
        C{unittest=False}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="dev/null", scsiId="0,0,0", unittest=False)

    def testConstructor_007(self):
        """
        Test the constructor with a non-absolute device path.  Use a valid
        SCSI id and defaults for the remaining arguments.  Make sure that
        C{unittest=True}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="dev/null", scsiId="0,0,0", unittest=True)

    def testConstructor_008(self):
        """
        Test the constructor with an absolute device path that does not
        exist.  Use a valid SCSI id and defaults for the remaining arguments.
        Make sure that C{unittest=False}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/bogus", scsiId="0,0,0", unittest=False)

    def testConstructor_009(self):
        """
        Test the constructor with an absolute device path that does not
        exist.  Use a valid SCSI id and defaults for the remaining arguments.
        Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/bogus", scsiId="0,0,0", unittest=True)
        self.failUnlessEqual("/bogus", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_010(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a value of 0 for the drive speed.
        Make sure that C{unittest=False}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", driveSpeed=0, unittest=False)

    def testConstructor_011(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a value of 0 for the drive speed.
        Make sure that C{unittest=True}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", driveSpeed=0, unittest=True)

    def testConstructor_012(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a value of 1 for the drive speed.
        Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", driveSpeed=1, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(1, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_013(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a value of 5 for the drive speed.
        Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", driveSpeed=5, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(5, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_014(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and an invalid media type.  Make sure
        that C{unittest=False}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", mediaType=42, unittest=False)

    def testConstructor_015(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and an invalid media type.  Make sure
        that C{unittest=True}.
        """
        self.failUnlessRaises(ValueError, CdWriter, device="/dev/null", scsiId="0,0,0", mediaType=42, unittest=True)

    def testConstructor_016(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a media type of MEDIA_CDR_74.  Make
        sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDR_74, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDR_74, writer.media.mediaType)
        self.failUnlessEqual(False, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_017(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a media type of MEDIA_CDRW_74.  Make
        sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDRW_74, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_74, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_018(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a media type of MEDIA_CDR_80.  Make
        sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDR_80, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDR_80, writer.media.mediaType)
        self.failUnlessEqual(False, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_019(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use a valid SCSI id and a media type of MEDIA_CDRW_80.  Make
        sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId="0,0,0", mediaType=MEDIA_CDRW_80, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual("0,0,0", writer.scsiId)
        self.failUnlessEqual("0,0,0", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_80, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_020(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use None for SCSI id and a media type of MEDIA_CDRW_80.
        Make sure that C{unittest=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId=None, mediaType=MEDIA_CDRW_80, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual(None, writer.scsiId)
        self.failUnlessEqual("/dev/null", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_80, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(False, writer._noEject)

    def testConstructor_021(self):
        """
        Test the constructor with device C{/dev/null}, which is writable and
        exists.  Use None for SCSI id and a media type of MEDIA_CDRW_80.
        Make sure that C{unittest=True}.  Use C{noEject=True}.
        """
        writer = CdWriter(device="/dev/null", scsiId=None, mediaType=MEDIA_CDRW_80, noEject=True, unittest=True)
        self.failUnlessEqual("/dev/null", writer.device)
        self.failUnlessEqual(None, writer.scsiId)
        self.failUnlessEqual("/dev/null", writer.hardwareId)
        self.failUnlessEqual(None, writer.driveSpeed)
        self.failUnlessEqual(MEDIA_CDRW_80, writer.media.mediaType)
        self.failUnlessEqual(True, writer.isRewritable())
        self.failUnlessEqual(True, writer._noEject)

    ####################################
    # Test the capacity-related methods
    ####################################

    def testCapacity_001(self):
        """
        Test _calculateCapacity for boundaries of None and MEDIA_CDR_74.
        """
        expectedAvailable = MB650-ILEAD    # 650 MB, minus initial lead-in
        media = MediaDefinition(MEDIA_CDR_74)
        boundaries = None
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(0, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual(None, capacity.boundaries)

    def testCapacity_002(self):
        """
        Test _calculateCapacity for boundaries of None and MEDIA_CDRW_74.
        """
        expectedAvailable = MB650-ILEAD    # 650 MB, minus initial lead-in
        media = MediaDefinition(MEDIA_CDRW_74)
        boundaries = None
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(0, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual(None, capacity.boundaries)

    def testCapacity_003(self):
        """
        Test _calculateCapacity for boundaries of None and MEDIA_CDR_80.
        """
        expectedAvailable = MB700-ILEAD    # 700 MB, minus initial lead-in
        media = MediaDefinition(MEDIA_CDR_80)
        boundaries = None
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(0, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual(None, capacity.boundaries)

    def testCapacity_004(self):
        """
        Test _calculateCapacity for boundaries of None and MEDIA_CDRW_80.
        """
        expectedAvailable = MB700-ILEAD    # 700 MB, minus initial lead-in
        media = MediaDefinition(MEDIA_CDRW_80)
        boundaries = None
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(0, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual(None, capacity.boundaries)

    def testCapacity_005(self):
        """
        Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDR_74.
        """
        expectedUsed = (1*2048.0)                       # 1 sector
        expectedAvailable = MB650-SLEAD-expectedUsed    # 650 MB, minus session lead-in, minus 1 sector
        media = MediaDefinition(MEDIA_CDR_74)
        boundaries = (0, 1)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 1), capacity.boundaries)

    def testCapacity_006(self):
        """
        Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDRW_74.
        """
        expectedUsed = (1*2048.0)                       # 1 sector
        expectedAvailable = MB650-SLEAD-expectedUsed    # 650 MB, minus session lead-in, minus 1 sector
        media = MediaDefinition(MEDIA_CDRW_74)
        boundaries = (0, 1)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 1), capacity.boundaries)

    def testCapacity_007(self):
        """
        Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDR_80.
        """
        expectedUsed = (1*2048.0)                       # 1 sector
        expectedAvailable = MB700-SLEAD-expectedUsed    # 700 MB, minus session lead-in, minus 1 sector
        media = MediaDefinition(MEDIA_CDR_80)
        boundaries = (0, 1)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 1), capacity.boundaries)

    def testCapacity_008(self):
        """
        Test _calculateCapacity for boundaries of (0, 1) and MEDIA_CDRW_80.
        """
        expectedUsed = (1*2048.0)                       # 1 sector
        expectedAvailable = MB700-SLEAD-expectedUsed    # 700 MB, minus session lead-in, minus 1 sector
        media = MediaDefinition(MEDIA_CDRW_80)
        boundaries = (0, 1)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 1), capacity.boundaries)

    def testCapacity_009(self):
        """
        Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDR_74.
        """
        expectedUsed = (999*2048.0)                     # 999 sectors
        expectedAvailable = MB650-SLEAD-expectedUsed    # 650 MB, minus session lead-in, minus 999 sectors
        media = MediaDefinition(MEDIA_CDR_74)
        boundaries = (0, 999)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 999), capacity.boundaries)

    def testCapacity_010(self):
        """
        Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDRW_74.
        """
        expectedUsed = (999*2048.0)                     # 999 sectors
        expectedAvailable = MB650-SLEAD-expectedUsed    # 650 MB, minus session lead-in, minus 999 sectors
        media = MediaDefinition(MEDIA_CDRW_74)
        boundaries = (0, 999)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 999), capacity.boundaries)

    def testCapacity_011(self):
        """
        Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDR_80.
        """
        expectedUsed = (999*2048.0)                     # 999 sectors
        expectedAvailable = MB700-SLEAD-expectedUsed    # 700 MB, minus session lead-in, minus 999 sectors
        media = MediaDefinition(MEDIA_CDR_80)
        boundaries = (0, 999)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 999), capacity.boundaries)

    def testCapacity_012(self):
        """
        Test _calculateCapacity for boundaries of (0, 999) and MEDIA_CDRW_80.
        """
        expectedUsed = (999*2048.0)                     # 999 sectors
        expectedAvailable = MB700-SLEAD-expectedUsed    # 700 MB, minus session lead-in, minus 999 sectors
        media = MediaDefinition(MEDIA_CDRW_80)
        boundaries = (0, 999)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 999), capacity.boundaries)

    def testCapacity_013(self):
        """
        Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDR_74.
        """
        expectedUsed = (1000*2048.0)                    # 1000 sectors
        expectedAvailable = MB650-SLEAD-expectedUsed    # 650 MB, minus session lead-in, minus 1000 sectors
        media = MediaDefinition(MEDIA_CDR_74)
        boundaries = (500, 1000)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((500, 1000), capacity.boundaries)

    def testCapacity_014(self):
        """
        Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDRW_74.
        """
        expectedUsed = (1000*2048.0)                    # 1000 sectors
        expectedAvailable = MB650-SLEAD-expectedUsed    # 650 MB, minus session lead-in, minus 1000 sectors
        media = MediaDefinition(MEDIA_CDRW_74)
        boundaries = (500, 1000)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((500, 1000), capacity.boundaries)

    def testCapacity_015(self):
        """
        Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDR_80.
        """
        expectedUsed = (1000*2048.0)                    # 1000 sectors
        expectedAvailable = MB700-SLEAD-expectedUsed    # 700 MB, minus session lead-in, minus 1000 sectors
        media = MediaDefinition(MEDIA_CDR_80)
        boundaries = (500, 1000)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((500, 1000), capacity.boundaries)

    def testCapacity_016(self):
        """
        Test _calculateCapacity for boundaries of (500, 1000) and MEDIA_CDRW_80.
        """
        expectedUsed = (1000*2048.0)                    # 1000 sectors
        expectedAvailable = MB700-SLEAD-expectedUsed    # 700 MB, minus session lead-in, minus 1000 sectors
        media = MediaDefinition(MEDIA_CDRW_80)
        boundaries = (500, 1000)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((500, 1000), capacity.boundaries)

    def testCapacity_017(self):
        """
        Test _getBoundaries when self.deviceSupportsMulti is False;
        entireDisc=False, useMulti=True.
        """
        writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True)
        writer._deviceSupportsMulti = False
        boundaries = writer._getBoundaries(entireDisc=False, useMulti=True)
        self.failUnlessEqual(None, boundaries)

    def testCapacity_018(self):
        """
        Test _getBoundaries when self.deviceSupportsMulti is False;
        entireDisc=True, useMulti=True.
        """
        writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True)
        writer._deviceSupportsMulti = False
        boundaries = writer._getBoundaries(entireDisc=True, useMulti=True)
        self.failUnlessEqual(None, boundaries)

    def testCapacity_019(self):
        """
        Test _getBoundaries when self.deviceSupportsMulti is False;
        entireDisc=True, useMulti=False.
        """
        writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True)
        writer._deviceSupportsMulti = False
        boundaries = writer._getBoundaries(entireDisc=True, useMulti=False)
        self.failUnlessEqual(None, boundaries)

    def testCapacity_020(self):
        """
        Test _getBoundaries when self.deviceSupportsMulti is False;
        entireDisc=False, useMulti=False.
        """
        writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True)
        writer._deviceSupportsMulti = False
        boundaries = writer._getBoundaries(entireDisc=False, useMulti=False)
        self.failUnlessEqual(None, boundaries)

    def testCapacity_021(self):
        """
        Test _getBoundaries when self.deviceSupportsMulti is True;
        entireDisc=True, useMulti=True.
        """
        writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True)
        writer._deviceSupportsMulti = True
        boundaries = writer._getBoundaries(entireDisc=True, useMulti=True)
        self.failUnlessEqual(None, boundaries)

    def testCapacity_022(self):
        """
        Test _getBoundaries when self.deviceSupportsMulti is True;
        entireDisc=True, useMulti=False.
        """
        writer = CdWriter(device="/dev/cdrw", scsiId="0,0,0", unittest=True)
        writer._deviceSupportsMulti = True
        boundaries = writer._getBoundaries(entireDisc=True, useMulti=False)
        self.failUnlessEqual(None, boundaries)

    def testCapacity_023(self):
        """
        Test _calculateCapacity for boundaries of (321342, 330042) and
        MEDIA_CDRW_74.  This was a bug fixed for v2.1.2.
        """
        expectedUsed = (330042*2048.0)    # 330042 sectors
        expectedAvailable = 0             # nothing should be available
        media = MediaDefinition(MEDIA_CDRW_74)
        boundaries = (321342, 330042)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((321342, 330042), capacity.boundaries)

    def testCapacity_024(self):
        """
        Test _calculateCapacity for boundaries of (0, 330042) and
        MEDIA_CDRW_74.  This was a bug fixed for v2.1.3.
        """
        expectedUsed = (330042*2048.0)    # 330042 sectors
        expectedAvailable = 0             # nothing should be available
        media = MediaDefinition(MEDIA_CDRW_74)
        boundaries = (0, 330042)
        capacity = CdWriter._calculateCapacity(media, boundaries)
        self.failUnlessEqual(expectedUsed, capacity.bytesUsed)
        self.failUnlessEqual(expectedAvailable, capacity.bytesAvailable)
        self.failUnlessEqual((0, 330042), capacity.boundaries)

    #########################################
    # Test methods that build argument lists
    #########################################

    def testBuildArgs_001(self):
        """
        Test _buildOpenTrayArgs().
        """
        args = CdWriter._buildOpenTrayArgs(device="/dev/stuff")
        self.failUnlessEqual(["/dev/stuff", ], args)

    def testBuildArgs_002(self):
        """
        Test _buildCloseTrayArgs().
        """
        args = CdWriter._buildCloseTrayArgs(device="/dev/stuff")
        self.failUnlessEqual(["-t", "/dev/stuff", ], args)

    def testBuildArgs_003(self):
        """
        Test _buildPropertiesArgs().
        """
        args = CdWriter._buildPropertiesArgs(hardwareId="0,0,0")
        self.failUnlessEqual(["-prcap", "dev=0,0,0", ], args)

    def testBuildArgs_004(self):
        """
        Test _buildBoundariesArgs().
        """
        args = CdWriter._buildBoundariesArgs(hardwareId="ATA:0,0,0")
        self.failUnlessEqual(["-msinfo", "dev=ATA:0,0,0", ], args)

    def testBuildArgs_005(self):
        """
        Test _buildBoundariesArgs().
        """
        args = CdWriter._buildBoundariesArgs(hardwareId="ATAPI:0,0,0")
        self.failUnlessEqual(["-msinfo", "dev=ATAPI:0,0,0", ], args)

    def testBuildArgs_006(self):
        """
        Test _buildBlankArgs(), default drive speed.
        """
        args = CdWriter._buildBlankArgs(hardwareId="ATA:0,0,0")
        self.failUnlessEqual(["-v", "blank=fast", "dev=ATA:0,0,0", ], args)

    def testBuildArgs_007(self):
        """
        Test _buildBlankArgs(), default drive speed.
        """
        args = CdWriter._buildBlankArgs(hardwareId="ATAPI:0,0,0")
        self.failUnlessEqual(["-v", "blank=fast", "dev=ATAPI:0,0,0", ], args)

    def testBuildArgs_008(self):
        """
        Test _buildBlankArgs(), with None for drive speed.
        """
        args = CdWriter._buildBlankArgs(hardwareId="0,0,0", driveSpeed=None)
        self.failUnlessEqual(["-v", "blank=fast", "dev=0,0,0", ], args)

    def testBuildArgs_009(self):
        """
        Test _buildBlankArgs(), with 1 for drive speed.
        """
        args = CdWriter._buildBlankArgs(hardwareId="0,0,0", driveSpeed=1)
        self.failUnlessEqual(["-v", "blank=fast", "speed=1", "dev=0,0,0", ], args)

    def testBuildArgs_010(self):
        """
        Test _buildBlankArgs(), with 5 for drive speed.
        """
        args = CdWriter._buildBlankArgs(hardwareId="ATA:1,2,3", driveSpeed=5)
        self.failUnlessEqual(["-v", "blank=fast", "speed=5", "dev=ATA:1,2,3", ], args)

    def testBuildArgs_011(self):
        """
        Test _buildBlankArgs(), with 5 for drive speed.
        """
        args = CdWriter._buildBlankArgs(hardwareId="ATAPI:1,2,3", driveSpeed=5)
        self.failUnlessEqual(["-v", "blank=fast", "speed=5", "dev=ATAPI:1,2,3", ], args)

    def testBuildArgs_012(self):
        """
        Test _buildWriteArgs(), default drive speed and writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever")
        self.failUnlessEqual(["-v", "dev=0,0,0", "-multi", "-data", "/whatever" ], args)

    def testBuildArgs_013(self):
        """
        Test _buildWriteArgs(), None for drive speed, True for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=None, writeMulti=True)
        self.failUnlessEqual(["-v", "dev=0,0,0", "-multi", "-data", "/whatever" ], args)

    def testBuildArgs_014(self):
        """
        Test _buildWriteArgs(), None for drive speed, False for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=None, writeMulti=False)
        self.failUnlessEqual(["-v", "dev=0,0,0", "-data", "/whatever" ], args)

    def testBuildArgs_015(self):
        """
        Test _buildWriteArgs(), 1 for drive speed, True for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/whatever", driveSpeed=1, writeMulti=True)
        self.failUnlessEqual(["-v", "speed=1", "dev=0,0,0", "-multi", "-data", "/whatever" ], args)

    def testBuildArgs_016(self):
        """
        Test _buildWriteArgs(), 5 for drive speed, True for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="0,1,2", imagePath="/whatever", driveSpeed=5, writeMulti=True)
        self.failUnlessEqual(["-v", "speed=5", "dev=0,1,2", "-multi", "-data", "/whatever" ], args)

    def testBuildArgs_017(self):
        """
        Test _buildWriteArgs(), 1 for drive speed, False for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="0,0,0", imagePath="/dvl/stuff/whatever/more", driveSpeed=1, writeMulti=False)
        self.failUnlessEqual(["-v", "speed=1", "dev=0,0,0", "-data", "/dvl/stuff/whatever/more" ], args)
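The argument lists asserted above cover only what follows the executable name on the command line. As an illustrative sketch of the same layout assembled into a full command, assuming a plain `cdrecord` invocation (the function name and the executable prefix are my assumptions, not the module's actual invocation mechanism):

```python
def buildWriteCommand(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
    """Assemble a full cdrecord write command, mirroring the argument layout
    asserted by the testBuildArgs cases (illustrative sketch only)."""
    args = ["-v"]
    if driveSpeed is not None:
        args.append("speed=%d" % driveSpeed)     # optional write speed
    args.append("dev=%s" % hardwareId)           # SCSI id or device path
    if writeMulti:
        args.append("-multi")                    # leave the disc open for more sessions
    args.extend(["-data", imagePath])            # the ISO image to burn
    return ["cdrecord"] + args

print(buildWriteCommand("0,0,0", "/whatever", driveSpeed=1, writeMulti=True))
# ['cdrecord', '-v', 'speed=1', 'dev=0,0,0', '-multi', '-data', '/whatever']
```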
    def testBuildArgs_018(self):
        """
        Test _buildWriteArgs(), 5 for drive speed, False for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="ATA:1,2,3", imagePath="/whatever", driveSpeed=5, writeMulti=False)
        self.failUnlessEqual(["-v", "speed=5", "dev=ATA:1,2,3", "-data", "/whatever" ], args)

    def testBuildArgs_019(self):
        """
        Test _buildWriteArgs(), 5 for drive speed, False for writeMulti.
        """
        args = CdWriter._buildWriteArgs(hardwareId="ATAPI:1,2,3", imagePath="/whatever", driveSpeed=5, writeMulti=False)
        self.failUnlessEqual(["-v", "speed=5", "dev=ATAPI:1,2,3", "-data", "/whatever" ], args)

    ##########################################
    # Test methods that parse cdrecord output
    ##########################################

    def testParseOutput_001(self):
        """
        Test _parseBoundariesOutput() for valid data, taken from a real
        example.
        """
        output = [ "268582,302230\n", ]
        boundaries = CdWriter._parseBoundariesOutput(output)
        self.failUnlessEqual((268582, 302230), boundaries)

    def testParseOutput_002(self):
        """
        Test _parseBoundariesOutput() for valid data, taken from a real
        example, with lots of extra whitespace around the values.
        """
        output = [ " 268582 , 302230 \n", ]
        boundaries = CdWriter._parseBoundariesOutput(output)
        self.failUnlessEqual((268582, 302230), boundaries)

    def testParseOutput_003(self):
        """
        Test _parseBoundariesOutput() for valid data, taken from a real
        example, with lots of extra garbage after the first line.
        """
        output = [ "268582,302230\n", "more\n", "bogus\n", "crap\n", "here\n", "to\n", "confuse\n", "things\n", ]
        boundaries = CdWriter._parseBoundariesOutput(output)
        self.failUnlessEqual((268582, 302230), boundaries)

    def testParseOutput_004(self):
        """
        Test _parseBoundariesOutput() for invalid data, based on a real
        example, with lots of extra garbage before the first line.
        """
        output = [ "more\n", "bogus\n", "crap\n", "here\n", "to\n", "confuse\n", "things\n", "268582,302230\n", ]
        self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output)

    def testParseOutput_005(self):
        """
        Test _parseBoundariesOutput() for invalid data, based on a real
        example, with the first value converted to negative.
        """
        output = [ "-268582,302230\n", ]
        self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output)

    def testParseOutput_006(self):
        """
        Test _parseBoundariesOutput() for invalid data, based on a real
        example, with the second value converted to negative.
        """
        output = [ "268582,-302230\n", ]
        self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output)

    def testParseOutput_007(self):
        """
        Test _parseBoundariesOutput() for valid data, taken from a real
        example, with the first value converted to zero.
        """
        output = [ "0,302230\n", ]
        boundaries = CdWriter._parseBoundariesOutput(output)
        self.failUnlessEqual((0, 302230), boundaries)

    def testParseOutput_008(self):
        """
        Test _parseBoundariesOutput() for valid data, taken from a real
        example, with the second value converted to zero.
        """
        output = [ "268582,0\n", ]
        boundaries = CdWriter._parseBoundariesOutput(output)
        self.failUnlessEqual((268582, 0), boundaries)

    def testParseOutput_009(self):
        """
        Test _parseBoundariesOutput() for invalid data, based on a real
        example, with the first value converted to negative and the second
        value converted to zero.
        """
        output = [ "-268582,0\n", ]
        self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output)

    def testParseOutput_010(self):
        """
        Test _parseBoundariesOutput() for invalid data, based on a real
        example, with the first value converted to zero and the second value
        converted to negative.
        """
        output = [ "0,-302230\n", ]
        self.failUnlessRaises(IOError, CdWriter._parseBoundariesOutput, output)
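The acceptance rules these boundary tests pin down can be summarized in a small stand-alone parser: only a leading `start,end` line of non-negative sector numbers is accepted (zero is fine, negatives and garbage first lines are not), and anything else raises IOError. This is an illustrative sketch, not the module's actual `_parseBoundariesOutput()` implementation:

```python
import re

# Matches a "start,end" pair of non-negative integers, tolerating whitespace.
_BOUNDARY_PATTERN = re.compile(r"^\s*(\d+)\s*,\s*(\d+)\s*$")

def parseBoundariesOutput(output):
    """Parse 'cdrecord -msinfo'-style output into a (start, end) sector tuple."""
    if not output:
        raise IOError("No boundaries output to parse.")
    match = _BOUNDARY_PATTERN.match(output[0])  # only the first line counts
    if match is None:
        raise IOError("Unable to parse session boundaries from output.")
    return (int(match.group(1)), int(match.group(2)))

print(parseBoundariesOutput(["268582,302230\n"]))  # (268582, 302230)
```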
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support 
changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_012(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including only stdout. """ output = ['Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not 
read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_013(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device type removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have 
load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual(None, deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_014(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device vendor removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using 
Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual(None, deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_015(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, device id removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have 
load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual(None, deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_016(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, buffer size removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read 
fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(None, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_017(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "supports multi" removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not 
have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(False, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_018(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "has tray" removed. """ output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session 
CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Does support ejection of CD via START/STOP command\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(False, deviceHasTray) self.failUnlessEqual(True, deviceCanEject) def testParseOutput_019(self): """ Test _parsePropertiesOutput() for valid data, taken from a real example, including stderr and stdout mixed together, "can eject" removed. 
""" output = ["scsidev: '0,0,0'\n", 'scsibus: 0 target: 0 lun: 0\n', 'Linux sg driver version: 3.1.22\n', 'Cdrecord 1.10 (i686-pc-linux-gnu) Copyright (C) 1995-2001 J\xf6rg Schilling\n', "Using libscg version 'schily-0.5'\n", 'Device type : Removable CD-ROM\n', 'Version : 0\n', 'Response Format: 1\n', "Vendor_info : 'SONY '\n", "Identifikation : 'CD-RW CRX140E '\n", "Revision : '1.0n'\n", 'Device seems to be: Generic mmc CD-RW.\n', '\n', 'Drive capabilities, per page 2A:\n', '\n', ' Does read CD-R media\n', ' Does write CD-R media\n', ' Does read CD-RW media\n', ' Does write CD-RW media\n', ' Does not read DVD-ROM media\n', ' Does not read DVD-R media\n', ' Does not write DVD-R media\n', ' Does not read DVD-RAM media\n', ' Does not write DVD-RAM media\n', ' Does support test writing\n', '\n', ' Does read Mode 2 Form 1 blocks\n', ' Does read Mode 2 Form 2 blocks\n', ' Does read digital audio blocks\n', ' Does restart non-streamed digital audio reads accurately\n', ' Does not support BURN-Proof (Sanyo)\n', ' Does read multi-session CDs\n', ' Does read fixed-packet CD media using Method 2\n', ' Does not read CD bar code\n', ' Does not read R-W subcode information\n', ' Does read raw P-W subcode data from lead in\n', ' Does return CD media catalog number\n', ' Does return CD ISRC information\n', ' Does not support C2 error pointers\n', ' Does not deliver composite A/V data\n', '\n', ' Does play audio CDs\n', ' Number of volume control levels: 256\n', ' Does support individual volume control setting for each channel\n', ' Does support independent mute setting for each channel\n', ' Does not support digital output on port 1\n', ' Does not support digital output on port 2\n', '\n', ' Loading mechanism type: tray\n', ' Does not lock media on power up via prevent jumper\n', ' Does allow media to be locked in the drive via PREVENT/ALLOW command\n', ' Is not currently in a media-locked state\n', ' Does not support changing side of disk\n', ' Does not have 
load-empty-slot-in-changer feature\n', ' Does not support Individual Disk Present feature\n', '\n', ' Maximum read speed in kB/s: 5645\n', ' Current read speed in kB/s: 3528\n', ' Maximum write speed in kB/s: 1411\n', ' Current write speed in kB/s: 706\n', ' Buffer size in KB: 4096\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual("Removable CD-ROM", deviceType) self.failUnlessEqual("SONY", deviceVendor) self.failUnlessEqual("CD-RW CRX140E", deviceId) self.failUnlessEqual(4096.0*1024.0, deviceBufferSize) self.failUnlessEqual(True, deviceSupportsMulti) self.failUnlessEqual(True, deviceHasTray) self.failUnlessEqual(False, deviceCanEject) def testParseOutput_020(self): """ Test _parsePropertiesOutput() for nonsensical data, just a bunch of empty lines. """ output = [ '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', '\n', ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual(None, deviceType) self.failUnlessEqual(None, deviceVendor) self.failUnlessEqual(None, deviceId) self.failUnlessEqual(None, deviceBufferSize) self.failUnlessEqual(False, deviceSupportsMulti) self.failUnlessEqual(False, deviceHasTray) self.failUnlessEqual(False, deviceCanEject) def testParseOutput_021(self): """ Test _parsePropertiesOutput() for nonsensical data, just an empty list. 
""" output = [ ] (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject) = CdWriter._parsePropertiesOutput(output) self.failUnlessEqual(None, deviceType) self.failUnlessEqual(None, deviceVendor) self.failUnlessEqual(None, deviceId) self.failUnlessEqual(None, deviceBufferSize) self.failUnlessEqual(False, deviceSupportsMulti) self.failUnlessEqual(False, deviceHasTray) self.failUnlessEqual(False, deviceCanEject) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMediaDefinition, 'test'), unittest.makeSuite(TestMediaCapacity, 'test'), unittest.makeSuite(TestCdWriter, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/peertests.py0000664000175000017500000017420012560016766021776 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests peer functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/peer.py. Code Coverage ============= This module contains individual tests for most of the public functions and classes implemented in peer.py, including the C{LocalPeer} and C{RemotePeer} classes. Unfortunately, some of the code can't be tested. In particular, the stage code allows the caller to change ownership on files. Generally, this can only be done by root, and most people won't be running these tests as root. As such, we can't test this functionality. There are also some other pieces of functionality that can only be tested in certain environments (see below). Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. 
I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. If you want to run all of the tests, set PEERTESTS_FULL to "Y" in the environment. In this module, network-related testing is what causes us our biggest problems. In order to test the RemotePeer, we need a "remote" host that we can rcp to and from. We want to fall back on using localhost and the current user, but that might not be safe or appropriate. As such, we'll only run these tests if PEERTESTS_FULL is set to "Y" in the environment. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # Import standard modules import os import stat import unittest import tempfile from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar from CedarBackup2.testutil import getMaskAsMode, getLogin, runningAsRoot, failUnlessAssignRaises from CedarBackup2.testutil import platformSupportsPermissions, platformWindows, platformCygwin from CedarBackup2.peer import LocalPeer, RemotePeer from CedarBackup2.peer import DEF_RCP_COMMAND, DEF_RSH_COMMAND from CedarBackup2.peer import DEF_COLLECT_INDICATOR, DEF_STAGE_INDICATOR ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree1.tar.gz", "tree2.tar.gz", "tree9.tar.gz", ] REMOTE_HOST = "localhost" # Always use login@localhost as our "remote" host NONEXISTENT_FILE = "bogus" # This file name should never exist NONEXISTENT_HOST = "hostname.invalid" # RFC 2606 reserves the ".invalid" TLD for "obviously invalid" names NONEXISTENT_USER = "unittestuser" # This user name should never exist on localhost NONEXISTENT_CMD = "/bogus/~~~ZZZZ/bad/not/there" # This command should never exist in the filesystem ####################################################################### # Utility functions ####################################################################### def runAllTests(): """Returns true/false depending on whether the full test suite should be run.""" if "PEERTESTS_FULL" in os.environ: return os.environ["PEERTESTS_FULL"] == "Y" else: return False ####################################################################### # Test Case Classes ####################################################################### 
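The PEERTESTS_FULL gate above is consumed by runAllTests() when the module builds its suite. As a hedged sketch (not code from this project), the same reduced-versus-full behavior can be expressed with unittest.skipUnless, which is available as of Python 2.7, the project's minimum version; the class and test names here are hypothetical:

```python
import os
import unittest

def runAllTests():
   # Mirrors the module's toggle: full suite only when PEERTESTS_FULL is "Y".
   return os.environ.get("PEERTESTS_FULL") == "Y"

class HypotheticalRemotePeerTests(unittest.TestCase):
   """Hypothetical network-dependent tests, skipped in the reduced suite."""

   @unittest.skipUnless(runAllTests(), "PEERTESTS_FULL not set to Y; skipping network tests")
   def testStage_001(self):
      # A real test would stage files via rcp to login@localhost here;
      # this body is only a placeholder for the skip-gating pattern.
      self.assertTrue(True)
```

The decorator approach is an alternative to filtering test prefixes through unittest.makeSuite(), which is what this module actually does; both leave the reduced suite free of network requirements.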
######################
# TestLocalPeer class
######################

class TestLocalPeer(unittest.TestCase):

   """Tests for the LocalPeer class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except:
         pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   def getFileMode(self, components):
      """Calls buildPath on components and then returns file mode for the file."""
      return stat.S_IMODE(os.stat(self.buildPath(components)).st_mode)

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ###########################
   # Test basic functionality
   ###########################

   def testBasic_001(self):
      """
      Make sure exception is thrown for non-absolute collect directory.
      """
      name = "peer1"
      collectDir = "whatever/something/else/not/absolute"
      self.failUnlessRaises(ValueError, LocalPeer, name, collectDir)

   def testBasic_002(self):
      """
      Make sure attributes are set properly for valid constructor input.
      """
      name = "peer1"
      collectDir = "/absolute/path/name"
      ignoreFailureMode = "all"
      peer = LocalPeer(name, collectDir, ignoreFailureMode)
      self.failUnlessEqual(name, peer.name)
      self.failUnlessEqual(collectDir, peer.collectDir)
      self.failUnlessEqual(ignoreFailureMode, peer.ignoreFailureMode)

   def testBasic_003(self):
      """
      Make sure attributes are set properly for valid constructor input, with
      spaces in the collect directory path.
      """
      name = "peer1"
      collectDir = "/ absolute / path/ name "
      peer = LocalPeer(name, collectDir)
      self.failUnlessEqual(name, peer.name)
      self.failUnlessEqual(collectDir, peer.collectDir)

   def testBasic_004(self):
      """
      Make sure assignment works for all valid failure modes.
      """
      name = "peer1"
      collectDir = "/absolute/path/name"
      ignoreFailureMode = "all"
      peer = LocalPeer(name, collectDir, ignoreFailureMode)
      self.failUnlessEqual("all", peer.ignoreFailureMode)
      peer.ignoreFailureMode = "none"
      self.failUnlessEqual("none", peer.ignoreFailureMode)
      peer.ignoreFailureMode = "daily"
      self.failUnlessEqual("daily", peer.ignoreFailureMode)
      peer.ignoreFailureMode = "weekly"
      self.failUnlessEqual("weekly", peer.ignoreFailureMode)
      self.failUnlessAssignRaises(ValueError, peer, "ignoreFailureMode", "bogus")

   ###############################
   # Test checkCollectIndicator()
   ###############################

   def testCheckCollectIndicator_001(self):
      """
      Attempt to check collect indicator with non-existent collect directory.
      """
      name = "peer1"
      collectDir = self.buildPath([NONEXISTENT_FILE, ])
      self.failUnless(not os.path.exists(collectDir))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator()
      self.failUnlessEqual(False, result)

   def testCheckCollectIndicator_002(self):
      """
      Attempt to check collect indicator with non-readable collect directory.
      """
      name = "peer1"
      collectDir = self.buildPath(["collect", ])
      os.mkdir(collectDir)
      self.failUnless(os.path.exists(collectDir))
      os.chmod(collectDir, 0200)    # user can't read his own directory
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator()
      self.failUnlessEqual(False, result)
      os.chmod(collectDir, 0777)    # so we can remove it safely

   def testCheckCollectIndicator_003(self):
      """
      Attempt to check collect indicator with a collect indicator file that
      does not exist.
      """
      name = "peer1"
      collectDir = self.buildPath(["collect", ])
      collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ])
      os.mkdir(collectDir)
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(not os.path.exists(collectIndicator))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator()
      self.failUnlessEqual(False, result)

   def testCheckCollectIndicator_004(self):
      """
      Attempt to check collect indicator with a collect indicator file that
      does not exist, custom name.
      """
      name = "peer1"
      collectDir = self.buildPath(["collect", ])
      collectIndicator = self.buildPath(["collect", NONEXISTENT_FILE, ])
      os.mkdir(collectDir)
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(not os.path.exists(collectIndicator))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator(collectIndicator=NONEXISTENT_FILE)
      self.failUnlessEqual(False, result)

   def testCheckCollectIndicator_005(self):
      """
      Attempt to check collect indicator with a collect indicator file that
      does exist.
      """
      name = "peer1"
      collectDir = self.buildPath(["collect", ])
      collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ])
      os.mkdir(collectDir)
      open(collectIndicator, "w").write("")     # touch the file
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(os.path.exists(collectIndicator))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator()
      self.failUnlessEqual(True, result)

   def testCheckCollectIndicator_006(self):
      """
      Attempt to check collect indicator with a collect indicator file that
      does exist, custom name.
      """
      name = "peer1"
      collectDir = self.buildPath(["collect", ])
      collectIndicator = self.buildPath(["collect", "different", ])
      os.mkdir(collectDir)
      open(collectIndicator, "w").write("")     # touch the file
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(os.path.exists(collectIndicator))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator(collectIndicator="different")
      self.failUnlessEqual(True, result)

   def testCheckCollectIndicator_007(self):
      """
      Attempt to check collect indicator with a collect indicator file that
      does exist, with spaces in the collect directory path.
      """
      name = "peer1"
      collectDir = self.buildPath(["collect directory here", ])
      collectIndicator = self.buildPath(["collect directory here", DEF_COLLECT_INDICATOR, ])
      os.mkdir(collectDir)
      open(collectIndicator, "w").write("")     # touch the file
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(os.path.exists(collectIndicator))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator()
      self.failUnlessEqual(True, result)

   def testCheckCollectIndicator_008(self):
      """
      Attempt to check collect indicator with a collect indicator file that
      does exist, custom name, with spaces in the collect directory path and
      collect indicator file name.
      """
      name = "peer1"
      if platformWindows() or platformCygwin():
         # os.listdir has problems with trailing spaces
         collectDir = self.buildPath([" collect dir", ])
         collectIndicator = self.buildPath([" collect dir", "different, file", ])
      else:
         collectDir = self.buildPath([" collect dir ", ])
         collectIndicator = self.buildPath([" collect dir ", "different, file", ])
      os.mkdir(collectDir)
      open(collectIndicator, "w").write("")     # touch the file
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(os.path.exists(collectIndicator))
      peer = LocalPeer(name, collectDir)
      result = peer.checkCollectIndicator(collectIndicator="different, file")
      self.failUnlessEqual(True, result)

   #############################
   # Test writeStageIndicator()
   #############################

   def testWriteStageIndicator_001(self):
      """
      Attempt to write stage indicator with non-existent collect directory.
      """
      name = "peer1"
      collectDir = self.buildPath([NONEXISTENT_FILE, ])
      self.failUnless(not os.path.exists(collectDir))
      peer = LocalPeer(name, collectDir)
      self.failUnlessRaises(ValueError, peer.writeStageIndicator)

   def testWriteStageIndicator_002(self):
      """
      Attempt to write stage indicator with non-writable collect directory.
      """
      if not runningAsRoot():  # root doesn't get this error
         name = "peer1"
         collectDir = self.buildPath(["collect", ])
         os.mkdir(collectDir)
         self.failUnless(os.path.exists(collectDir))
         os.chmod(collectDir, 0500)    # read-only for user
         peer = LocalPeer(name, collectDir)
         self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator)
         os.chmod(collectDir, 0777)    # so we can remove it safely

   def testWriteStageIndicator_003(self):
      """
      Attempt to write stage indicator with non-writable collect directory,
      custom name.
""" if not runningAsRoot(): # root doesn't get this error name = "peer1" collectDir = self.buildPath(["collect", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) os.chmod(collectDir, 0500) # read-only for user peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator, stageIndicator="something") os.chmod(collectDir, 0777) # so we can remove it safely def testWriteStageIndicator_004(self): """ Attempt to write stage indicator in a valid directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_005(self): """ Attempt to write stage indicator in a valid directory, custom name. """ name = "peer1" collectDir = self.buildPath(["collect", ]) stageIndicator = self.buildPath(["collect", "whatever", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator(stageIndicator="whatever") self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_006(self): """ Attempt to write stage indicator in a valid directory, with spaces in the directory name. """ name = "peer1" collectDir = self.buildPath(["collect from this directory", ]) stageIndicator = self.buildPath(["collect from this directory", DEF_STAGE_INDICATOR, ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_007(self): """ Attempt to write stage indicator in a valid directory, custom name, with spaces in the directory name and the file name. 
""" name = "peer1" collectDir = self.buildPath(["collect ME", ]) stageIndicator = self.buildPath(["collect ME", " whatever-it-takes you", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) peer.writeStageIndicator(stageIndicator=" whatever-it-takes you") self.failUnless(os.path.exists(stageIndicator)) ################### # Test stagePeer() ################### def testStagePeer_001(self): """ Attempt to stage files with non-existent collect directory. """ name = "peer1" collectDir = self.buildPath([NONEXISTENT_FILE, ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(not os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_002(self): """ Attempt to stage files with non-readable collect directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = self.buildPath(["target", ]) os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0777) # so we can remove it safely def testStagePeer_003(self): """ Attempt to stage files with non-absolute target directory. """ name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = "this/is/not/absolute" os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_004(self): """ Attempt to stage files with non-existent target directory. 
""" name = "peer1" collectDir = self.buildPath(["collect", ]) targetDir = self.buildPath(["target", ]) os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_005(self): """ Attempt to stage files with non-writable target directory. """ if not runningAsRoot(): # root doesn't get this error self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1"]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(targetDir, 0500) # read-only for user peer = LocalPeer(name, collectDir) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(targetDir, 0777) # so we can remove it safely self.failUnlessEqual(0, len(os.listdir(targetDir))) def testStagePeer_006(self): """ Attempt to stage files with empty collect directory. @note: This test assumes that scp returns an error if the directory is empty. """ self.extractTar("tree2") name = "peer1" collectDir = self.buildPath(["tree2", "dir001", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(IOError, peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_007(self): """ Attempt to stage files with empty collect directory, where the target directory name contains spaces. 
""" self.extractTar("tree2") name = "peer1" collectDir = self.buildPath(["tree2", "dir001", ]) if platformWindows(): targetDir = self.buildPath([" target directory", ]) # os.listdir has problems with trailing spaces else: targetDir = self.buildPath([" target directory ", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = LocalPeer(name, collectDir) self.failUnlessRaises(IOError, peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_008(self): """ Attempt to stage files with non-empty collect directory. """ self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_009(self): """ Attempt to stage files with non-empty collect directory, where the target directory name contains spaces. 
""" self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target directory place", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_010(self): """ Attempt to stage files with non-empty collect directory containing links and directories. """ self.extractTar("tree9") name = "peer1" collectDir = self.buildPath(["tree9", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_011(self): """ Attempt to stage files with non-empty collect directory and attempt to set valid permissions. 
""" if platformSupportsPermissions(): self.extractTar("tree1") name = "peer1" collectDir = self.buildPath(["tree1", ]) targetDir = self.buildPath(["target", ]) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = LocalPeer(name, collectDir) if getMaskAsMode() == 0400: permissions = 0642 # arbitrary, but different than umask would give else: permissions = 0400 # arbitrary count = peer.stagePeer(targetDir=targetDir, permissions=permissions) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) self.failUnlessEqual(permissions, self.getFileMode(["target", "file001", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file002", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file003", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file004", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file005", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file006", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file007", ])) ###################### # TestRemotePeer class ###################### class TestRemotePeer(unittest.TestCase): """Tests for the RemotePeer class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a 
tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileMode(self, components): """Calls buildPath on components and then returns file mode for the file.""" return stat.S_IMODE(os.stat(self.buildPath(components)).st_mode) def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Tests basic functionality ############################ def testBasic_001(self): """ Make sure exception is thrown for non-absolute collect or working directory. """ name = REMOTE_HOST collectDir = "whatever/something/else/not/absolute" workingDir = "/tmp" remoteUser = getLogin() self.failUnlessRaises(ValueError, RemotePeer, name, collectDir, workingDir, remoteUser) name = REMOTE_HOST collectDir = "/whatever/something/else/not/absolute" workingDir = "tmp" remoteUser = getLogin() self.failUnlessRaises(ValueError, RemotePeer, name, collectDir, workingDir, remoteUser) def testBasic_002(self): """ Make sure attributes are set properly for valid constructor input. 
""" name = REMOTE_HOST collectDir = "/absolute/path/name" workingDir = "/tmp" remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) self.failUnlessEqual(None, peer.ignoreFailureMode) def testBasic_003(self): """ Make sure attributes are set properly for valid constructor input, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = "/absolute/path/to/ a large directory" workingDir = "/tmp" remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_004(self): """ Make sure attributes are set properly for valid constructor input, custom rcp command. 
""" name = REMOTE_HOST collectDir = "/absolute/path/name" workingDir = "/tmp" remoteUser = getLogin() rcpCommand = "rcp -one --two three \"four five\" 'six seven' eight" peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(rcpCommand, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(["rcp", "-one", "--two", "three", "four five", "'six", "seven'", "eight", ], peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_005(self): """ Make sure attributes are set properly for valid constructor input, custom local user command. """ name = REMOTE_HOST collectDir = "/absolute/path/to/ a large directory" workingDir = "/tmp" remoteUser = getLogin() localUser = "pronovic" peer = RemotePeer(name, collectDir, workingDir, remoteUser, localUser=localUser) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(collectDir, peer.collectDir) self.failUnlessEqual(workingDir, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(localUser, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RSH_COMMAND, peer._rshCommandList) def testBasic_006(self): """ Make sure attributes are set properly for valid constructor input, custom rsh command. 
""" name = REMOTE_HOST remoteUser = getLogin() rshCommand = "rsh --whatever -something \"a b\" else" peer = RemotePeer(name, remoteUser=remoteUser, rshCommand=rshCommand) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(None, peer.collectDir) self.failUnlessEqual(None, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(rshCommand, peer.rshCommand) self.failUnlessEqual(None, peer.cbackCommand) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(DEF_RCP_COMMAND, peer._rcpCommandList) self.failUnlessEqual(["rsh", "--whatever", "-something", "a b", "else", ], peer._rshCommandList) def testBasic_007(self): """ Make sure attributes are set properly for valid constructor input, custom cback command. """ name = REMOTE_HOST remoteUser = getLogin() cbackCommand = "cback --config=whatever --logfile=whatever --mode=064" peer = RemotePeer(name, remoteUser=remoteUser, cbackCommand=cbackCommand) self.failUnlessEqual(name, peer.name) self.failUnlessEqual(None, peer.collectDir) self.failUnlessEqual(None, peer.workingDir) self.failUnlessEqual(remoteUser, peer.remoteUser) self.failUnlessEqual(None, peer.localUser) self.failUnlessEqual(None, peer.rcpCommand) self.failUnlessEqual(None, peer.rshCommand) self.failUnlessEqual(cbackCommand, peer.cbackCommand) def testBasic_008(self): """ Make sure assignment works for all valid failure modes. 
""" peer = RemotePeer(name="name", remoteUser="user", ignoreFailureMode="all") self.failUnlessEqual("all", peer.ignoreFailureMode) peer.ignoreFailureMode = "none" self.failUnlessEqual("none", peer.ignoreFailureMode) peer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", peer.ignoreFailureMode) peer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", peer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, peer, "ignoreFailureMode", "bogus") ############################### # Test checkCollectIndicator() ############################### def testCheckCollectIndicator_001(self): """ Attempt to check collect indicator with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_002(self): """ Attempt to check collect indicator with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = NONEXISTENT_USER os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_003(self): """ Attempt to check collect indicator with invalid rcp command. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_004(self): """ Attempt to check collect indicator with non-existent collect directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() self.failUnless(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_005(self): """ Attempt to check collect indicator with non-readable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) os.chmod(collectDir, 0777) # so we can remove it safely def testCheckCollectIndicator_006(self): """ Attempt to check collect indicator collect indicator file that does not exist. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_007(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", NONEXISTENT_FILE, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_008(self): """ Attempt to check collect indicator collect indicator file that does not exist, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect directory path", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect directory path", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_009(self): """ Attempt to check collect indicator collect indicator file that does not exist, custom name, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath([" you collect here ", ]) workingDir = "/tmp" collectIndicator = self.buildPath([" you collect here ", NONEXISTENT_FILE, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(False, result) def testCheckCollectIndicator_010(self): """ Attempt to check collect indicator collect indicator file that does exist. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(True, result) def testCheckCollectIndicator_011(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect", "whatever", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator(collectIndicator="whatever") self.failUnlessEqual(True, result) def testCheckCollectIndicator_012(self): """ Attempt to check collect indicator collect indicator file that does exist, where the collect directory contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["collect NOT", ]) workingDir = "/tmp" collectIndicator = self.buildPath(["collect NOT", DEF_COLLECT_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator() self.failUnlessEqual(True, result) def testCheckCollectIndicator_013(self): """ Attempt to check collect indicator collect indicator file that does exist, custom name, where the collect directory and indicator file contain spaces. 
""" name = REMOTE_HOST collectDir = self.buildPath([" from here collect!", ]) workingDir = "/tmp" collectIndicator = self.buildPath([" from here collect!", "whatever, dude", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) open(collectIndicator, "w").write("") # touch the file self.failUnless(os.path.exists(collectIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) result = peer.checkCollectIndicator(collectIndicator="whatever, dude") self.failUnlessEqual(True, result) ############################# # Test writeStageIndicator() ############################# def testWriteStageIndicator_001(self): """ Attempt to write stage indicator with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) remoteUser = getLogin() peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_002(self): """ Attempt to write stage indicator with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = NONEXISTENT_USER os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_003(self): """ Attempt to write stage indicator with invalid rcp command. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) def testWriteStageIndicator_004(self): """ Attempt to write stage indicator with non-existent collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" remoteUser = getLogin() self.failUnless(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises(IOError, peer.writeStageIndicator) def testWriteStageIndicator_005(self): """ Attempt to write stage indicator with non-writable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) os.chmod(collectDir, 0400) # read-only for user peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.writeStageIndicator) self.failUnless(not os.path.exists(stageIndicator)) os.chmod(collectDir, 0777) # so we can remove it safely def testWriteStageIndicator_006(self): """ Attempt to write stage indicator in a valid directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_007(self): """ Attempt to write stage indicator in a valid directory, custom name. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect", "newname", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator(stageIndicator="newname") self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_008(self): """ Attempt to write stage indicator in a valid directory that contains spaces. """ name = REMOTE_HOST collectDir = self.buildPath(["with spaces collect", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["with spaces collect", DEF_STAGE_INDICATOR, ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator() self.failUnless(os.path.exists(stageIndicator)) def testWriteStageIndicator_009(self): """ Attempt to write stage indicator in a valid directory, custom name, where the collect directory and the custom name contain spaces. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect, soon", ]) workingDir = "/tmp" stageIndicator = self.buildPath(["collect, soon", "new name with spaces", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(stageIndicator)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) peer.writeStageIndicator(stageIndicator="new name with spaces") self.failUnless(os.path.exists(stageIndicator)) ################### # Test stagePeer() ################### def testStagePeer_001(self): """ Attempt to stage files with invalid hostname. """ name = NONEXISTENT_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_002(self): """ Attempt to stage files with invalid remote user. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = NONEXISTENT_USER os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_003(self): """ Attempt to stage files with invalid rcp command. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() rcpCommand = NONEXISTENT_CMD os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser, rcpCommand) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_004(self): """ Attempt to stage files with non-existent collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(not os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) def testStagePeer_005(self): """ Attempt to stage files with non-readable collect directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(collectDir, 0200) # user can't read his own directory peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0777) # so we can remove it safely def testStagePeer_006(self): """ Attempt to stage files with non-absolute target directory. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = "non/absolute/target" remoteUser = getLogin() self.failUnless(not os.path.exists(collectDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_007(self): """ Attempt to stage files with non-existent target directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(not os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises(ValueError, peer.stagePeer, targetDir=targetDir) def testStagePeer_008(self): """ Attempt to stage files with non-writable target directory. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) os.chmod(targetDir, 0400) # read-only for user peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) os.chmod(collectDir, 0777) # so we can remove it safely self.failUnlessEqual(0, len(os.listdir(targetDir))) def testStagePeer_009(self): """ Attempt to stage files with empty collect directory. @note: This test assumes that scp returns an error if the directory is empty. 
""" name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_010(self): """ Attempt to stage files with empty collect directory, with a target directory that contains spaces. @note: This test assumes that scp returns an error if the directory is empty. """ name = REMOTE_HOST collectDir = self.buildPath(["collect", ]) workingDir = "/tmp" targetDir = self.buildPath(["target DIR", ]) remoteUser = getLogin() os.mkdir(collectDir) os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) peer = RemotePeer(name, collectDir, workingDir, remoteUser) self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir) stagedFiles = os.listdir(targetDir) self.failUnlessEqual([], stagedFiles) def testStagePeer_011(self): """ Attempt to stage files with non-empty collect directory. 
""" self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["target", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_012(self): """ Attempt to stage files with non-empty collect directory, with a target directory that contains spaces. """ self.extractTar("tree1") name = REMOTE_HOST collectDir = self.buildPath(["tree1", ]) workingDir = "/tmp" targetDir = self.buildPath(["write the target here, now!", ]) remoteUser = getLogin() os.mkdir(targetDir) self.failUnless(os.path.exists(collectDir)) self.failUnless(os.path.exists(targetDir)) self.failUnlessEqual(0, len(os.listdir(targetDir))) peer = RemotePeer(name, collectDir, workingDir, remoteUser) count = peer.stagePeer(targetDir=targetDir) self.failUnlessEqual(7, count) stagedFiles = os.listdir(targetDir) self.failUnlessEqual(7, len(stagedFiles)) self.failUnless("file001" in stagedFiles) self.failUnless("file002" in stagedFiles) self.failUnless("file003" in stagedFiles) self.failUnless("file004" in stagedFiles) self.failUnless("file005" in stagedFiles) self.failUnless("file006" in stagedFiles) self.failUnless("file007" in stagedFiles) def testStagePeer_013(self): """ Attempt to stage files with non-empty collect directory containing links and directories. 
      @note: We assume that scp copies the files even though it returns an
      error due to directories.
      """
      self.extractTar("tree9")
      name = REMOTE_HOST
      collectDir = self.buildPath(["tree9", ])
      workingDir = "/tmp"
      targetDir = self.buildPath(["target", ])
      remoteUser = getLogin()
      os.mkdir(targetDir)
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(os.path.exists(targetDir))
      self.failUnlessEqual(0, len(os.listdir(targetDir)))
      peer = RemotePeer(name, collectDir, workingDir, remoteUser)
      self.failUnlessRaises((IOError, OSError), peer.stagePeer, targetDir=targetDir)
      stagedFiles = os.listdir(targetDir)
      self.failUnlessEqual(2, len(stagedFiles))
      self.failUnless("file001" in stagedFiles)
      self.failUnless("file002" in stagedFiles)

   def testStagePeer_014(self):
      """
      Attempt to stage files with non-empty collect directory and attempt
      to set valid permissions.
      """
      self.extractTar("tree1")
      name = REMOTE_HOST
      collectDir = self.buildPath(["tree1", ])
      workingDir = "/tmp"
      targetDir = self.buildPath(["target", ])
      remoteUser = getLogin()
      os.mkdir(targetDir)
      self.failUnless(os.path.exists(collectDir))
      self.failUnless(os.path.exists(targetDir))
      self.failUnlessEqual(0, len(os.listdir(targetDir)))
      peer = RemotePeer(name, collectDir, workingDir, remoteUser)
      if getMaskAsMode() == 0400:
         permissions = 0642  # arbitrary, but different than umask would give
      else:
         permissions = 0400  # arbitrary
      count = peer.stagePeer(targetDir=targetDir, permissions=permissions)
      self.failUnlessEqual(7, count)
      stagedFiles = os.listdir(targetDir)
      self.failUnlessEqual(7, len(stagedFiles))
      self.failUnless("file001" in stagedFiles)
      self.failUnless("file002" in stagedFiles)
      self.failUnless("file003" in stagedFiles)
      self.failUnless("file004" in stagedFiles)
      self.failUnless("file005" in stagedFiles)
      self.failUnless("file006" in stagedFiles)
      self.failUnless("file007" in stagedFiles)
      self.failUnlessEqual(permissions, self.getFileMode(["target", "file001", ]))
      self.failUnlessEqual(permissions, self.getFileMode(["target",
"file002", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file003", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file004", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file005", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file006", ])) self.failUnlessEqual(permissions, self.getFileMode(["target", "file007", ])) ############################## # Test executeRemoteCommand() ############################## def testExecuteRemoteCommand(self): """ Test that a simple remote command succeeds. """ target = self.buildPath(["test.txt", ]) name = REMOTE_HOST remoteUser = getLogin() command = "touch %s" % target self.failIf(os.path.exists(target)) peer = RemotePeer(name=name, remoteUser=remoteUser) peer.executeRemoteCommand(command) self.failUnless(os.path.exists(target)) ############################ # Test _buildCbackCommand() ############################ def testBuildCbackCommand_001(self): """ Test with None for cbackCommand and action, False for fullBackup. """ self.failUnlessRaises(ValueError, RemotePeer._buildCbackCommand, None, None, False) def testBuildCbackCommand_002(self): """ Test with None for cbackCommand, "collect" for action, False for fullBackup. """ result = RemotePeer._buildCbackCommand(None, "collect", False) self.failUnlessEqual("/usr/bin/cback collect", result) def testBuildCbackCommand_003(self): """ Test with "cback" for cbackCommand, "collect" for action, False for fullBackup. """ result = RemotePeer._buildCbackCommand("cback", "collect", False) self.failUnlessEqual("cback collect", result) def testBuildCbackCommand_004(self): """ Test with "cback" for cbackCommand, "collect" for action, True for fullBackup. 
""" result = RemotePeer._buildCbackCommand("cback", "collect", True) self.failUnlessEqual("cback --full collect", result) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): return unittest.TestSuite(( unittest.makeSuite(TestLocalPeer, 'test'), unittest.makeSuite(TestRemotePeer, 'test'), )) else: return unittest.TestSuite(( unittest.makeSuite(TestLocalPeer, 'test'), unittest.makeSuite(TestRemotePeer, 'testBasic'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/synctests.py0000664000175000017500000042477412560016766022035 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests Amazon S3 sync tool functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/tools/amazons3.py.

Code Coverage
=============

   This module contains individual tests for many of the public functions
   and classes implemented in tools/amazons3.py.

   Where possible, we test functions that print output by passing a custom
   file descriptor.  Sometimes, we only ensure that a function or method
   runs without failure, and we don't validate what its result is or what
   it prints out.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece
   of functionality, and I prefer to avoid using overly descriptive (read:
   long) test names, as well.  Instead, I use lots of very small tests that
   each validate one specific thing.  These small tests are then named with
   an index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a SYNCTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as
   for some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest
from getopt import GetoptError

from CedarBackup2.testutil import failUnlessAssignRaises, captureOutput
from CedarBackup2.tools.amazons3 import _usage, _version
from CedarBackup2.tools.amazons3 import Options


#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass


   ########################
   # Test simple functions
   ########################

   def testSimpleFuncs_001(self):
      """
      Test that the _usage() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_usage)

   def testSimpleFuncs_002(self):
      """
      Test that the _version() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_version)


####################
# TestOptions class
####################

class TestOptions(unittest.TestCase):

   """Tests for the Options class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass


   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)


   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors
      (i.e. bad variable names).
""" obj = Options() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no arguments. """ options = Options() self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_002(self): """ Test constructor with validate=False, no other arguments. """ options = Options(validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_003(self): """ Test constructor with argumentList=[], validate=False. 
""" options = Options(argumentList=[], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_004(self): """ Test constructor with argumentString="", validate=False. """ options = Options(argumentString="", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_005(self): """ Test constructor with argumentList=["--help", ], validate=False. 
""" options = Options(argumentList=["--help", ], validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_006(self): """ Test constructor with argumentString="--help", validate=False. """ options = Options(argumentString="--help", validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_007(self): """ Test constructor with argumentList=["-h", ], validate=False. 
""" options = Options(argumentList=["-h", ], validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_008(self): """ Test constructor with argumentString="-h", validate=False. """ options = Options(argumentString="-h", validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_009(self): """ Test constructor with argumentList=["--version", ], validate=False. 
""" options = Options(argumentList=["--version", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_010(self): """ Test constructor with argumentString="--version", validate=False. """ options = Options(argumentString="--version", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_011(self): """ Test constructor with argumentList=["-V", ], validate=False. 
""" options = Options(argumentList=["-V", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_012(self): """ Test constructor with argumentString="-V", validate=False. """ options = Options(argumentString="-V", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_013(self): """ Test constructor with argumentList=["--verbose", ], validate=False. 
""" options = Options(argumentList=["--verbose", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_014(self): """ Test constructor with argumentString="--verbose", validate=False. """ options = Options(argumentString="--verbose", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_015(self): """ Test constructor with argumentList=["-b", ], validate=False. 
""" options = Options(argumentList=["-b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_016(self): """ Test constructor with argumentString="-b", validate=False. """ options = Options(argumentString="-b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_017(self): """ Test constructor with argumentList=["--quiet", ], validate=False. 
""" options = Options(argumentList=["--quiet", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_018(self): """ Test constructor with argumentString="--quiet", validate=False. """ options = Options(argumentString="--quiet", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_019(self): """ Test constructor with argumentList=["-q", ], validate=False. 
""" options = Options(argumentList=["-q", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_020(self): """ Test constructor with argumentString="-q", validate=False. """ options = Options(argumentString="-q", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_021(self): """ Test constructor with argumentList=["--logfile", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--logfile", ], validate=False) def testConstructor_022(self): """ Test constructor with argumentString="--logfile", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--logfile", validate=False) def testConstructor_023(self): """ Test constructor with argumentList=["-l", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-l", ], validate=False) def testConstructor_024(self): """ Test constructor with argumentString="-l", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-l", validate=False) def testConstructor_025(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=False. """ options = Options(argumentList=["--logfile", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_026(self): """ Test constructor with argumentString="--logfile something", validate=False. 
""" options = Options(argumentString="--logfile something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_027(self): """ Test constructor with argumentList=["-l", "something", ], validate=False. """ options = Options(argumentList=["-l", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_028(self): """ Test constructor with argumentString="-l something", validate=False. 
""" options = Options(argumentString="-l something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_029(self): """ Test constructor with argumentList=["--owner", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--owner", ], validate=False) def testConstructor_030(self): """ Test constructor with argumentString="--owner", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="--owner", validate=False) def testConstructor_040(self): """ Test constructor with argumentList=["-o", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-o", ], validate=False) def testConstructor_041(self): """ Test constructor with argumentString="-o", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-o", validate=False) def testConstructor_042(self): """ Test constructor with argumentList=["--owner", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=False) def testConstructor_043(self): """ Test constructor with argumentString="--owner something", validate=False. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--owner something", validate=False) def testConstructor_044(self): """ Test constructor with argumentList=["-o", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "something", ], validate=False) def testConstructor_045(self): """ Test constructor with argumentString="-o something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="-o something", validate=False) def testConstructor_046(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=False. """ options = Options(argumentList=["--owner", "a:b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_047(self): """ Test constructor with argumentString="--owner a:b", validate=False. 
""" options = Options(argumentString="--owner a:b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_048(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=False. """ options = Options(argumentList=["-o", "a:b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_049(self): """ Test constructor with argumentString="-o a:b", validate=False. 
""" options = Options(argumentString="-o a:b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_050(self): """ Test constructor with argumentList=["--mode", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--mode", ], validate=False) def testConstructor_051(self): """ Test constructor with argumentString="--mode", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="--mode", validate=False) def testConstructor_052(self): """ Test constructor with argumentList=["-m", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-m", ], validate=False) def testConstructor_053(self): """ Test constructor with argumentString="-m", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-m", validate=False) def testConstructor_054(self): """ Test constructor with argumentList=["--mode", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=False) def testConstructor_055(self): """ Test constructor with argumentString="--mode something", validate=False. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--mode something", validate=False) def testConstructor_056(self): """ Test constructor with argumentList=["-m", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["-m", "something", ], validate=False) def testConstructor_057(self): """ Test constructor with argumentString="-m something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="-m something", validate=False) def testConstructor_058(self): """ Test constructor with argumentList=["--mode", "631", ], validate=False. """ options = Options(argumentList=["--mode", "631", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_059(self): """ Test constructor with argumentString="--mode 631", validate=False. 
""" options = Options(argumentString="--mode 631", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_060(self): """ Test constructor with argumentList=["-m", "631", ], validate=False. """ options = Options(argumentList=["-m", "631", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_061(self): """ Test constructor with argumentString="-m 631", validate=False. 
""" options = Options(argumentString="-m 631", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_062(self): """ Test constructor with argumentList=["--output", ], validate=False. """ options = Options(argumentList=["--output", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_063(self): """ Test constructor with argumentString="--output", validate=False. 
""" options = Options(argumentString="--output", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_064(self): """ Test constructor with argumentList=["-O", ], validate=False. """ options = Options(argumentList=["-O", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_065(self): """ Test constructor with argumentString="-O", validate=False. 
""" options = Options(argumentString="-O", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_066(self): """ Test constructor with argumentList=["--debug", ], validate=False. """ options = Options(argumentList=["--debug", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_067(self): """ Test constructor with argumentString="--debug", validate=False. 
""" options = Options(argumentString="--debug", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_068(self): """ Test constructor with argumentList=["-d", ], validate=False. """ options = Options(argumentList=["-d", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_069(self): """ Test constructor with argumentString="-d", validate=False. 
""" options = Options(argumentString="-d", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_070(self): """ Test constructor with argumentList=["--stack", ], validate=False. """ options = Options(argumentList=["--stack", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_071(self): """ Test constructor with argumentString="--stack", validate=False. 
""" options = Options(argumentString="--stack", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_072(self): """ Test constructor with argumentList=["-s", ], validate=False. """ options = Options(argumentList=["-s", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_073(self): """ Test constructor with argumentString="-s", validate=False. 
""" options = Options(argumentString="-s", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_074(self): """ Test constructor with argumentList=["--diagnostics", ], validate=False. """ options = Options(argumentList=["--diagnostics", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_075(self): """ Test constructor with argumentString="--diagnostics", validate=False. 
""" options = Options(argumentString="--diagnostics", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_076(self): """ Test constructor with argumentList=["-D", ], validate=False. """ options = Options(argumentList=["-D", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_077(self): """ Test constructor with argumentString="-D", validate=False. 
""" options = Options(argumentString="-D", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_078(self): """ Test constructor with argumentList=["--verifyOnly", ], validate=False. """ options = Options(argumentList=["--verifyOnly", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(True, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_079(self): """ Test constructor with argumentString="--verifyOnly", validate=False. 
""" options = Options(argumentString="--verifyOnly", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(True, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_080(self): """ Test constructor with argumentList=["-v", ], validate=False. """ options = Options(argumentList=["-v", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(True, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_081(self): """ Test constructor with argumentString="-v", validate=False. 
""" options = Options(argumentString="-v", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(True, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_082(self): """ Test constructor with argumentList=["--ignoreWarnings", ], validate=False. """ options = Options(argumentList=["--ignoreWarnings", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(True, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_083(self): """ Test constructor with argumentString="--ignoreWarnings", validate=False. 
""" options = Options(argumentString="--ignoreWarnings", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(True, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_084(self): """ Test constructor with argumentList=["-w", ], validate=False. """ options = Options(argumentList=["-w", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(True, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_085(self): """ Test constructor with argumentString="-w", validate=False. 
""" options = Options(argumentString="-w", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(True, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_086(self): """ Test constructor with argumentList=["source", "bucket", ], validate=False. """ options = Options(argumentList=[ "source", "bucket", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_087(self): """ Test constructor with argumentString="source bucket", validate=False. 
""" options = Options(argumentString="source bucket", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_088(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=False. """ options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_089(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 source bucket", validate=False. 
""" options = Options(argumentString="-d --verbose -O --mode 600 source bucket", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_090(self): """ Test constructor with argumentList=[], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=[], validate=True) def testConstructor_091(self): """ Test constructor with argumentString="", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="", validate=True) def testConstructor_092(self): """ Test constructor with argumentList=["--help", ], validate=True. 
""" options = Options(argumentList=["--help", ], validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_093(self): """ Test constructor with argumentString="--help", validate=True. """ options = Options(argumentString="--help", validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_094(self): """ Test constructor with argumentList=["-h", ], validate=True. 
""" options = Options(argumentList=["-h", ], validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_095(self): """ Test constructor with argumentString="-h", validate=True. """ options = Options(argumentString="-h", validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_096(self): """ Test constructor with argumentList=["--version", ], validate=True. 
""" options = Options(argumentList=["--version", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_097(self): """ Test constructor with argumentString="--version", validate=True. """ options = Options(argumentString="--version", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_098(self): """ Test constructor with argumentList=["-V", ], validate=True. 
""" options = Options(argumentList=["-V", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_099(self): """ Test constructor with argumentString="-V", validate=True. """ options = Options(argumentString="-V", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_100(self): """ Test constructor with argumentList=["--verbose", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--verbose", ], validate=True) def testConstructor_101(self): """ Test constructor with argumentString="--verbose", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--verbose", validate=True) def testConstructor_102(self): """ Test constructor with argumentList=["-b", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-b", ], validate=True) def testConstructor_103(self): """ Test constructor with argumentString="-b", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-b", validate=True) def testConstructor_104(self): """ Test constructor with argumentList=["--quiet", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--quiet", ], validate=True) def testConstructor_105(self): """ Test constructor with argumentString="--quiet", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--quiet", validate=True) def testConstructor_106(self): """ Test constructor with argumentList=["-q", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-q", ], validate=True) def testConstructor_107(self): """ Test constructor with argumentString="-q", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-q", validate=True) def testConstructor_108(self): """ Test constructor with argumentList=["--logfile", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--logfile", ], validate=True) def testConstructor_109(self): """ Test constructor with argumentString="--logfile", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--logfile", validate=True) def testConstructor_110(self): """ Test constructor with argumentList=["-l", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-l", ], validate=True) def testConstructor_111(self): """ Test constructor with argumentString="-l", validate=True. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="-l", validate=True) def testConstructor_112(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--logfile", "something", ], validate=True) def testConstructor_113(self): """ Test constructor with argumentString="--logfile something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--logfile something", validate=True) def testConstructor_114(self): """ Test constructor with argumentList=["-l", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-l", "something", ], validate=True) def testConstructor_115(self): """ Test constructor with argumentString="-l something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-l something", validate=True) def testConstructor_116(self): """ Test constructor with argumentList=["--owner", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--owner", ], validate=True) def testConstructor_117(self): """ Test constructor with argumentString="--owner", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--owner", validate=True) def testConstructor_118(self): """ Test constructor with argumentList=["-o", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-o", ], validate=True) def testConstructor_119(self): """ Test constructor with argumentString="-o", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="-o", validate=True) def testConstructor_120(self): """ Test constructor with argumentList=["--owner", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=True) def testConstructor_121(self): """ Test constructor with argumentString="--owner something", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--owner something", validate=True) def testConstructor_122(self): """ Test constructor with argumentList=["-o", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "something", ], validate=True) def testConstructor_123(self): """ Test constructor with argumentString="-o something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-o something", validate=True) def testConstructor_124(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "a:b", ], validate=True) def testConstructor_125(self): """ Test constructor with argumentString="--owner a:b", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--owner a:b", validate=True) def testConstructor_126(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "a:b", ], validate=True) def testConstructor_127(self): """ Test constructor with argumentString="-o a:b", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-o a:b", validate=True) def testConstructor_128(self): """ Test constructor with argumentList=["--mode", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--mode", ], validate=True) def testConstructor_129(self): """ Test constructor with argumentString="--mode", validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentString="--mode", validate=True) def testConstructor_130(self): """ Test constructor with argumentList=["-m", ], validate=True. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-m", ], validate=True) def testConstructor_131(self): """ Test constructor with argumentString="-m", validate=True. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="-m", validate=True) def testConstructor_132(self): """ Test constructor with argumentList=["--mode", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=True) def testConstructor_133(self): """ Test constructor with argumentString="--mode something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--mode something", validate=True) def testConstructor_134(self): """ Test constructor with argumentList=["-m", "something", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-m", "something", ], validate=True) def testConstructor_135(self): """ Test constructor with argumentString="-m something", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-m something", validate=True) def testConstructor_136(self): """ Test constructor with argumentList=["--mode", "631", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "631", ], validate=True) def testConstructor_137(self): """ Test constructor with argumentString="--mode 631", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--mode 631", validate=True) def testConstructor_138(self): """ Test constructor with argumentList=["-m", "631", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-m", "631", ], validate=True) def testConstructor_139(self): """ Test constructor with argumentString="-m 631", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-m 631", validate=True) def testConstructor_140(self): """ Test constructor with argumentList=["--output", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--output", ], validate=True) def testConstructor_141(self): """ Test constructor with argumentString="--output", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--output", validate=True) def testConstructor_142(self): """ Test constructor with argumentList=["-O", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-O", ], validate=True) def testConstructor_143(self): """ Test constructor with argumentString="-O", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-O", validate=True) def testConstructor_144(self): """ Test constructor with argumentList=["--debug", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--debug", ], validate=True) def testConstructor_145(self): """ Test constructor with argumentString="--debug", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--debug", validate=True) def testConstructor_146(self): """ Test constructor with argumentList=["-d", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-d", ], validate=True) def testConstructor_147(self): """ Test constructor with argumentString="-d", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-d", validate=True) def testConstructor_148(self): """ Test constructor with argumentList=["--stack", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--stack", ], validate=True) def testConstructor_149(self): """ Test constructor with argumentString="--stack", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--stack", validate=True) def testConstructor_150(self): """ Test constructor with argumentList=["-s", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-s", ], validate=True) def testConstructor_151(self): """ Test constructor with argumentString="-s", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="-s", validate=True) def testConstructor_152(self): """ Test constructor with argumentList=["--diagnostics", ], validate=True. """ options = Options(argumentList=["--diagnostics", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_153(self): """ Test constructor with argumentString="--diagnostics", validate=True. """ options = Options(argumentString="--diagnostics", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_154(self): """ Test constructor with argumentList=["-D", ], validate=True. 
""" options = Options(argumentList=["-D", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_155(self): """ Test constructor with argumentString="-D", validate=True. """ options = Options(argumentString="-D", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual(None, options.sourceDir) self.failUnlessEqual(None, options.s3BucketUrl) def testConstructor_156(self): """ Test constructor with argumentList=["--verifyOnly", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--verifyOnly", ], validate=True) def testConstructor_157(self): """ Test constructor with argumentString="--verifyOnly", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--verifyOnly", validate=True) def testConstructor_158(self): """ Test constructor with argumentList=["-v", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-v", ], validate=True) def testConstructor_159(self): """ Test constructor with argumentString="-v", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-v", validate=True) def testConstructor_160(self): """ Test constructor with argumentList=["--ignoreWarnings", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--ignoreWarnings", ], validate=True) def testConstructor_161(self): """ Test constructor with argumentString="--ignoreWarnings", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--ignoreWarnings", validate=True) def testConstructor_162(self): """ Test constructor with argumentList=["-w", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-w", ], validate=True) def testConstructor_163(self): """ Test constructor with argumentString="-w", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-w", validate=True) def testConstructor_164(self): """ Test constructor with argumentList=["source", "bucket", ], validate=True. 
""" options = Options(argumentList=["source", "bucket", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_165(self): """ Test constructor with argumentString="source bucket", validate=True. """ options = Options(argumentString="source bucket", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_166(self): """ Test constructor with argumentList=["source", "bucket", ], validate=True. 
""" options = Options(argumentList=["source", "bucket", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_167(self): """ Test constructor with argumentString="source bucket", validate=True. """ options = Options(argumentString="source bucket", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_168(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=True. 
""" options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "source", "bucket", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) def testConstructor_169(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 source bucket", validate=True. """ options = Options(argumentString="-d --verbose -O --mode 600 source bucket", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.verifyOnly) self.failUnlessEqual(False, options.ignoreWarnings) self.failUnlessEqual("source", options.sourceDir) self.failUnlessEqual("bucket", options.s3BucketUrl) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes at defaults. 
""" options1 = Options() options2 = Options() self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes filled in and same. """ options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes filled in, help different. 
""" options1 = Options() options2 = Options() options1.help = False options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes filled in, version different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = False options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_005(self): """ Test comparison of two identical objects, all attributes filled in, verbose different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = False options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_006(self): """ Test comparison of two identical objects, all attributes filled in, quiet different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = False options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_007(self): """ Test comparison of two identical objects, all attributes filled in, logfile different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = None options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_008(self): """ Test comparison of two identical objects, all attributes filled in, owner different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = None options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_009(self): """ Test comparison of two identical objects, all attributes filled in, mode different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = None options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_010(self): """ Test comparison of two identical objects, all attributes filled in, output different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = False options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_011(self): """ Test comparison of two identical objects, all attributes filled in, debug different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = False options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_012(self): """ Test comparison of two identical objects, all attributes filled in, stacktrace different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_013(self): """ Test comparison of two identical objects, all attributes filled in, diagnostics different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = False options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_014(self): """ Test comparison of two identical objects, all attributes filled in, verifyOnly different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = False options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_015(self): """ Test comparison of two identical objects, all attributes filled in, sourceDir different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = None options1.s3BucketUrl = "bucket" options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_016(self): """ Test comparison of two identical objects, all attributes filled in, s3BucketUrl different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = True options1.diagnostics = True options1.verifyOnly = True options1.ignoreWarnings = True options1.sourceDir = "source" options1.s3BucketUrl = None options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = "631" options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = True options2.verifyOnly = True options2.ignoreWarnings = True options2.sourceDir = "source" options2.s3BucketUrl = "bucket" self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) ########################### # Test buildArgumentList() ########################### def testBuildArgumentList_001(self): """Test with no values set, validate=False.""" options = Options() argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual([], argumentList) def testBuildArgumentList_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--help", ], argumentList) def testBuildArgumentList_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--version", ], argumentList) def testBuildArgumentList_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True argumentList = 
      options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--verbose", ], argumentList)

   def testBuildArgumentList_005(self):
      """Test with quiet set, validate=False."""
      options = Options()
      options.quiet = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--quiet", ], argumentList)

   def testBuildArgumentList_006(self):
      """Test with logfile set, validate=False."""
      options = Options()
      options.logfile = "bogus"
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--logfile", "bogus", ], argumentList)

   def testBuildArgumentList_007(self):
      """Test with owner set, validate=False."""
      options = Options()
      options.owner = ("ken", "group")
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--owner", "ken:group", ], argumentList)

   def testBuildArgumentList_008(self):
      """Test with mode set, validate=False."""
      options = Options()
      options.mode = 0644
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--mode", "644", ], argumentList)

   def testBuildArgumentList_009(self):
      """Test with output set, validate=False."""
      options = Options()
      options.output = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--output", ], argumentList)

   def testBuildArgumentList_010(self):
      """Test with debug set, validate=False."""
      options = Options()
      options.debug = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--debug", ], argumentList)

   def testBuildArgumentList_011(self):
      """Test with stacktrace set, validate=False."""
      options = Options()
      options.stacktrace = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--stack", ], argumentList)

   def testBuildArgumentList_012(self):
      """Test with diagnostics set, validate=False."""
      options = Options()
      options.diagnostics = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--diagnostics", ], argumentList)

   def \
   testBuildArgumentList_013(self):
      """Test with verifyOnly set, validate=False."""
      options = Options()
      options.verifyOnly = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--verifyOnly", ], argumentList)

   def testBuildArgumentList_014(self):
      """Test with ignoreWarnings set, validate=False."""
      options = Options()
      options.ignoreWarnings = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--ignoreWarnings", ], argumentList)

   def testBuildArgumentList_015(self):
      """Test with valid source and target, validate=False."""
      options = Options()
      options.sourceDir = "source"
      options.s3BucketUrl = "bucket"
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["source", "bucket", ], argumentList)

   def testBuildArgumentList_016(self):
      """Test with all values set, actions containing one item, validate=False."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.verifyOnly = True
      options.ignoreWarnings = True
      options.sourceDir = "source"
      options.s3BucketUrl = "bucket"
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "--verifyOnly", "--ignoreWarnings", "source", "bucket", ], argumentList)

   def testBuildArgumentList_017(self):
      """Test with no values set, validate=True."""
      options = Options()
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_018(self):
      """Test with help set, validate=True."""
      options = Options()
      options.help = True
      argumentList = options.buildArgumentList(validate=True)
      self.failUnlessEqual(["--help", ],
      argumentList)

   def testBuildArgumentList_019(self):
      """Test with version set, validate=True."""
      options = Options()
      options.version = True
      argumentList = options.buildArgumentList(validate=True)
      self.failUnlessEqual(["--version", ], argumentList)

   def testBuildArgumentList_020(self):
      """Test with verbose set, validate=True."""
      options = Options()
      options.verbose = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_021(self):
      """Test with quiet set, validate=True."""
      options = Options()
      options.quiet = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_022(self):
      """Test with logfile set, validate=True."""
      options = Options()
      options.logfile = "bogus"
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_023(self):
      """Test with owner set, validate=True."""
      options = Options()
      options.owner = ("ken", "group")
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_024(self):
      """Test with mode set, validate=True."""
      options = Options()
      options.mode = 0644
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_025(self):
      """Test with output set, validate=True."""
      options = Options()
      options.output = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_026(self):
      """Test with debug set, validate=True."""
      options = Options()
      options.debug = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_027(self):
      """Test with stacktrace set, validate=True."""
      options = Options()
      options.stacktrace = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_028(self):
      """Test with diagnostics set, validate=True."""
      options = Options()
      options.diagnostics = True
      argumentList = \
      options.buildArgumentList(validate=True)
      self.failUnlessEqual(["--diagnostics", ], argumentList)

   def testBuildArgumentList_029(self):
      """Test with verifyOnly set, validate=True."""
      options = Options()
      options.verifyOnly = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_030(self):
      """Test with ignoreWarnings set, validate=True."""
      options = Options()
      options.ignoreWarnings = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_031(self):
      """Test with valid source and target, validate=True."""
      options = Options()
      options.sourceDir = "source"
      options.s3BucketUrl = "bucket"
      argumentList = options.buildArgumentList(validate=True)
      self.failUnlessEqual(["source", "bucket", ], argumentList)

   def testBuildArgumentList_032(self):
      """Test with all values set (except managed ones), actions containing one item, validate=True."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.verifyOnly = True
      options.ignoreWarnings = True
      options.sourceDir = "source"
      options.s3BucketUrl = "bucket"
      argumentList = options.buildArgumentList(validate=True)
      self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "--verifyOnly", "--ignoreWarnings", "source", "bucket", ], argumentList)

   #############################
   # Test buildArgumentString()
   #############################

   def testBuildArgumentString_001(self):
      """Test with no values set, validate=False."""
      options = Options()
      argumentString = options.buildArgumentString(validate=False)
      self.failUnlessEqual("", argumentString)

   def testBuildArgumentString_002(self):
      """Test with help
set, validate=False.""" options = Options() options.help = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--help ", argumentString) def testBuildArgumentString_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--version ", argumentString) def testBuildArgumentString_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--verbose ", argumentString) def testBuildArgumentString_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--quiet ", argumentString) def testBuildArgumentString_006(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--logfile "bogus" ', argumentString) def testBuildArgumentString_007(self): """Test with owner set, validate=False.""" options = Options() options.owner = ("ken", "group") argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--owner "ken:group" ', argumentString) def testBuildArgumentString_008(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0644 argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--mode 644 ', argumentString) def testBuildArgumentString_009(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--output ", argumentString) def testBuildArgumentString_010(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentString 
= options.buildArgumentString(validate=False) self.failUnlessEqual("--debug ", argumentString) def testBuildArgumentString_011(self): """Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--stack ", argumentString) def testBuildArgumentString_012(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--diagnostics ", argumentString) def testBuildArgumentString_013(self): """Test with verifyOnly set, validate=False.""" options = Options() options.verifyOnly = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--verifyOnly ", argumentString) def testBuildArgumentString_014(self): """Test with ignoreWarnings set, validate=False.""" options = Options() options.ignoreWarnings = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--ignoreWarnings ", argumentString) def testBuildArgumentString_015(self): """Test with valid source and target, validate=False.""" options = Options() options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('"source" "bucket" ', argumentString) def testBuildArgumentString_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.verifyOnly = True options.ignoreWarnings = True options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--help 
--version --verbose --quiet --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics --verifyOnly --ignoreWarnings "source" "bucket" ', argumentString) def testBuildArgumentString_017(self): """Test with no values set, validate=True.""" options = Options() self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_018(self): """Test with help set, validate=True.""" options = Options() options.help = True argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual("--help ", argumentString) def testBuildArgumentString_019(self): """Test with version set, validate=True.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual("--version ", argumentString) def testBuildArgumentString_020(self): """Test with verbose set, validate=True.""" options = Options() options.verbose = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_021(self): """Test with quiet set, validate=True.""" options = Options() options.quiet = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_022(self): """Test with logfile set, validate=True.""" options = Options() options.logfile = "bogus" self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_023(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_024(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0644 self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_025(self): """Test with output set, validate=True.""" options = Options() options.output = True 
self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_026(self): """Test with debug set, validate=True.""" options = Options() options.debug = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_027(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_028(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual("--diagnostics ", argumentString) def testBuildArgumentString_029(self): """Test with verifyOnly set, validate=True.""" options = Options() options.verifyOnly = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_030(self): """Test with ignoreWarnings set, validate=True.""" options = Options() options.ignoreWarnings = True self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_031(self): """Test with valid source and target, validate=True.""" options = Options() options.sourceDir = "source" options.s3BucketUrl = "bucket" argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual('"source" "bucket" ', argumentString) def testBuildArgumentString_032(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.verifyOnly = True options.ignoreWarnings = True options.sourceDir = "source" options.s3BucketUrl = "bucket" 
argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual('--help --version --verbose --quiet --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics --verifyOnly --ignoreWarnings "source" "bucket" ', argumentString) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestOptions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/postgresqltests.py #!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2006,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests PostgreSQL extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/postgresql.py. Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in extend/postgresql.py. There are also tests for several of the private methods. Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to PostgreSQL, since the actual dump would need to have access to a real database. Because of this, there aren't any tests below that actually talk to a database. As a compromise, I test some of the private methods in the implementation. Normally, I don't like to test private methods, but in this case, testing the private methods will help give us some reasonable confidence in the code even if we can't talk to a database. This isn't perfect, but it's better than nothing. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. 
Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extraction was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here! After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a POSTGRESQLTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup2.testutil import findResources, failUnlessAssignRaises from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.postgresql import LocalConfig, PostgresqlConfig ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "postgresql.conf.1", "postgresql.conf.2", "postgresql.conf.3", "postgresql.conf.4", "postgresql.conf.5", ] ####################################################################### # Test Case Classes ####################################################################### ############################# # TestPostgresqlConfig class ############################# class TestPostgresqlConfig(unittest.TestCase): """Tests for the PostgresqlConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PostgresqlConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.user) self.failUnlessEqual(None, postgresql.compressMode) self.failUnlessEqual(False, postgresql.all) self.failUnlessEqual(None, postgresql.databases) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, databases=None. """ postgresql = PostgresqlConfig("user", "none", False, None) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("none", postgresql.compressMode) self.failUnlessEqual(False, postgresql.all) self.failUnlessEqual(None, postgresql.databases) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no databases. """ postgresql = PostgresqlConfig("user", "none", True, []) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("none", postgresql.compressMode) self.failUnlessEqual(True, postgresql.all) self.failUnlessEqual([], postgresql.databases) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one database. """ postgresql = PostgresqlConfig("user", "gzip", True, [ "one", ]) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("gzip", postgresql.compressMode) self.failUnlessEqual(True, postgresql.all) self.failUnlessEqual([ "one", ], postgresql.databases) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple databases. """ postgresql = PostgresqlConfig("user", "bzip2", True, [ "one", "two", ]) self.failUnlessEqual("user", postgresql.user) self.failUnlessEqual("bzip2", postgresql.compressMode) self.failUnlessEqual(True, postgresql.all) self.failUnlessEqual([ "one", "two", ], postgresql.databases) def testConstructor_006(self): """ Test assignment of user attribute, None value. 
""" postgresql = PostgresqlConfig(user="user") self.failUnlessEqual("user", postgresql.user) postgresql.user = None self.failUnlessEqual(None, postgresql.user) def testConstructor_007(self): """ Test assignment of user attribute, valid value. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.user) postgresql.user = "user" self.failUnlessEqual("user", postgresql.user) def testConstructor_008(self): """ Test assignment of user attribute, invalid value (empty). """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.user) self.failUnlessAssignRaises(ValueError, postgresql, "user", "") self.failUnlessEqual(None, postgresql.user) def testConstructor_009(self): """ Test assignment of compressMode attribute, None value. """ postgresql = PostgresqlConfig(compressMode="none") self.failUnlessEqual("none", postgresql.compressMode) postgresql.compressMode = None self.failUnlessEqual(None, postgresql.compressMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, valid value. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.compressMode) postgresql.compressMode = "none" self.failUnlessEqual("none", postgresql.compressMode) postgresql.compressMode = "gzip" self.failUnlessEqual("gzip", postgresql.compressMode) postgresql.compressMode = "bzip2" self.failUnlessEqual("bzip2", postgresql.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, invalid value (empty). """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.compressMode) self.failUnlessAssignRaises(ValueError, postgresql, "compressMode", "") self.failUnlessEqual(None, postgresql.compressMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, invalid value (not in list). 
""" postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.compressMode) self.failUnlessAssignRaises(ValueError, postgresql, "compressMode", "bogus") self.failUnlessEqual(None, postgresql.compressMode) def testConstructor_013(self): """ Test assignment of all attribute, None value. """ postgresql = PostgresqlConfig(all=True) self.failUnlessEqual(True, postgresql.all) postgresql.all = None self.failUnlessEqual(False, postgresql.all) def testConstructor_014(self): """ Test assignment of all attribute, valid value (real boolean). """ postgresql = PostgresqlConfig() self.failUnlessEqual(False, postgresql.all) postgresql.all = True self.failUnlessEqual(True, postgresql.all) postgresql.all = False self.failUnlessEqual(False, postgresql.all) #pylint: disable=R0204 def testConstructor_015(self): """ Test assignment of all attribute, valid value (expression). """ postgresql = PostgresqlConfig() self.failUnlessEqual(False, postgresql.all) postgresql.all = 0 self.failUnlessEqual(False, postgresql.all) postgresql.all = [] self.failUnlessEqual(False, postgresql.all) postgresql.all = None self.failUnlessEqual(False, postgresql.all) postgresql.all = ['a'] self.failUnlessEqual(True, postgresql.all) postgresql.all = 3 self.failUnlessEqual(True, postgresql.all) def testConstructor_016(self): """ Test assignment of databases attribute, None value. """ postgresql = PostgresqlConfig(databases=[]) self.failUnlessEqual([], postgresql.databases) postgresql.databases = None self.failUnlessEqual(None, postgresql.databases) def testConstructor_017(self): """ Test assignment of databases attribute, [] value. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) postgresql.databases = [] self.failUnlessEqual([], postgresql.databases) def testConstructor_018(self): """ Test assignment of databases attribute, single valid entry. 
""" postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) postgresql.databases = ["/whatever", ] self.failUnlessEqual(["/whatever", ], postgresql.databases) postgresql.databases.append("/stuff") self.failUnlessEqual(["/whatever", "/stuff", ], postgresql.databases) def testConstructor_019(self): """ Test assignment of databases attribute, multiple valid entries. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) postgresql.databases = ["/whatever", "/stuff", ] self.failUnlessEqual(["/whatever", "/stuff", ], postgresql.databases) postgresql.databases.append("/etc/X11") self.failUnlessEqual(["/whatever", "/stuff", "/etc/X11", ], postgresql.databases) def testConstructor_020(self): """ Test assignment of databases attribute, single invalid entry (empty). """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) self.failUnlessAssignRaises(ValueError, postgresql, "databases", ["", ]) self.failUnlessEqual(None, postgresql.databases) def testConstructor_021(self): """ Test assignment of databases attribute, mixed valid and invalid entries. """ postgresql = PostgresqlConfig() self.failUnlessEqual(None, postgresql.databases) self.failUnlessAssignRaises(ValueError, postgresql, "databases", ["good", "", "alsogood", ]) self.failUnlessEqual(None, postgresql.databases) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig() self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ postgresql1 = PostgresqlConfig("user", "gzip", True, None) postgresql2 = PostgresqlConfig("user", "gzip", True, None) self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. """ postgresql1 = PostgresqlConfig("user", "bzip2", True, []) postgresql2 = PostgresqlConfig("user", "bzip2", True, []) self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. 
""" postgresql1 = PostgresqlConfig("user", "none", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "none", True, [ "whatever", ]) self.failUnlessEqual(postgresql1, postgresql2) self.failUnless(postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(postgresql1 >= postgresql2) self.failUnless(not postgresql1 != postgresql2) def testComparison_005(self): """ Test comparison of two differing objects, user differs (one None). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(user="user") self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_006(self): """ Test comparison of two differing objects, user differs. """ postgresql1 = PostgresqlConfig("user1", "gzip", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user2", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(compressMode="gzip") self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ postgresql1 = PostgresqlConfig("user", "bzip2", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_009(self): """ Test comparison of two differing objects, all differs (one None). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(all=True) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_010(self): """ Test comparison of two differing objects, all differs. 
""" postgresql1 = PostgresqlConfig("user", "gzip", False, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_011(self): """ Test comparison of two differing objects, databases differs (one None, one empty). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(databases=[]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_012(self): """ Test comparison of two differing objects, databases differs (one None, one not empty). """ postgresql1 = PostgresqlConfig() postgresql2 = PostgresqlConfig(databases=["whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_013(self): """ Test comparison of two differing objects, databases differs (one empty, one not empty). 
""" postgresql1 = PostgresqlConfig("user", "gzip", True, [ ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(postgresql1 < postgresql2) self.failUnless(postgresql1 <= postgresql2) self.failUnless(not postgresql1 > postgresql2) self.failUnless(not postgresql1 >= postgresql2) self.failUnless(postgresql1 != postgresql2) def testComparison_014(self): """ Test comparison of two differing objects, databases differs (both not empty). """ postgresql1 = PostgresqlConfig("user", "gzip", True, [ "whatever", ]) postgresql2 = PostgresqlConfig("user", "gzip", True, [ "whatever", "bogus", ]) self.failIfEqual(postgresql1, postgresql2) self.failUnless(not postgresql1 == postgresql2) self.failUnless(not postgresql1 < postgresql2) # note: different than standard due to unsorted list self.failUnless(not postgresql1 <= postgresql2) # note: different than standard due to unsorted list self.failUnless(postgresql1 > postgresql2) # note: different than standard due to unsorted list self.failUnless(postgresql1 >= postgresql2) # note: different than standard due to unsorted list self.failUnless(postgresql1 != postgresql2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. 
We dump a document containing just the postgresql configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.postgresql) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.postgresql) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["postgresql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of postgresql attribute, None value. """ config = LocalConfig() config.postgresql = None self.failUnlessEqual(None, config.postgresql) def testConstructor_005(self): """ Test assignment of postgresql attribute, valid value. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig() self.failUnlessEqual(PostgresqlConfig(), config.postgresql) def testConstructor_006(self): """ Test assignment of postgresql attribute, invalid value (not PostgresqlConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "postgresql", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.postgresql = PostgresqlConfig() config2 = LocalConfig() config2.postgresql = PostgresqlConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, postgresql differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.postgresql = PostgresqlConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, postgresql differs. 
""" config1 = LocalConfig() config1.postgresql = PostgresqlConfig(user="one") config2 = LocalConfig() config2.postgresql = PostgresqlConfig(user="two") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None postgresql section. """ config = LocalConfig() config.postgresql = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty postgresql section. """ config = LocalConfig() config.postgresql = PostgresqlConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty postgresql section, all=True, databases=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", True, None) config.validate() def testValidate_004(self): """ Test validate on a non-empty postgresql section, all=True, empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", True, []) config.validate() def testValidate_005(self): """ Test validate on a non-empty postgresql section, all=True, non-empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, ["whatever", ]) self.failUnlessRaises(ValueError, config.validate) def testValidate_006(self): """ Test validate on a non-empty postgresql section, all=False, databases=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty postgresql section, all=False, empty databases. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", False, []) self.failUnlessRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty postgresql section, all=False, non-empty databases. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, ["whatever", ]) config.validate() def testValidate_009(self): """ Test validate on a non-empty postgresql section, with user=None. """ config = LocalConfig() config.postgresql = PostgresqlConfig(None, "gzip", True, None) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["postgresql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.postgresql) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.postgresql) def testParse_003(self): """ Parse config document containing only a postgresql section, no databases, all=True. 
""" path = self.resources["postgresql.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("none", config.postgresql.compressMode) self.failUnlessEqual(True, config.postgresql.all) self.failUnlessEqual(None, config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("none", config.postgresql.compressMode) self.failUnlessEqual(True, config.postgresql.all) self.failUnlessEqual(None, config.postgresql.databases) def testParse_004(self): """ Parse config document containing only a postgresql section, single database, all=False. """ path = self.resources["postgresql.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("gzip", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("gzip", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database", ], config.postgresql.databases) def testParse_005(self): """ Parse config document containing only a postgresql section, multiple databases, all=False. 
""" path = self.resources["postgresql.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual("user", config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) def testParse_006(self): """ Parse config document containing only a postgresql section, no user, multiple databases, all=False. """ path = self.resources["postgresql.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual(None, config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.postgresql) self.failUnlessEqual(None, config.postgresql.user) self.failUnlessEqual("bzip2", config.postgresql.compressMode) self.failUnlessEqual(False, config.postgresql.all) self.failUnlessEqual(["database1", "database2", ], config.postgresql.databases) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document """ config = LocalConfig() self.validateAddConfig(config) def testAddConfig_003(self): """ Test with no databases, all other values filled in, all=True. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", True, None) self.validateAddConfig(config) def testAddConfig_004(self): """ Test with no databases, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", False, None) self.validateAddConfig(config) def testAddConfig_005(self): """ Test with single database, all other values filled in, all=True. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, [ "database", ]) self.validateAddConfig(config) def testAddConfig_006(self): """ Test with single database, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "none", False, [ "database", ]) self.validateAddConfig(config) def testAddConfig_007(self): """ Test with multiple databases, all other values filled in, all=True. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "bzip2", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_008(self): """ Test with multiple databases, all other values filled in, all=False. """ config = LocalConfig() config.postgresql = PostgresqlConfig("user", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_009(self): """ Test with multiple databases, user=None but all other values filled in, all=False. 
""" config = LocalConfig() config.postgresql = PostgresqlConfig(None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestPostgresqlConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/splittests.py0000664000175000017500000013216012560016766022175 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests split extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/split.py. Code Coverage ============= This module contains individual tests for the public classes implemented in extend/split.py. There are also tests for some of the private functions. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here!
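The round-trip technique described above can be sketched in isolation. This is a minimal illustration, not the actual Cedar Backup code: the `Config` class here is a hypothetical stand-in for classes like `SplitConfig`, using only the standard-library `xml.dom.minidom`. An object is serialized to XML, the XML is parsed back into a new object, and object equality is taken as evidence that the extraction worked.

```python
import xml.dom.minidom

class Config(object):
    """Toy configuration object with a single 'user' field (hypothetical)."""
    def __init__(self, user=None):
        self.user = user
    def __eq__(self, other):
        return isinstance(other, Config) and self.user == other.user
    def toXml(self):
        # Serialize the object into a small XML document.
        dom = xml.dom.minidom.Document()
        root = dom.createElement("config")
        root.setAttribute("user", self.user)
        dom.appendChild(root)
        return dom.toxml()
    @staticmethod
    def fromXml(data):
        # Parse the XML back into a brand-new object.
        dom = xml.dom.minidom.parseString(data)
        return Config(dom.documentElement.getAttribute("user"))

original = Config("backup")
roundtripped = Config.fromXml(original.toXml())
assert original == roundtripped  # round trip succeeded and objects match
```

As the docstring notes, this is not an independent check of the XML itself, but it does prove the property the tests actually care about: data moves from object to document and back without loss.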
After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. If you want to run all of the tests, set SPLITTESTS_FULL to "Y" in the environment. In this module, the primary dependency is that the split utility must be available. There is also one test that wants at least one non-English locale (fr_FR, pl_PL or ru_RU) available to check localization issues (but that test will just automatically be skipped if such a locale is not available). @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest import os import tempfile # Cedar Backup modules from CedarBackup2.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar from CedarBackup2.testutil import failUnlessAssignRaises, availableLocales from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.split import LocalConfig, SplitConfig, ByteQuantity from CedarBackup2.extend.split import _splitFile, _splitDailyDir ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "split.conf.1", "split.conf.2", "split.conf.3", "split.conf.4", "split.conf.5",
"tree21.tar.gz", ] INVALID_PATH = "bogus" # This path name should never exist ####################################################################### # Utility functions ####################################################################### def runAllTests(): """Returns true/false depending on whether the full test suite should be run.""" if "SPLITTESTS_FULL" in os.environ: return os.environ["SPLITTESTS_FULL"] == "Y" else: return False ####################################################################### # Test Case Classes ####################################################################### ########################## # TestSplitConfig class ########################## class TestSplitConfig(unittest.TestCase): """Tests for the SplitConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = SplitConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) self.failUnlessEqual(None, split.splitSize) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ split = SplitConfig(ByteQuantity("1.0", UNIT_BYTES), ByteQuantity("2.0", UNIT_KBYTES)) self.failUnlessEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) self.failUnlessEqual(ByteQuantity("2.0", UNIT_KBYTES), split.splitSize) def testConstructor_003(self): """ Test assignment of sizeLimit attribute, None value. 
""" split = SplitConfig(sizeLimit=ByteQuantity("1.0", UNIT_BYTES)) self.failUnlessEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) split.sizeLimit = None self.failUnlessEqual(None, split.sizeLimit) def testConstructor_004(self): """ Test assignment of sizeLimit attribute, valid value. """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) split.sizeLimit = ByteQuantity("1.0", UNIT_BYTES) self.failUnlessEqual(ByteQuantity("1.0", UNIT_BYTES), split.sizeLimit) def testConstructor_005(self): """ Test assignment of sizeLimit attribute, invalid value (empty). """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) self.failUnlessAssignRaises(ValueError, split, "sizeLimit", "") self.failUnlessEqual(None, split.sizeLimit) def testConstructor_006(self): """ Test assignment of sizeLimit attribute, invalid value (not a ByteQuantity). """ split = SplitConfig() self.failUnlessEqual(None, split.sizeLimit) self.failUnlessAssignRaises(ValueError, split, "sizeLimit", "1.0 GB") self.failUnlessEqual(None, split.sizeLimit) def testConstructor_007(self): """ Test assignment of splitSize attribute, None value. """ split = SplitConfig(splitSize=ByteQuantity("1.00", UNIT_KBYTES)) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), split.splitSize) split.splitSize = None self.failUnlessEqual(None, split.splitSize) def testConstructor_008(self): """ Test assignment of splitSize attribute, valid value. """ split = SplitConfig() self.failUnlessEqual(None, split.splitSize) split.splitSize = ByteQuantity("1.00", UNIT_KBYTES) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), split.splitSize) def testConstructor_009(self): """ Test assignment of splitSize attribute, invalid value (empty). 
""" split = SplitConfig() self.failUnlessEqual(None, split.splitSize) self.failUnlessAssignRaises(ValueError, split, "splitSize", "") self.failUnlessEqual(None, split.splitSize) def testConstructor_010(self): """ Test assignment of splitSize attribute, invalid value (not a ByteQuantity). """ split = SplitConfig() self.failUnlessEqual(None, split.splitSize) self.failUnlessAssignRaises(ValueError, split, "splitSize", 12) self.failUnlessEqual(None, split.splitSize) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ split1 = SplitConfig() split2 = SplitConfig() self.failUnlessEqual(split1, split2) self.failUnless(split1 == split2) self.failUnless(not split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(split1 >= split2) self.failUnless(not split1 != split2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ split1 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessEqual(split1, split2) self.failUnless(split1 == split2) self.failUnless(not split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(split1 >= split2) self.failUnless(not split1 != split2) def testComparison_003(self): """ Test comparison of two differing objects, sizeLimit differs (one None). 
""" split1 = SplitConfig() split2 = SplitConfig(sizeLimit=ByteQuantity("99", UNIT_KBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) def testComparison_004(self): """ Test comparison of two differing objects, sizeLimit differs. """ split1 = SplitConfig(ByteQuantity("99", UNIT_BYTES), ByteQuantity("1.00", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) def testComparison_005(self): """ Test comparison of two differing objects, splitSize differs (one None). """ split1 = SplitConfig() split2 = SplitConfig(splitSize=ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) def testComparison_006(self): """ Test comparison of two differing objects, splitSize differs. 
""" split1 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("0.5", UNIT_MBYTES)) split2 = SplitConfig(ByteQuantity("99", UNIT_KBYTES), ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(split1, split2) self.failUnless(not split1 == split2) self.failUnless(split1 < split2) self.failUnless(split1 <= split2) self.failUnless(not split1 > split2) self.failUnless(not split1 >= split2) self.failUnless(split1 != split2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the split configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.split) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.split) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["split.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of split attribute, None value. """ config = LocalConfig() config.split = None self.failUnlessEqual(None, config.split) def testConstructor_005(self): """ Test assignment of split attribute, valid value. """ config = LocalConfig() config.split = SplitConfig() self.failUnlessEqual(SplitConfig(), config.split) def testConstructor_006(self): """ Test assignment of split attribute, invalid value (not SplitConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "split", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.split = SplitConfig() config2 = LocalConfig() config2.split = SplitConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, split differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.split = SplitConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, split differs. """ config1 = LocalConfig() config1.split = SplitConfig(sizeLimit=ByteQuantity("0.1", UNIT_MBYTES)) config2 = LocalConfig() config2.split = SplitConfig(sizeLimit=ByteQuantity("1.00", UNIT_MBYTES)) self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None split section. """ config = LocalConfig() config.split = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty split section. """ config = LocalConfig() config.split = SplitConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty split section with no values filled in. 
""" config = LocalConfig() config.split = SplitConfig(None, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty split section with only one value filled in. """ config = LocalConfig() config.split = SplitConfig(ByteQuantity("1.00", UNIT_MBYTES), None) self.failUnlessRaises(ValueError, config.validate) config.split = SplitConfig(None, ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty split section with valid values filled in. """ config = LocalConfig() config.split = SplitConfig(ByteQuantity("1.00", UNIT_MBYTES), ByteQuantity("1.00", UNIT_MBYTES)) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["split.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.split) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.split) def testParse_002(self): """ Parse config document with filled-in values, size in bytes. 
""" path = self.resources["split.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("12345", UNIT_BYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("67890.0", UNIT_BYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("12345", UNIT_BYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("67890.0", UNIT_BYTES), config.split.splitSize) def testParse_003(self): """ Parse config document with filled-in values, size in KB. """ path = self.resources["split.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_KBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_KBYTES), config.split.splitSize) def testParse_004(self): """ Parse config document with filled-in values, size in MB. """ path = self.resources["split.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_MBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_MBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_MBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_MBYTES), config.split.splitSize) def testParse_005(self): """ Parse config document with filled-in values, size in GB. 
""" path = self.resources["split.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_GBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_GBYTES), config.split.splitSize) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.split) self.failUnlessEqual(ByteQuantity("1.25", UNIT_GBYTES), config.split.sizeLimit) self.failUnlessEqual(ByteQuantity("0.6", UNIT_GBYTES), config.split.splitSize) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ split = SplitConfig() config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set, byte values. """ split = SplitConfig(ByteQuantity("57521.0", UNIT_BYTES), ByteQuantity("121231", UNIT_BYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_003(self): """ Test with values set, KB values. """ split = SplitConfig(ByteQuantity("12", UNIT_KBYTES), ByteQuantity("63352", UNIT_KBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_004(self): """ Test with values set, MB values. """ split = SplitConfig(ByteQuantity("12", UNIT_MBYTES), ByteQuantity("63352", UNIT_MBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) def testAddConfig_005(self): """ Test with values set, GB values. 
""" split = SplitConfig(ByteQuantity("12", UNIT_GBYTES), ByteQuantity("63352", UNIT_GBYTES)) config = LocalConfig() config.split = split self.validateAddConfig(config) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the functions in split.py.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def checkSplit(self, sourcePath, origSize, splitSize): """Checks that a file was split properly.""" wholeFiles = int(float(origSize) / float(splitSize)) leftoverBytes = int(float(origSize) % float(splitSize)) for i in range(0, wholeFiles): splitPath = "%s_%05d" % (sourcePath, i) self.failUnless(os.path.exists(splitPath)) self.failUnlessEqual(splitSize, os.stat(splitPath).st_size) if leftoverBytes > 0: splitPath = "%s_%05d" % (sourcePath, wholeFiles) self.failUnless(os.path.exists(splitPath)) self.failUnlessEqual(leftoverBytes, os.stat(splitPath).st_size) def findBadLocale(self): """ The split command localizes its output for certain locales. This breaks the parsing code in split.py. This method returns a list of the locales (if any) that are currently configured which could be expected to cause a failure if the localization-fixing code doesn't work. 
""" locales = availableLocales() if 'fr_FR' in locales: return 'fr_FR' if 'pl_PL' in locales: return 'pl_PL' if 'ru_RU' in locales: return 'ru_RU' return None #################### # Test _splitFile() #################### def testSplitFile_001(self): """ Test with a nonexistent file. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", INVALID_PATH ]) self.failIf(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) self.failUnlessRaises(ValueError, _splitFile, sourcePath, splitSize, None, None, removeSource=False) def testSplitFile_002(self): """ Test with integer split size, removeSource=False. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=False) self.failUnless(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_003(self): """ Test with floating point split size, removeSource=False. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320.1", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=False) self.failUnless(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_004(self): """ Test with integer split size, removeSource=True. """ self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=True) self.failIf(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) def testSplitFile_005(self): """ Test with a local other than "C" or "en_US" set. 
""" locale = self.findBadLocale() if locale is not None: os.environ["LANG"] = locale os.environ["LC_ADDRESS"] = locale os.environ["LC_ALL"] = locale os.environ["LC_COLLATE"] = locale os.environ["LC_CTYPE"] = locale os.environ["LC_IDENTIFICATION"] = locale os.environ["LC_MEASUREMENT"] = locale os.environ["LC_MESSAGES"] = locale os.environ["LC_MONETARY"] = locale os.environ["LC_NAME"] = locale os.environ["LC_NUMERIC"] = locale os.environ["LC_PAPER"] = locale os.environ["LC_TELEPHONE"] = locale os.environ["LC_TIME"] = locale self.extractTar("tree21") sourcePath = self.buildPath(["tree21", "2007", "01", "01", "system1", "file001.a.b", ]) self.failUnless(os.path.exists(sourcePath)) splitSize = ByteQuantity("320", UNIT_BYTES) _splitFile(sourcePath, splitSize, None, None, removeSource=True) self.failIf(os.path.exists(sourcePath)) self.checkSplit(sourcePath, 3200, 320) ########################## # Test _splitDailyDir() ########################## def testSplitDailyDir_001(self): """ Test with a nonexistent daily staging directory. """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", INVALID_PATH, ]) self.failIf(os.path.exists(dailyDir)) sizeLimit = ByteQuantity("1.0", UNIT_MBYTES) splitSize = ByteQuantity("100000", UNIT_BYTES) self.failUnlessRaises(ValueError, _splitDailyDir, dailyDir, sizeLimit, splitSize, None, None) def testSplitDailyDir_002(self): """ Test with 1.0 MB limit. 
""" self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("1.0", UNIT_MBYTES) splitSize = ByteQuantity("100000", UNIT_BYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) def testSplitDailyDir_003(self): """ Test with 100,000 byte limit, chopped down to 10 KB """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) 
self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("100000", UNIT_BYTES) splitSize = ByteQuantity("10", UNIT_KBYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 10*1024) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 10*1024) def testSplitDailyDir_004(self): """ Test with 99,999 byte limit, chopped down to 5,000 bytes """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) 
self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("99999", UNIT_BYTES) splitSize = ByteQuantity("5000", UNIT_BYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 5000) self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 5000) self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 5000) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 5000) def testSplitDailyDir_005(self): """ Test with 99,998 byte limit, chopped down to 2500 bytes """ self.extractTar("tree21") dailyDir = 
self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("10000.0", UNIT_BYTES) splitSize = ByteQuantity("2500", UNIT_BYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 2500) self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 2500) self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 2500) self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 2500) self.checkSplit(os.path.join(dailyDir, 
"system3", "file002"), 100000, 2500) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 2500) def testSplitDailyDir_006(self): """ Test with 10,000 byte limit, chopped down to 1024 bytes """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("10000", UNIT_BYTES) splitSize = ByteQuantity("1.0", UNIT_KBYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 1*1024) 
self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 1*1024) self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 1*1024) self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 1*1024) self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 1*1024) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 1*1024) def testSplitDailyDir_007(self): """ Test with 9,999 byte limit, chopped down to 1000 bytes """ self.extractTar("tree21") dailyDir = self.buildPath(["tree21", "2007", "01", "01", ]) self.failUnless(os.path.exists(dailyDir) and os.path.isdir(dailyDir)) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) sizeLimit = ByteQuantity("9999", UNIT_BYTES) splitSize = ByteQuantity("1000", UNIT_BYTES) _splitDailyDir(dailyDir, sizeLimit, splitSize, None, None) self.failUnless(os.path.exists(os.path.join(dailyDir, "system1", "file001.a.b"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system1", "file003"))) self.failUnless(os.path.exists(os.path.join(dailyDir, "system2", "file001"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system2", "file003"))) 
self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file001"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file002"))) self.failIf(os.path.exists(os.path.join(dailyDir, "system3", "file003"))) self.checkSplit(os.path.join(dailyDir, "system1", "file002"), 32000, 1000) self.checkSplit(os.path.join(dailyDir, "system1", "file003"), 320000, 1000) self.checkSplit(os.path.join(dailyDir, "system2", "file002"), 10000, 1000) self.checkSplit(os.path.join(dailyDir, "system2", "file003"), 100000, 1000) self.checkSplit(os.path.join(dailyDir, "system3", "file001"), 99999, 1000) self.checkSplit(os.path.join(dailyDir, "system3", "file002"), 100000, 1000) self.checkSplit(os.path.join(dailyDir, "system3", "file003"), 100001, 1000) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" if runAllTests(): return unittest.TestSuite(( unittest.makeSuite(TestSplitConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), unittest.makeSuite(TestFunctions, 'test'), )) else: return unittest.TestSuite(( unittest.makeSuite(TestSplitConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/configtests.py0000664000175000017500000176671612642026234022324 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." 
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2008,2010,2011 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests configuration functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/config.py.

Code Coverage
=============

   This module contains individual tests for the public functions and classes
   implemented in config.py.  I usually prefer to test only the public
   interface to a class, because that way the regression tests don't depend
   on the internal implementation.  In this case, I've decided to test some
   of the private methods, because their "privateness" is more a matter of
   presenting a clean external interface than anything else.  In particular,
   this is the case with the private validation functions (I use the private
   functions so I can test just the validations for one specific case, even
   if the public interface only exposes one broad validation).
Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we do
   is extract the XML and then feed it back into another object's
   constructor.  If that parse process succeeds and the old object is equal
   to the new object, we assume that the extract was successful.

   It would arguably be better if we could do a completely independent check
   - but implementing that check would be equivalent to re-implementing all
   of the existing functionality that we're validating here!  After all, the
   most important thing is that data can move seamlessly from object to XML
   document and back to object.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.  There is no need to use a CONFIGTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as for
   some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest

from CedarBackup2.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.testutil import hexFloatLiteralAllowed
from CedarBackup2.config import ActionHook, PreActionHook, PostActionHook, CommandOverride
from CedarBackup2.config import ExtendedAction, ActionDependencies, BlankBehavior
from CedarBackup2.config import CollectFile, CollectDir, PurgeDir, LocalPeer, RemotePeer
from CedarBackup2.config import ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig
from CedarBackup2.config import CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config
from CedarBackup2.config import ByteQuantity

#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "cback.conf.1", "cback.conf.2", "cback.conf.3", "cback.conf.4",
              "cback.conf.5", "cback.conf.6", "cback.conf.7", "cback.conf.8",
              "cback.conf.9", "cback.conf.10", "cback.conf.11", "cback.conf.12",
              "cback.conf.13", "cback.conf.14", "cback.conf.15", "cback.conf.16",
              "cback.conf.17", "cback.conf.18", "cback.conf.19", "cback.conf.20",
              "cback.conf.21", "cback.conf.22", "cback.conf.23", ]

#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestByteQuantity class
##########################

class TestByteQuantity(unittest.TestCase):

   """Tests for the ByteQuantity class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
"""Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ByteQuantity() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(0.0, quantity.bytes) def testConstructor_002a(self): """ Test constructor with all values filled in, with valid string quantity. """ quantity = ByteQuantity("6", UNIT_BYTES) self.failUnlessEqual("6", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(6.0, quantity.bytes) quantity = ByteQuantity("2684354560", UNIT_BYTES) self.failUnlessEqual("2684354560", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(2684354560.0, quantity.bytes) quantity = ByteQuantity("629145600", UNIT_BYTES) self.failUnlessEqual("629145600", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(629145600.0, quantity.bytes) quantity = ByteQuantity("2.5", UNIT_GBYTES) self.failUnlessEqual("2.5", quantity.quantity) self.failUnlessEqual(UNIT_GBYTES, quantity.units) self.failUnlessEqual(2684354560.0, quantity.bytes) quantity = ByteQuantity("600", UNIT_MBYTES) self.failUnlessEqual("600", quantity.quantity) self.failUnlessEqual(UNIT_MBYTES, quantity.units) self.failUnlessEqual(629145600.0, quantity.bytes) def testConstructor_002b(self): """ Test constructor with all values filled in, with valid integer quantity. 
""" quantity = ByteQuantity(6, UNIT_BYTES) self.failUnlessEqual("6", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(6.0, quantity.bytes) quantity = ByteQuantity(2684354560, UNIT_BYTES) self.failUnlessEqual("2684354560", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(2684354560.0, quantity.bytes) quantity = ByteQuantity(629145600, UNIT_BYTES) self.failUnlessEqual("629145600", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(629145600.0, quantity.bytes) quantity = ByteQuantity(600, UNIT_MBYTES) self.failUnlessEqual("600", quantity.quantity) self.failUnlessEqual(UNIT_MBYTES, quantity.units) self.failUnlessEqual(629145600.0, quantity.bytes) def testConstructor_002c(self): """ Test constructor with all values filled in, with valid float quantity. """ quantity = ByteQuantity(6.0, UNIT_BYTES) self.failUnlessEqual("6.0", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(6.0, quantity.bytes) quantity = ByteQuantity(2684354560.0, UNIT_BYTES) self.failUnlessEqual("2684354560.0", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(2684354560.0, quantity.bytes) quantity = ByteQuantity(629145600.0, UNIT_BYTES) self.failUnlessEqual("629145600.0", quantity.quantity) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessEqual(629145600.0, quantity.bytes) quantity = ByteQuantity(2.5, UNIT_GBYTES) self.failUnlessEqual("2.5", quantity.quantity) self.failUnlessEqual(UNIT_GBYTES, quantity.units) self.failUnlessEqual(2684354560.0, quantity.bytes) quantity = ByteQuantity(600.0, UNIT_MBYTES) self.failUnlessEqual("600.0", quantity.quantity) self.failUnlessEqual(UNIT_MBYTES, quantity.units) self.failUnlessEqual(629145600.0, quantity.bytes) def testConstructor_003(self): """ Test assignment of quantity attribute, None value. 
""" quantity = ByteQuantity(quantity="1.0") self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.bytes) quantity.quantity = None self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.bytes) def testConstructor_004a(self): """ Test assignment of quantity attribute, valid string values. """ quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.bytes) quantity.quantity = "1.0" self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.bytes) quantity.quantity = ".1" self.failUnlessEqual(".1", quantity.quantity) self.failUnlessEqual(0.1, quantity.bytes) quantity.quantity = "12" self.failUnlessEqual("12", quantity.quantity) self.failUnlessEqual(12.0, quantity.bytes) quantity.quantity = "0.5" self.failUnlessEqual("0.5", quantity.quantity) self.failUnlessEqual(0.5, quantity.bytes) quantity.quantity = "181281" self.failUnlessEqual("181281", quantity.quantity) self.failUnlessEqual(181281.0, quantity.bytes) quantity.quantity = "1E6" self.failUnlessEqual("1E6", quantity.quantity) self.failUnlessEqual(1.0e6, quantity.bytes) quantity.quantity = "0.25E2" self.failUnlessEqual("0.25E2", quantity.quantity) self.failUnlessEqual(0.25e2, quantity.bytes) if hexFloatLiteralAllowed(): # Some interpreters allow this, some don't quantity.quantity = "0xAC" self.failUnlessEqual("0xAC", quantity.quantity) self.failUnlessEqual(172.0, quantity.bytes) def testConstructor_004b(self): """ Test assignment of quantity attribute, valid integer values. 
""" quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute quantity.quantity = 1 self.failUnlessEqual("1", quantity.quantity) self.failUnlessEqual(1.0, quantity.bytes) quantity.quantity = 12 self.failUnlessEqual("12", quantity.quantity) self.failUnlessEqual(12.0, quantity.bytes) quantity.quantity = 181281 self.failUnlessEqual("181281", quantity.quantity) self.failUnlessEqual(181281.0, quantity.bytes) #pylint: disable=R0204 def testConstructor_004c(self): """ Test assignment of quantity attribute, valid float values. """ quantity = ByteQuantity() quantity.units = UNIT_BYTES # so we can test the bytes attribute quantity.quantity = 1.0 self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.bytes) quantity.quantity = 0.1 self.failUnlessEqual("0.1", quantity.quantity) self.failUnlessEqual(0.1, quantity.bytes) quantity.quantity = "12.0" self.failUnlessEqual("12.0", quantity.quantity) self.failUnlessEqual(12.0, quantity.bytes) quantity.quantity = 0.5 self.failUnlessEqual("0.5", quantity.quantity) self.failUnlessEqual(0.5, quantity.bytes) quantity.quantity = "181281.0" self.failUnlessEqual("181281.0", quantity.quantity) self.failUnlessEqual(181281.0, quantity.bytes) quantity.quantity = 1E6 self.failUnlessEqual("1000000.0", quantity.quantity) self.failUnlessEqual(1.0e6, quantity.bytes) quantity.quantity = 0.25E2 self.failUnlessEqual("25.0", quantity.quantity) self.failUnlessEqual(0.25e2, quantity.bytes) def testConstructor_005(self): """ Test assignment of quantity attribute, invalid value (empty). """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "") self.failUnlessEqual(None, quantity.quantity) def testConstructor_006(self): """ Test assignment of quantity attribute, invalid value (not interpretable as a float). 
""" quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "blech") self.failUnlessEqual(None, quantity.quantity) def testConstructor_007(self): """ Test assignment of quantity attribute, invalid value (negative number). """ quantity = ByteQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-6.8") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-0.2") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-.1") self.failUnlessEqual(None, quantity.quantity) def testConstructor_008(self): """ Test assignment of units attribute, None value. """ quantity = ByteQuantity(units=UNIT_MBYTES) self.failUnlessEqual(UNIT_MBYTES, quantity.units) quantity.units = None self.failUnlessEqual(UNIT_BYTES, quantity.units) def testConstructor_009(self): """ Test assignment of units attribute, valid values. """ quantity = ByteQuantity() self.failUnlessEqual(UNIT_BYTES, quantity.units) quantity.units = UNIT_KBYTES self.failUnlessEqual(UNIT_KBYTES, quantity.units) quantity.units = UNIT_MBYTES self.failUnlessEqual(UNIT_MBYTES, quantity.units) quantity.units = UNIT_GBYTES self.failUnlessEqual(UNIT_GBYTES, quantity.units) quantity.units = UNIT_BYTES self.failUnlessEqual(UNIT_BYTES, quantity.units) def testConstructor_010(self): """ Test assignment of units attribute, invalid value (empty). """ quantity = ByteQuantity() self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "") self.failUnlessEqual(UNIT_BYTES, quantity.units) def testConstructor_011(self): """ Test assignment of units attribute, invalid value (not a valid unit). 
""" quantity = ByteQuantity() self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", 16) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", -2) self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "bytes") self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "B") self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "KB") self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "MB") self.failUnlessEqual(UNIT_BYTES, quantity.units) self.failUnlessAssignRaises(ValueError, quantity, "units", "GB") self.failUnlessEqual(UNIT_BYTES, quantity.units) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ quantity1 = ByteQuantity() quantity2 = ByteQuantity() self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ quantity1 = ByteQuantity("12", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_BYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_003(self): """ Test comparison of two differing objects, quantity differs (one None). 
""" quantity1 = ByteQuantity() quantity2 = ByteQuantity(quantity="12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004a(self): """ Test comparison of two differing objects, quantity differs (same units). """ quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_BYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004b(self): """ Test comparison of two differing objects, quantity differs (different units). """ quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_KBYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004c(self): """ Test comparison of two differing objects, quantity differs (implied UNIT_BYTES). """ quantity1 = ByteQuantity("10") quantity2 = ByteQuantity("12", UNIT_BYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004d(self): """ Test comparison of two differing objects, quantity differs (implied UNIT_BYTES). 
""" quantity1 = ByteQuantity("10", UNIT_BYTES) quantity2 = ByteQuantity("12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004e(self): """ Test comparison of two differing objects, quantity differs (implied UNIT_BYTES). """ quantity1 = ByteQuantity("10") quantity2 = ByteQuantity("12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_005(self): """ Test comparison of two differing objects, units differs (one None). """ quantity1 = ByteQuantity() quantity2 = ByteQuantity(units=UNIT_MBYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_006(self): """ Test comparison of two differing objects, units differs. 
""" quantity1 = ByteQuantity("12", UNIT_BYTES) quantity2 = ByteQuantity("12", UNIT_KBYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_007a(self): """ Test comparison of byte quantity to integer bytes, equivalent """ quantity1 = 12 quantity2 = ByteQuantity(quantity="12", units=UNIT_BYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_007b(self): """ Test comparison of byte quantity to integer bytes, equivalent """ quantity1 = 629145600 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_007c(self): """ Test comparison of byte quantity to integer bytes, equivalent """ quantity1 = ByteQuantity(quantity="600", units=UNIT_MBYTES) quantity2 = 629145600 self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_008a(self): """ Test comparison of byte quantity to integer bytes, integer smaller """ quantity1 = 11 quantity2 = ByteQuantity(quantity="12", units=UNIT_BYTES) 
self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_008b(self): """ Test comparison of byte quantity to integer bytes, integer smaller """ quantity1 = 130390425 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_009a(self): """ Test comparison of byte quantity to integer bytes, integer larger """ quantity1 = 13 quantity2 = ByteQuantity(quantity="12", units=UNIT_BYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(not quantity1 <= quantity2) self.failUnless(quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_009b(self): """ Test comparison of byte quantity to integer bytes, integer larger """ quantity1 = ByteQuantity(quantity="600", units=UNIT_MBYTES) quantity2 = 629145610 self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_010a(self): """ Test comparison of byte quantity to float bytes, equivalent """ quantity1 = 12.0 quantity2 = ByteQuantity(quantity="12.0", units=UNIT_BYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < 
quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_010b(self): """ Test comparison of byte quantity to float bytes, equivalent """ quantity1 = 629145600.0 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_011a(self): """ Test comparison of byte quantity to float bytes, float smaller """ quantity1 = 11.0 quantity2 = ByteQuantity(quantity="12.0", units=UNIT_BYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_011b(self): """ Test comparison of byte quantity to float bytes, float smaller """ quantity1 = 130390425.0 quantity2 = ByteQuantity(quantity="600", units=UNIT_MBYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_012a(self): """ Test comparison of byte quantity to float bytes, float larger """ quantity1 = 13.0 quantity2 = ByteQuantity(quantity="12.0", units=UNIT_BYTES) self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(not quantity1 <= quantity2) self.failUnless(quantity1 > quantity2) 
self.failUnless(quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_012b(self): """ Test comparison of byte quantity to float bytes, float larger """ quantity1 = ByteQuantity(quantity="600", units=UNIT_MBYTES) quantity2 = 629145610.0 self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) ############################### # TestActionDependencies class ############################### class TestActionDependencies(unittest.TestCase): """Tests for the ActionDependencies class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ActionDependencies() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessEqual(None, dependencies.afterList) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ dependencies = ActionDependencies(["b", ], ["a", ]) self.failUnlessEqual(["b", ], dependencies.beforeList) self.failUnlessEqual(["a", ], dependencies.afterList) def testConstructor_003(self): """ Test assignment of beforeList attribute, None value. 
""" dependencies = ActionDependencies(beforeList=[]) self.failUnlessEqual([], dependencies.beforeList) dependencies.beforeList = None self.failUnlessEqual(None, dependencies.beforeList) def testConstructor_004(self): """ Test assignment of beforeList attribute, empty list. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) dependencies.beforeList = [] self.failUnlessEqual([], dependencies.beforeList) def testConstructor_005(self): """ Test assignment of beforeList attribute, non-empty list, valid values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) dependencies.beforeList = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], dependencies.beforeList) def testConstructor_006(self): """ Test assignment of beforeList attribute, non-empty list, invalid value. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["KEN", ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["hello, world" ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["dash-word", ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["", ]) self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", [None, ]) self.failUnlessEqual(None, dependencies.beforeList) def testConstructor_007(self): """ Test assignment of beforeList attribute, non-empty list, mixed values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.beforeList) self.failUnlessAssignRaises(ValueError, dependencies, "beforeList", ["ken", "dash-word", ]) def testConstructor_008(self): """ Test assignment of afterList attribute, None value. 
""" dependencies = ActionDependencies(afterList=[]) self.failUnlessEqual([], dependencies.afterList) dependencies.afterList = None self.failUnlessEqual(None, dependencies.afterList) def testConstructor_009(self): """ Test assignment of afterList attribute, non-empty list, valid values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.afterList) dependencies.afterList = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], dependencies.afterList) def testConstructor_010(self): """ Test assignment of afterList attribute, non-empty list, invalid values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.afterList) def testConstructor_011(self): """ Test assignment of afterList attribute, non-empty list, mixed values. """ dependencies = ActionDependencies() self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["KEN", ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["hello, world" ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["dash-word", ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", ["", ]) self.failUnlessEqual(None, dependencies.afterList) self.failUnlessAssignRaises(ValueError, dependencies, "afterList", [None, ]) self.failUnlessEqual(None, dependencies.afterList) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" dependencies1 = ActionDependencies() dependencies2 = ActionDependencies() self.failUnlessEqual(dependencies1, dependencies2) self.failUnless(dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(not dependencies1 != dependencies2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.failUnlessEqual(dependencies1, dependencies2) self.failUnless(dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(not dependencies1 != dependencies2) def testComparison_003(self): """ Test comparison of two differing objects, beforeList differs (one None). """ dependencies1 = ActionDependencies(beforeList=None, afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.failUnless(not dependencies1 == dependencies2) self.failUnless(dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(not dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_004(self): """ Test comparison of two differing objects, beforeList differs (one empty). 
""" dependencies1 = ActionDependencies(beforeList=[], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) self.failUnless(not dependencies1 == dependencies2) self.failUnless(dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(not dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_005(self): """ Test comparison of two differing objects, beforeList differs. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["b", ], afterList=["b", ]) self.failUnless(not dependencies1 == dependencies2) self.failUnless(dependencies1 < dependencies2) self.failUnless(dependencies1 <= dependencies2) self.failUnless(not dependencies1 > dependencies2) self.failUnless(not dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_006(self): """ Test comparison of two differing objects, afterList differs (one None). """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=None) self.failIfEqual(dependencies1, dependencies2) self.failUnless(not dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(not dependencies1 <= dependencies2) self.failUnless(dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_007(self): """ Test comparison of two differing objects, afterList differs (one empty). 
""" dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=[]) self.failIfEqual(dependencies1, dependencies2) self.failUnless(not dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(not dependencies1 <= dependencies2) self.failUnless(dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) def testComparison_008(self): """ Test comparison of two differing objects, afterList differs. """ dependencies1 = ActionDependencies(beforeList=["a", ], afterList=["b", ]) dependencies2 = ActionDependencies(beforeList=["a", ], afterList=["a", ]) self.failIfEqual(dependencies1, dependencies2) self.failUnless(not dependencies1 == dependencies2) self.failUnless(not dependencies1 < dependencies2) self.failUnless(not dependencies1 <= dependencies2) self.failUnless(dependencies1 > dependencies2) self.failUnless(dependencies1 >= dependencies2) self.failUnless(dependencies1 != dependencies2) ####################### # TestActionHook class ####################### class TestActionHook(unittest.TestCase): """Tests for the ActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" hook = ActionHook() self.failUnlessEqual(False, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual(None, hook.action) self.failUnlessEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = ActionHook(action="action", command="command") self.failUnlessEqual(False, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual("action", hook.action) self.failUnlessEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = ActionHook(action="action") self.failUnlessEqual("action", hook.action) hook.action = None self.failUnlessEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = ActionHook() self.failUnlessEqual(None, hook.action) hook.action = "action" self.failUnlessEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = ActionHook() self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.failUnlessEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. """ hook = ActionHook(command="command") self.failUnlessEqual("command", hook.command) hook.command = None self.failUnlessEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. 
""" hook = ActionHook() self.failUnlessEqual(None, hook.command) hook.command = "command" self.failUnlessEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = ActionHook() self.failUnlessEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.failUnlessEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = ActionHook() hook2 = ActionHook() self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = ActionHook(action="action", command="command") hook2 = ActionHook(action="action", command="command") self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). """ hook1 = ActionHook(action="action", command="command") hook2 = ActionHook(action=None, command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. 
""" hook1 = ActionHook(action="action2", command="command") hook2 = ActionHook(action="action1", command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). """ hook1 = ActionHook(action="action", command=None) hook2 = ActionHook(action="action", command="command") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. """ hook1 = ActionHook(action="action", command="command1") hook2 = ActionHook(action="action", command="command2") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) ########################## # TestPreActionHook class ########################## class TestPreActionHook(unittest.TestCase): """Tests for the PreActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = PreActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ hook = PreActionHook() self.failUnlessEqual(True, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual(None, hook.action) self.failUnlessEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = PreActionHook(action="action", command="command") self.failUnlessEqual(True, hook._before) self.failUnlessEqual(False, hook._after) self.failUnlessEqual("action", hook.action) self.failUnlessEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. """ hook = PreActionHook(action="action") self.failUnlessEqual("action", hook.action) hook.action = None self.failUnlessEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = PreActionHook() self.failUnlessEqual(None, hook.action) hook.action = "action" self.failUnlessEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = PreActionHook() self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.failUnlessEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. 
""" hook = PreActionHook(command="command") self.failUnlessEqual("command", hook.command) hook.command = None self.failUnlessEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = PreActionHook() self.failUnlessEqual(None, hook.command) hook.command = "command" self.failUnlessEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = PreActionHook() self.failUnlessEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.failUnlessEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ hook1 = PreActionHook() hook2 = PreActionHook() self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = PreActionHook(action="action", command="command") hook2 = PreActionHook(action="action", command="command") self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). 
""" hook1 = PreActionHook(action="action", command="command") hook2 = PreActionHook(action=None, command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. """ hook1 = PreActionHook(action="action2", command="command") hook2 = PreActionHook(action="action1", command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). """ hook1 = PreActionHook(action="action", command=None) hook2 = PreActionHook(action="action", command="command") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. 
""" hook1 = PreActionHook(action="action", command="command1") hook2 = PreActionHook(action="action", command="command2") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) ########################### # TestPostActionHook class ########################### class TestPostActionHook(unittest.TestCase): """Tests for the PostActionHook class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PostActionHook() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ hook = PostActionHook() self.failUnlessEqual(False, hook._before) self.failUnlessEqual(True, hook._after) self.failUnlessEqual(None, hook.action) self.failUnlessEqual(None, hook.command) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ hook = PostActionHook(action="action", command="command") self.failUnlessEqual(False, hook._before) self.failUnlessEqual(True, hook._after) self.failUnlessEqual("action", hook.action) self.failUnlessEqual("command", hook.command) def testConstructor_003(self): """ Test assignment of action attribute, None value. 
""" hook = PostActionHook(action="action") self.failUnlessEqual("action", hook.action) hook.action = None self.failUnlessEqual(None, hook.action) def testConstructor_004(self): """ Test assignment of action attribute, valid value. """ hook = PostActionHook() self.failUnlessEqual(None, hook.action) hook.action = "action" self.failUnlessEqual("action", hook.action) def testConstructor_005(self): """ Test assignment of action attribute, invalid value. """ hook = PostActionHook() self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "KEN") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "dash-word") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "hello, world") self.failUnlessEqual(None, hook.action) self.failUnlessAssignRaises(ValueError, hook, "action", "") self.failUnlessEqual(None, hook.action) def testConstructor_006(self): """ Test assignment of command attribute, None value. """ hook = PostActionHook(command="command") self.failUnlessEqual("command", hook.command) hook.command = None self.failUnlessEqual(None, hook.command) def testConstructor_007(self): """ Test assignment of command attribute, valid valid. """ hook = PostActionHook() self.failUnlessEqual(None, hook.command) hook.command = "command" self.failUnlessEqual("command", hook.command) def testConstructor_008(self): """ Test assignment of command attribute, invalid valid. """ hook = PostActionHook() self.failUnlessEqual(None, hook.command) self.failUnlessAssignRaises(ValueError, hook, "command", "") self.failUnlessEqual(None, hook.command) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" hook1 = PostActionHook() hook2 = PostActionHook() self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ hook1 = PostActionHook(action="action", command="command") hook2 = PostActionHook(action="action", command="command") self.failUnlessEqual(hook1, hook2) self.failUnless(hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(not hook1 != hook2) def testComparison_003(self): """ Test comparison of two different objects, action differs (one None). """ hook1 = PostActionHook(action="action", command="command") hook2 = PostActionHook(action=None, command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_004(self): """ Test comparison of two different objects, action differs. """ hook1 = PostActionHook(action="action2", command="command") hook2 = PostActionHook(action="action1", command="command") self.failUnless(not hook1 == hook2) self.failUnless(not hook1 < hook2) self.failUnless(not hook1 <= hook2) self.failUnless(hook1 > hook2) self.failUnless(hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_005(self): """ Test comparison of two different objects, command differs (one None). 
""" hook1 = PostActionHook(action="action", command=None) hook2 = PostActionHook(action="action", command="command") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) def testComparison_006(self): """ Test comparison of two different objects, command differs. """ hook1 = PostActionHook(action="action", command="command1") hook2 = PostActionHook(action="action", command="command2") self.failUnless(not hook1 == hook2) self.failUnless(hook1 < hook2) self.failUnless(hook1 <= hook2) self.failUnless(not hook1 > hook2) self.failUnless(not hook1 >= hook2) self.failUnless(hook1 != hook2) ########################## # TestBlankBehavior class ########################## class TestBlankBehavior(unittest.TestCase): """Tests for the BlankBehavior class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = BlankBehavior() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankMode) self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" behavior = BlankBehavior(blankMode="daily", blankFactor="1.0") self.failUnlessEqual("daily", behavior.blankMode) self.failUnlessEqual("1.0", behavior.blankFactor) def testConstructor_003(self): """ Test assignment of blankMode, None value. """ behavior = BlankBehavior(blankMode="daily") self.failUnlessEqual("daily", behavior.blankMode) behavior.blankMode = None self.failUnlessEqual(None, behavior.blankMode) def testConstructor_004(self): """ Test assignment of blankMode attribute, valid value. """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankMode) behavior.blankMode = "daily" self.failUnlessEqual("daily", behavior.blankMode) behavior.blankMode = "weekly" self.failUnlessEqual("weekly", behavior.blankMode) def testConstructor_005(self): """ Test assignment of blankFactor attribute, None value. """ behavior = BlankBehavior(blankFactor="1.3") self.failUnlessEqual("1.3", behavior.blankFactor) behavior.blankFactor = None self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_006(self): """ Test assignment of blankFactor attribute, valid values. """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) behavior.blankFactor = "1.0" self.failUnlessEqual("1.0", behavior.blankFactor) behavior.blankFactor = ".1" self.failUnlessEqual(".1", behavior.blankFactor) behavior.blankFactor = "12" self.failUnlessEqual("12", behavior.blankFactor) behavior.blankFactor = "0.5" self.failUnlessEqual("0.5", behavior.blankFactor) behavior.blankFactor = "181281" self.failUnlessEqual("181281", behavior.blankFactor) behavior.blankFactor = "1E6" self.failUnlessEqual("1E6", behavior.blankFactor) behavior.blankFactor = "0.25E2" self.failUnlessEqual("0.25E2", behavior.blankFactor) if hexFloatLiteralAllowed(): # Some interpreters allow this, some don't behavior.blankFactor = "0xAC" self.failUnlessEqual("0xAC", behavior.blankFactor) def testConstructor_007(self): """ Test assignment of blankFactor attribute, invalid value (empty). 
""" behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "") self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_008(self): """ Test assignment of blankFactor attribute, invalid value (not a floating point number). """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "blech") self.failUnlessEqual(None, behavior.blankFactor) def testConstructor_009(self): """ Test assignment of blankFactor store attribute, invalid value (negative number). """ behavior = BlankBehavior() self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-3") self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-6.8") self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-0.2") self.failUnlessEqual(None, behavior.blankFactor) self.failUnlessAssignRaises(ValueError, behavior, "blankFactor", "-.1") self.failUnlessEqual(None, behavior.blankFactor) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ behavior1 = BlankBehavior() behavior2 = BlankBehavior() self.failUnlessEqual(behavior1, behavior2) self.failUnless(behavior1 == behavior2) self.failUnless(not behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(behavior1 >= behavior2) self.failUnless(not behavior1 != behavior2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" behavior1 = BlankBehavior(blankMode="weekly", blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnlessEqual(behavior1, behavior2) self.failUnless(behavior1 == behavior2) self.failUnless(not behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(behavior1 >= behavior2) self.failUnless(not behavior1 != behavior2) def testComparison_003(self): """ Test comparison of two different objects, blankMode differs (one None). """ behavior1 = BlankBehavior(None, blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) def testComparison_004(self): """ Test comparison of two different objects, blankMode differs. """ behavior1 = BlankBehavior(blankMode="daily", blankFactor="1.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) def testComparison_005(self): """ Test comparison of two different objects, blankFactor differs (one None). """ behavior1 = BlankBehavior(blankMode="weekly", blankFactor=None) behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) def testComparison_006(self): """ Test comparison of two different objects, blankFactor differs. 
""" behavior1 = BlankBehavior(blankMode="weekly", blankFactor="0.0") behavior2 = BlankBehavior(blankMode="weekly", blankFactor="1.0") self.failUnless(not behavior1 == behavior2) self.failUnless(behavior1 < behavior2) self.failUnless(behavior1 <= behavior2) self.failUnless(not behavior1 > behavior2) self.failUnless(not behavior1 >= behavior2) self.failUnless(behavior1 != behavior2) ########################### # TestExtendedAction class ########################### class TestExtendedAction(unittest.TestCase): """Tests for the ExtendedAction class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ExtendedAction() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ action = ExtendedAction() self.failUnlessEqual(None, action.name) self.failUnlessEqual(None, action.module) self.failUnlessEqual(None, action.function) self.failUnlessEqual(None, action.index) self.failUnlessEqual(None, action.dependencies) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ action = ExtendedAction("one", "two", "three", 4, ActionDependencies()) self.failUnlessEqual("one", action.name) self.failUnlessEqual("two", action.module) self.failUnlessEqual("three", action.function) self.failUnlessEqual(4, action.index) self.failUnlessEqual(ActionDependencies(), action.dependencies) def testConstructor_003(self): """ Test assignment of name attribute, None value. 
""" action = ExtendedAction(name="name") self.failUnlessEqual("name", action.name) action.name = None self.failUnlessEqual(None, action.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.name) action.name = "name" self.failUnlessEqual("name", action.name) action.name = "9" self.failUnlessEqual("9", action.name) action.name = "name99name" self.failUnlessEqual("name99name", action.name) action.name = "12action" self.failUnlessEqual("12action", action.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ action = ExtendedAction() self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "") self.failUnlessEqual(None, action.name) def testConstructor_006(self): """ Test assignment of name attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "Something") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "what_ever") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "_BOGUS") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "stuff-here") self.failUnlessEqual(None, action.name) self.failUnlessAssignRaises(ValueError, action, "name", "/more/stuff") self.failUnlessEqual(None, action.name) def testConstructor_007(self): """ Test assignment of module attribute, None value. """ action = ExtendedAction(module="module") self.failUnlessEqual("module", action.module) action.module = None self.failUnlessEqual(None, action.module) def testConstructor_008(self): """ Test assignment of module attribute, valid value. 
""" action = ExtendedAction() self.failUnlessEqual(None, action.module) action.module = "module" self.failUnlessEqual("module", action.module) action.module = "stuff" self.failUnlessEqual("stuff", action.module) action.module = "stuff.something" self.failUnlessEqual("stuff.something", action.module) action.module = "_identifier.__another.one_more__" self.failUnlessEqual("_identifier.__another.one_more__", action.module) def testConstructor_009(self): """ Test assignment of module attribute, invalid value (empty). """ action = ExtendedAction() self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "") self.failUnlessEqual(None, action.module) def testConstructor_010(self): """ Test assignment of module attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "9something") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "_bogus.") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "-bogus") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "/BOGUS") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", "really._really__.___really.long.bad.path.") self.failUnlessEqual(None, action.module) self.failUnlessAssignRaises(ValueError, action, "module", ".really._really__.___really.long.bad.path") self.failUnlessEqual(None, action.module) def testConstructor_011(self): """ Test assignment of function attribute, None value. """ action = ExtendedAction(function="function") self.failUnlessEqual("function", action.function) action.function = None self.failUnlessEqual(None, action.function) def testConstructor_012(self): """ Test assignment of function attribute, valid value. 
""" action = ExtendedAction() self.failUnlessEqual(None, action.function) action.function = "function" self.failUnlessEqual("function", action.function) action.function = "_stuff" self.failUnlessEqual("_stuff", action.function) action.function = "moreStuff9" self.failUnlessEqual("moreStuff9", action.function) action.function = "__identifier__" self.failUnlessEqual("__identifier__", action.function) def testConstructor_013(self): """ Test assignment of function attribute, invalid value (empty). """ action = ExtendedAction() self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "") self.failUnlessEqual(None, action.function) def testConstructor_014(self): """ Test assignment of function attribute, invalid value (does not match valid pattern). """ action = ExtendedAction() self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "9something") self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "one.two") self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "-bogus") self.failUnlessEqual(None, action.function) self.failUnlessAssignRaises(ValueError, action, "function", "/BOGUS") self.failUnlessEqual(None, action.function) def testConstructor_015(self): """ Test assignment of index attribute, None value. """ action = ExtendedAction(index=1) self.failUnlessEqual(1, action.index) action.index = None self.failUnlessEqual(None, action.index) def testConstructor_016(self): """ Test assignment of index attribute, valid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.index) action.index = 1 self.failUnlessEqual(1, action.index) def testConstructor_017(self): """ Test assignment of index attribute, invalid value. 
""" action = ExtendedAction() self.failUnlessEqual(None, action.index) self.failUnlessAssignRaises(ValueError, action, "index", "ken") self.failUnlessEqual(None, action.index) def testConstructor_018(self): """ Test assignment of dependencies attribute, None value. """ action = ExtendedAction(dependencies=ActionDependencies()) self.failUnlessEqual(ActionDependencies(), action.dependencies) action.dependencies = None self.failUnlessEqual(None, action.dependencies) def testConstructor_019(self): """ Test assignment of dependencies attribute, valid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.dependencies) action.dependencies = ActionDependencies() self.failUnlessEqual(ActionDependencies(), action.dependencies) def testConstructor_020(self): """ Test assignment of dependencies attribute, invalid value. """ action = ExtendedAction() self.failUnlessEqual(None, action.dependencies) self.failUnlessAssignRaises(ValueError, action, "dependencies", "ken") self.failUnlessEqual(None, action.dependencies) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ action1 = ExtendedAction() action2 = ExtendedAction() self.failUnlessEqual(action1, action2) self.failUnless(action1 == action2) self.failUnless(not action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(action1 >= action2) self.failUnless(not action1 != action2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" action1 = ExtendedAction("one", "two", "three", 4, ActionDependencies()) action2 = ExtendedAction("one", "two", "three", 4, ActionDependencies()) self.failUnless(action1 == action2) self.failUnless(not action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(action1 >= action2) self.failUnless(not action1 != action2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ action1 = ExtendedAction(name="name") action2 = ExtendedAction() self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. """ action1 = ExtendedAction("name2", "two", "three", 4) action2 = ExtendedAction("name1", "two", "three", 4) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_005(self): """ Test comparison of two differing objects, module differs (one None). """ action1 = ExtendedAction(module="whatever") action2 = ExtendedAction() self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_006(self): """ Test comparison of two differing objects, module differs. 
""" action1 = ExtendedAction("one", "MODULE", "three", 4) action2 = ExtendedAction("one", "two", "three", 4) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_007(self): """ Test comparison of two differing objects, function differs (one None). """ action1 = ExtendedAction(function="func1") action2 = ExtendedAction() self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_008(self): """ Test comparison of two differing objects, function differs. """ action1 = ExtendedAction("one", "two", "func1", 4) action2 = ExtendedAction("one", "two", "func2", 4) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_009(self): """ Test comparison of two differing objects, index differs (one None). """ action1 = ExtendedAction() action2 = ExtendedAction(index=42) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_010(self): """ Test comparison of two differing objects, index differs. 
""" action1 = ExtendedAction("one", "two", "three", 99) action2 = ExtendedAction("one", "two", "three", 12) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(not action1 < action2) self.failUnless(not action1 <= action2) self.failUnless(action1 > action2) self.failUnless(action1 >= action2) self.failUnless(action1 != action2) def testComparison_011(self): """ Test comparison of two differing objects, dependencies differs (one None). """ action1 = ExtendedAction() action2 = ExtendedAction(dependencies=ActionDependencies()) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) def testComparison_012(self): """ Test comparison of two differing objects, dependencies differs. """ action1 = ExtendedAction("one", "two", "three", 99, ActionDependencies(beforeList=[])) action2 = ExtendedAction("one", "two", "three", 99, ActionDependencies(beforeList=["ken", ])) self.failIfEqual(action1, action2) self.failUnless(not action1 == action2) self.failUnless(action1 < action2) self.failUnless(action1 <= action2) self.failUnless(not action1 > action2) self.failUnless(not action1 >= action2) self.failUnless(action1 != action2) ############################ # TestCommandOverride class ############################ class TestCommandOverride(unittest.TestCase): """Tests for the CommandOverride class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. 
        bad variable names).
        """
        obj = CommandOverride()
        obj.__repr__()
        obj.__str__()

    ##################################
    # Test constructor and attributes
    ##################################

    def testConstructor_001(self):
        """
        Test constructor with no values filled in.
        """
        override = CommandOverride()
        self.failUnlessEqual(None, override.command)
        self.failUnlessEqual(None, override.absolutePath)

    def testConstructor_002(self):
        """
        Test constructor with all values filled in, with valid values.
        """
        override = CommandOverride(command="command", absolutePath="/path/to/something")
        self.failUnlessEqual("command", override.command)
        self.failUnlessEqual("/path/to/something", override.absolutePath)

    def testConstructor_003(self):
        """
        Test assignment of command attribute, None value.
        """
        override = CommandOverride(command="command")
        self.failUnlessEqual("command", override.command)
        override.command = None
        self.failUnlessEqual(None, override.command)

    def testConstructor_004(self):
        """
        Test assignment of command attribute, valid value.
        """
        override = CommandOverride()
        self.failUnlessEqual(None, override.command)
        override.command = "command"
        self.failUnlessEqual("command", override.command)

    def testConstructor_005(self):
        """
        Test assignment of command attribute, invalid value.
        """
        override = CommandOverride()
        self.failUnlessEqual(None, override.command)
        self.failUnlessAssignRaises(ValueError, override, "command", "")
        self.failUnlessEqual(None, override.command)

    def testConstructor_006(self):
        """
        Test assignment of absolutePath attribute, None value.
        """
        override = CommandOverride(absolutePath="/path/to/something")
        self.failUnlessEqual("/path/to/something", override.absolutePath)
        override.absolutePath = None
        self.failUnlessEqual(None, override.absolutePath)

    def testConstructor_007(self):
        """
        Test assignment of absolutePath attribute, valid value.
""" override = CommandOverride() self.failUnlessEqual(None, override.absolutePath) override.absolutePath = "/path/to/something" self.failUnlessEqual("/path/to/something", override.absolutePath) def testConstructor_008(self): """ Test assignment of absolutePath attribute, invalid value. """ override = CommandOverride() override.command = None self.failUnlessAssignRaises(ValueError, override, "absolutePath", "path/to/something/relative") override.command = None self.failUnlessAssignRaises(ValueError, override, "absolutePath", "") override.command = None ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ override1 = CommandOverride() override2 = CommandOverride() self.failUnlessEqual(override1, override2) self.failUnless(override1 == override2) self.failUnless(not override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(override1 >= override2) self.failUnless(not override1 != override2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ override1 = CommandOverride(command="command", absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath="/path/to/something") self.failUnlessEqual(override1, override2) self.failUnless(override1 == override2) self.failUnless(not override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(override1 >= override2) self.failUnless(not override1 != override2) def testComparison_003(self): """ Test comparison of differing objects, command differs (one None). 
""" override1 = CommandOverride(command=None, absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath="/path/to/something") self.failUnless(not override1 == override2) self.failUnless(override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(not override1 >= override2) self.failUnless(override1 != override2) def testComparison_004(self): """ Test comparison of differing objects, command differs. """ override1 = CommandOverride(command="command2", absolutePath="/path/to/something") override2 = CommandOverride(command="command1", absolutePath="/path/to/something") self.failUnless(not override1 == override2) self.failUnless(not override1 < override2) self.failUnless(not override1 <= override2) self.failUnless(override1 > override2) self.failUnless(override1 >= override2) self.failUnless(override1 != override2) def testComparison_005(self): """ Test comparison of differing objects, absolutePath differs (one None). """ override1 = CommandOverride(command="command", absolutePath="/path/to/something") override2 = CommandOverride(command="command", absolutePath=None) self.failUnless(not override1 == override2) self.failUnless(not override1 < override2) self.failUnless(not override1 <= override2) self.failUnless(override1 > override2) self.failUnless(override1 >= override2) self.failUnless(override1 != override2) def testComparison_006(self): """ Test comparison of differing objects, absolutePath differs. 
""" override1 = CommandOverride(command="command", absolutePath="/path/to/something1") override2 = CommandOverride(command="command", absolutePath="/path/to/something2") self.failUnless(not override1 == override2) self.failUnless(override1 < override2) self.failUnless(override1 <= override2) self.failUnless(not override1 > override2) self.failUnless(not override1 >= override2) self.failUnless(override1 != override2) ######################## # TestCollectFile class ######################## class TestCollectFile(unittest.TestCase): """Tests for the CollectFile class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectFile() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) self.failUnlessEqual(None, collectFile.collectMode) self.failUnlessEqual(None, collectFile.archiveMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ collectFile = CollectFile("/etc/whatever", "incr", "tar") self.failUnlessEqual("/etc/whatever", collectFile.absolutePath) self.failUnlessEqual("incr", collectFile.collectMode) self.failUnlessEqual("tar", collectFile.archiveMode) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. 
""" collectFile = CollectFile(absolutePath="/whatever") self.failUnlessEqual("/whatever", collectFile.absolutePath) collectFile.absolutePath = None self.failUnlessEqual(None, collectFile.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) collectFile.absolutePath = "/etc/whatever" self.failUnlessEqual("/etc/whatever", collectFile.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) self.failUnlessAssignRaises(ValueError, collectFile, "absolutePath", "") self.failUnlessEqual(None, collectFile.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.absolutePath) self.failUnlessAssignRaises(ValueError, collectFile, "absolutePath", "whatever") self.failUnlessEqual(None, collectFile.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ collectFile = CollectFile(collectMode="incr") self.failUnlessEqual("incr", collectFile.collectMode) collectFile.collectMode = None self.failUnlessEqual(None, collectFile.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.collectMode) collectFile.collectMode = "daily" self.failUnlessEqual("daily", collectFile.collectMode) collectFile.collectMode = "weekly" self.failUnlessEqual("weekly", collectFile.collectMode) collectFile.collectMode = "incr" self.failUnlessEqual("incr", collectFile.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). 
""" collectFile = CollectFile() self.failUnlessEqual(None, collectFile.collectMode) self.failUnlessAssignRaises(ValueError, collectFile, "collectMode", "") self.failUnlessEqual(None, collectFile.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.collectMode) self.failUnlessAssignRaises(ValueError, collectFile, "collectMode", "bogus") self.failUnlessEqual(None, collectFile.collectMode) def testConstructor_011(self): """ Test assignment of archiveMode attribute, None value. """ collectFile = CollectFile(archiveMode="tar") self.failUnlessEqual("tar", collectFile.archiveMode) collectFile.archiveMode = None self.failUnlessEqual(None, collectFile.archiveMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, valid value. """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.archiveMode) collectFile.archiveMode = "tar" self.failUnlessEqual("tar", collectFile.archiveMode) collectFile.archiveMode = "targz" self.failUnlessEqual("targz", collectFile.archiveMode) collectFile.archiveMode = "tarbz2" self.failUnlessEqual("tarbz2", collectFile.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collectFile = CollectFile() self.failUnlessEqual(None, collectFile.archiveMode) self.failUnlessAssignRaises(ValueError, collectFile, "archiveMode", "") self.failUnlessEqual(None, collectFile.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (not in list). 
""" collectFile = CollectFile() self.failUnlessEqual(None, collectFile.archiveMode) self.failUnlessAssignRaises(ValueError, collectFile, "archiveMode", "bogus") self.failUnlessEqual(None, collectFile.archiveMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ collectFile1 = CollectFile() collectFile2 = CollectFile() self.failUnlessEqual(collectFile1, collectFile2) self.failUnless(collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(not collectFile1 != collectFile2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/etc/whatever", "incr", "tar") self.failUnless(collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(not collectFile1 != collectFile2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ collectFile1 = CollectFile() collectFile2 = CollectFile(absolutePath="/whatever") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. 
""" collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/stuff", "incr", "tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ collectFile1 = CollectFile() collectFile2 = CollectFile(collectMode="incr") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ collectFile1 = CollectFile("/etc/whatever", "incr", "tar") collectFile2 = CollectFile("/etc/whatever", "daily", "tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(not collectFile1 <= collectFile2) self.failUnless(collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_007(self): """ Test comparison of two differing objects, archiveMode differs (one None). 
""" collectFile1 = CollectFile() collectFile2 = CollectFile(archiveMode="tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(collectFile1 < collectFile2) self.failUnless(collectFile1 <= collectFile2) self.failUnless(not collectFile1 > collectFile2) self.failUnless(not collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs. """ collectFile1 = CollectFile("/etc/whatever", "incr", "targz") collectFile2 = CollectFile("/etc/whatever", "incr", "tar") self.failIfEqual(collectFile1, collectFile2) self.failUnless(not collectFile1 == collectFile2) self.failUnless(not collectFile1 < collectFile2) self.failUnless(not collectFile1 <= collectFile2) self.failUnless(collectFile1 > collectFile2) self.failUnless(collectFile1 >= collectFile2) self.failUnless(collectFile1 != collectFile2) ####################### # TestCollectDir class ####################### class TestCollectDir(unittest.TestCase): """Tests for the CollectDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) self.failUnlessEqual(None, collectDir.collectMode) self.failUnlessEqual(None, collectDir.archiveMode) self.failUnlessEqual(None, collectDir.ignoreFile) self.failUnlessEqual(None, collectDir.linkDepth) self.failUnlessEqual(False, collectDir.dereference) self.failUnlessEqual(None, collectDir.recursionLevel) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessEqual(None, collectDir.relativeExcludePaths) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ collectDir = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 2, True, 6) self.failUnlessEqual("/etc/whatever", collectDir.absolutePath) self.failUnlessEqual("incr", collectDir.collectMode) self.failUnlessEqual("tar", collectDir.archiveMode) self.failUnlessEqual(".ignore", collectDir.ignoreFile) self.failUnlessEqual(2, collectDir.linkDepth) self.failUnlessEqual(True, collectDir.dereference) self.failUnlessEqual(6, collectDir.recursionLevel) self.failUnlessEqual([], collectDir.absoluteExcludePaths) self.failUnlessEqual([], collectDir.relativeExcludePaths) self.failUnlessEqual([], collectDir.excludePatterns) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. """ collectDir = CollectDir(absolutePath="/whatever") self.failUnlessEqual("/whatever", collectDir.absolutePath) collectDir.absolutePath = None self.failUnlessEqual(None, collectDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) collectDir.absolutePath = "/etc/whatever" self.failUnlessEqual("/etc/whatever", collectDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) self.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", "") self.failUnlessEqual(None, collectDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absolutePath) self.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", "whatever") self.failUnlessEqual(None, collectDir.absolutePath) def testConstructor_007(self): """ Test assignment of collectMode attribute, None value. """ collectDir = CollectDir(collectMode="incr") self.failUnlessEqual("incr", collectDir.collectMode) collectDir.collectMode = None self.failUnlessEqual(None, collectDir.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.collectMode) collectDir.collectMode = "daily" self.failUnlessEqual("daily", collectDir.collectMode) collectDir.collectMode = "weekly" self.failUnlessEqual("weekly", collectDir.collectMode) collectDir.collectMode = "incr" self.failUnlessEqual("incr", collectDir.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, invalid value (empty). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.collectMode) self.failUnlessAssignRaises(ValueError, collectDir, "collectMode", "") self.failUnlessEqual(None, collectDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.collectMode) self.failUnlessAssignRaises(ValueError, collectDir, "collectMode", "bogus") self.failUnlessEqual(None, collectDir.collectMode) def testConstructor_011(self): """ Test assignment of archiveMode attribute, None value. 
""" collectDir = CollectDir(archiveMode="tar") self.failUnlessEqual("tar", collectDir.archiveMode) collectDir.archiveMode = None self.failUnlessEqual(None, collectDir.archiveMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.archiveMode) collectDir.archiveMode = "tar" self.failUnlessEqual("tar", collectDir.archiveMode) collectDir.archiveMode = "targz" self.failUnlessEqual("targz", collectDir.archiveMode) collectDir.archiveMode = "tarbz2" self.failUnlessEqual("tarbz2", collectDir.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.archiveMode) self.failUnlessAssignRaises(ValueError, collectDir, "archiveMode", "") self.failUnlessEqual(None, collectDir.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.archiveMode) self.failUnlessAssignRaises(ValueError, collectDir, "archiveMode", "bogus") self.failUnlessEqual(None, collectDir.archiveMode) def testConstructor_015(self): """ Test assignment of ignoreFile attribute, None value. """ collectDir = CollectDir(ignoreFile="ignore") self.failUnlessEqual("ignore", collectDir.ignoreFile) collectDir.ignoreFile = None self.failUnlessEqual(None, collectDir.ignoreFile) def testConstructor_016(self): """ Test assignment of ignoreFile attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.ignoreFile) collectDir.ignoreFile = "ignorefile" self.failUnlessEqual("ignorefile", collectDir.ignoreFile) def testConstructor_017(self): """ Test assignment of ignoreFile attribute, invalid value (empty). 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.ignoreFile) self.failUnlessAssignRaises(ValueError, collectDir, "ignoreFile", "") self.failUnlessEqual(None, collectDir.ignoreFile) def testConstructor_018(self): """ Test assignment of absoluteExcludePaths attribute, None value. """ collectDir = CollectDir(absoluteExcludePaths=[]) self.failUnlessEqual([], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = None self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_019(self): """ Test assignment of absoluteExcludePaths attribute, [] value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = [] self.failUnlessEqual([], collectDir.absoluteExcludePaths) def testConstructor_020(self): """ Test assignment of absoluteExcludePaths attribute, single valid entry. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = ["/whatever", ] self.failUnlessEqual(["/whatever", ], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths.append("/stuff") self.failUnlessEqual(["/whatever", "/stuff", ], collectDir.absoluteExcludePaths) def testConstructor_021(self): """ Test assignment of absoluteExcludePaths attribute, multiple valid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths = ["/whatever", "/stuff", ] self.failUnlessEqual(["/whatever", "/stuff", ], collectDir.absoluteExcludePaths) collectDir.absoluteExcludePaths.append("/etc/X11") self.failUnlessEqual(["/whatever", "/stuff", "/etc/X11", ], collectDir.absoluteExcludePaths) def testConstructor_022(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (empty). 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["", ]) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_023(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (not absolute). """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["notabsolute", ]) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_024(self): """ Test assignment of absoluteExcludePaths attribute, mixed valid and invalid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collectDir, "absoluteExcludePaths", ["/good", "bad", "/alsogood", ]) self.failUnlessEqual(None, collectDir.absoluteExcludePaths) def testConstructor_025(self): """ Test assignment of relativeExcludePaths attribute, None value. """ collectDir = CollectDir(relativeExcludePaths=[]) self.failUnlessEqual([], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = None self.failUnlessEqual(None, collectDir.relativeExcludePaths) def testConstructor_026(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = [] self.failUnlessEqual([], collectDir.relativeExcludePaths) def testConstructor_027(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = ["stuff", ] self.failUnlessEqual(["stuff", ], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths.insert(0, "bogus") self.failUnlessEqual(["bogus", "stuff", ], collectDir.relativeExcludePaths) def testConstructor_028(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.relativeExcludePaths) collectDir.relativeExcludePaths = ["bogus", "stuff", ] self.failUnlessEqual(["bogus", "stuff", ], collectDir.relativeExcludePaths) collectDir.relativeExcludePaths.append("more") self.failUnlessEqual(["bogus", "stuff", "more", ], collectDir.relativeExcludePaths) def testConstructor_029(self): """ Test assignment of excludePatterns attribute, None value. """ collectDir = CollectDir(excludePatterns=[]) self.failUnlessEqual([], collectDir.excludePatterns) collectDir.excludePatterns = None self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_030(self): """ Test assignment of excludePatterns attribute, [] value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = [] self.failUnlessEqual([], collectDir.excludePatterns) def testConstructor_031(self): """ Test assignment of excludePatterns attribute, single valid entry. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = ["valid", ] self.failUnlessEqual(["valid", ], collectDir.excludePatterns) collectDir.excludePatterns.append("more") self.failUnlessEqual(["valid", "more", ], collectDir.excludePatterns) def testConstructor_032(self): """ Test assignment of excludePatterns attribute, multiple valid entries. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) collectDir.excludePatterns = ["valid", "more", ] self.failUnlessEqual(["valid", "more", ], collectDir.excludePatterns) collectDir.excludePatterns.insert(1, "bogus") self.failUnlessEqual(["valid", "bogus", "more", ], collectDir.excludePatterns) def testConstructor_033(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_034(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", "*", ]) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_035(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.excludePatterns) self.failUnlessAssignRaises(ValueError, collectDir, "excludePatterns", ["*.jpg", "valid", ]) self.failUnlessEqual(None, collectDir.excludePatterns) def testConstructor_036(self): """ Test assignment of linkDepth attribute, None value. """ collectDir = CollectDir(linkDepth=1) self.failUnlessEqual(1, collectDir.linkDepth) collectDir.linkDepth = None self.failUnlessEqual(None, collectDir.linkDepth) def testConstructor_037(self): """ Test assignment of linkDepth attribute, valid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.linkDepth) collectDir.linkDepth = 1 self.failUnlessEqual(1, collectDir.linkDepth) def testConstructor_038(self): """ Test assignment of linkDepth attribute, invalid value. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.linkDepth) self.failUnlessAssignRaises(ValueError, collectDir, "linkDepth", "ken") self.failUnlessEqual(None, collectDir.linkDepth) def testConstructor_039(self): """ Test assignment of dereference attribute, None value. """ collectDir = CollectDir(dereference=True) self.failUnlessEqual(True, collectDir.dereference) collectDir.dereference = None self.failUnlessEqual(False, collectDir.dereference) def testConstructor_040(self): """ Test assignment of dereference attribute, valid value (real boolean). """ collectDir = CollectDir() self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = True self.failUnlessEqual(True, collectDir.dereference) collectDir.dereference = False self.failUnlessEqual(False, collectDir.dereference) #pylint: disable=R0204 def testConstructor_041(self): """ Test assignment of dereference attribute, valid value (expression). """ collectDir = CollectDir() self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = 0 self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = [] self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = None self.failUnlessEqual(False, collectDir.dereference) collectDir.dereference = ['a'] self.failUnlessEqual(True, collectDir.dereference) collectDir.dereference = 3 self.failUnlessEqual(True, collectDir.dereference) def testConstructor_042(self): """ Test assignment of recursionLevel attribute, None value. """ collectDir = CollectDir(recursionLevel=1) self.failUnlessEqual(1, collectDir.recursionLevel) collectDir.recursionLevel = None self.failUnlessEqual(None, collectDir.recursionLevel) def testConstructor_043(self): """ Test assignment of recursionLevel attribute, valid value. 
""" collectDir = CollectDir() self.failUnlessEqual(None, collectDir.recursionLevel) collectDir.recursionLevel = 1 self.failUnlessEqual(1, collectDir.recursionLevel) def testConstructor_044(self): """ Test assignment of recursionLevel attribute, invalid value. """ collectDir = CollectDir() self.failUnlessEqual(None, collectDir.recursionLevel) self.failUnlessAssignRaises(ValueError, collectDir, "recursionLevel", "ken") self.failUnlessEqual(None, collectDir.recursionLevel) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ collectDir1 = CollectDir() collectDir2 = CollectDir() self.failUnlessEqual(collectDir1, collectDir2) self.failUnless(collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(not collectDir1 != collectDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failUnless(collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(not collectDir1 != collectDir2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/one", ], ["two", ], ["three", ], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/one", ], ["two", ], ["three", ], 1, True, 6) self.failUnless(collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(not collectDir1 != collectDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(absolutePath="/whatever") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_005(self): """ Test comparison of two differing objects, absolutePath differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/stuff", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(collectMode="incr") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "daily", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(archiveMode="tar") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_009(self): """ Test comparison of two differing objects, archiveMode differs. 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "targz", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_010(self): """ Test comparison of two differing objects, ignoreFile differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(ignoreFile="ignore") self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_011(self): """ Test comparison of two differing objects, ignoreFile differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_012(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one empty). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(absoluteExcludePaths=[]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_013(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one not empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(absoluteExcludePaths=["/whatever", ]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_014(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one empty, one not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/whatever", ], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_015(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (both not empty). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/stuff", ], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", ["/stuff", "/something", ], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) # note: different than standard due to unsorted list self.failUnless(not collectDir1 <= collectDir2) # note: different than standard due to unsorted list self.failUnless(collectDir1 > collectDir2) # note: different than standard due to unsorted list self.failUnless(collectDir1 >= collectDir2) # note: different than standard due to unsorted list self.failUnless(collectDir1 != collectDir2) def testComparison_016(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(relativeExcludePaths=[]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_017(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one None, one not empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(relativeExcludePaths=["stuff", "other", ]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_018(self): """ Test comparison of two differing objects, relativeExcludePaths differs (one empty, one not empty). 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["one", ], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_019(self): """ Test comparison of two differing objects, relativeExcludePaths differs (both not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["one", ], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], ["two", ], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_020(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ collectDir1 = CollectDir() collectDir2 = CollectDir(excludePatterns=[]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_021(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(excludePatterns=["one", "two", "three", ]) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_022(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["pattern", ], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_023(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["p1", ], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", ".ignore", [], [], ["p2", ], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_024(self): """ Test comparison of two differing objects, linkDepth differs (one None). 
""" collectDir1 = CollectDir() collectDir2 = CollectDir(linkDepth=1) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_025(self): """ Test comparison of two differing objects, linkDepth differs. """ collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 2, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_026(self): """ Test comparison of two differing objects, dereference differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(dereference=True) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_027(self): """ Test comparison of two differing objects, dereference differs. 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, False, 6) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_028(self): """ Test comparison of two differing objects, recursionLevel differs (one None). """ collectDir1 = CollectDir() collectDir2 = CollectDir(recursionLevel=1) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(collectDir1 < collectDir2) self.failUnless(collectDir1 <= collectDir2) self.failUnless(not collectDir1 > collectDir2) self.failUnless(not collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) def testComparison_029(self): """ Test comparison of two differing objects, recursionLevel differs. 
""" collectDir1 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 6) collectDir2 = CollectDir("/etc/whatever", "incr", "tar", "ignore", [], [], [], 1, True, 5) self.failIfEqual(collectDir1, collectDir2) self.failUnless(not collectDir1 == collectDir2) self.failUnless(not collectDir1 < collectDir2) self.failUnless(not collectDir1 <= collectDir2) self.failUnless(collectDir1 > collectDir2) self.failUnless(collectDir1 >= collectDir2) self.failUnless(collectDir1 != collectDir2) ##################### # TestPurgeDir class ##################### class TestPurgeDir(unittest.TestCase): """Tests for the PurgeDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PurgeDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ purgeDir = PurgeDir("/whatever", 0) self.failUnlessEqual("/whatever", purgeDir.absolutePath) self.failUnlessEqual(0, purgeDir.retainDays) def testConstructor_003(self): """ Test assignment of absolutePath attribute, None value. 
""" purgeDir = PurgeDir(absolutePath="/whatever") self.failUnlessEqual("/whatever", purgeDir.absolutePath) purgeDir.absolutePath = None self.failUnlessEqual(None, purgeDir.absolutePath) def testConstructor_004(self): """ Test assignment of absolutePath attribute, valid value. """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) purgeDir.absolutePath = "/etc/whatever" self.failUnlessEqual("/etc/whatever", purgeDir.absolutePath) def testConstructor_005(self): """ Test assignment of absolutePath attribute, invalid value (empty). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) self.failUnlessAssignRaises(ValueError, purgeDir, "absolutePath", "") self.failUnlessEqual(None, purgeDir.absolutePath) def testConstructor_006(self): """ Test assignment of absolutePath attribute, invalid value (non-absolute). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.absolutePath) self.failUnlessAssignRaises(ValueError, purgeDir, "absolutePath", "bogus") self.failUnlessEqual(None, purgeDir.absolutePath) def testConstructor_007(self): """ Test assignment of retainDays attribute, None value. """ purgeDir = PurgeDir(retainDays=12) self.failUnlessEqual(12, purgeDir.retainDays) purgeDir.retainDays = None self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_008(self): """ Test assignment of retainDays attribute, valid value (integer). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) purgeDir.retainDays = 12 self.failUnlessEqual(12, purgeDir.retainDays) def testConstructor_009(self): """ Test assignment of retainDays attribute, valid value (string representing integer). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) purgeDir.retainDays = "12" self.failUnlessEqual(12, purgeDir.retainDays) def testConstructor_010(self): """ Test assignment of retainDays attribute, invalid value (empty string). 
""" purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", "") self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_011(self): """ Test assignment of retainDays attribute, invalid value (non-integer, like a list). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", []) self.failUnlessEqual(None, purgeDir.retainDays) def testConstructor_012(self): """ Test assignment of retainDays attribute, invalid value (string representing non-integer). """ purgeDir = PurgeDir() self.failUnlessEqual(None, purgeDir.retainDays) self.failUnlessAssignRaises(ValueError, purgeDir, "retainDays", "blech") self.failUnlessEqual(None, purgeDir.retainDays) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ purgeDir1 = PurgeDir() purgeDir2 = PurgeDir() self.failUnlessEqual(purgeDir1, purgeDir2) self.failUnless(purgeDir1 == purgeDir2) self.failUnless(not purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(purgeDir1 >= purgeDir2) self.failUnless(not purgeDir1 != purgeDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ purgeDir1 = PurgeDir("/etc/whatever", 12) purgeDir2 = PurgeDir("/etc/whatever", 12) self.failUnless(purgeDir1 == purgeDir2) self.failUnless(not purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(purgeDir1 >= purgeDir2) self.failUnless(not purgeDir1 != purgeDir2) def testComparison_003(self): """ Test comparison of two differing objects, absolutePath differs (one None). 
""" purgeDir1 = PurgeDir() purgeDir2 = PurgeDir(absolutePath="/whatever") self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(not purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) def testComparison_004(self): """ Test comparison of two differing objects, absolutePath differs. """ purgeDir1 = PurgeDir("/etc/blech", 12) purgeDir2 = PurgeDir("/etc/whatever", 12) self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(not purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) def testComparison_005(self): """ Test comparison of two differing objects, retainDays differs (one None). """ purgeDir1 = PurgeDir() purgeDir2 = PurgeDir(retainDays=365) self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(purgeDir1 < purgeDir2) self.failUnless(purgeDir1 <= purgeDir2) self.failUnless(not purgeDir1 > purgeDir2) self.failUnless(not purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) def testComparison_006(self): """ Test comparison of two differing objects, retainDays differs. 
""" purgeDir1 = PurgeDir("/etc/whatever", 365) purgeDir2 = PurgeDir("/etc/whatever", 12) self.failIfEqual(purgeDir1, purgeDir2) self.failUnless(not purgeDir1 == purgeDir2) self.failUnless(not purgeDir1 < purgeDir2) self.failUnless(not purgeDir1 <= purgeDir2) self.failUnless(purgeDir1 > purgeDir2) self.failUnless(purgeDir1 >= purgeDir2) self.failUnless(purgeDir1 != purgeDir2) ###################### # TestLocalPeer class ###################### class TestLocalPeer(unittest.TestCase): """Tests for the LocalPeer class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalPeer() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.name) self.failUnlessEqual(None, localPeer.collectDir) self.failUnlessEqual(None, localPeer.ignoreFailureMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ localPeer = LocalPeer("myname", "/whatever", "all") self.failUnlessEqual("myname", localPeer.name) self.failUnlessEqual("/whatever", localPeer.collectDir) self.failUnlessEqual("all", localPeer.ignoreFailureMode) def testConstructor_003(self): """ Test assignment of name attribute, None value. 
""" localPeer = LocalPeer(name="myname") self.failUnlessEqual("myname", localPeer.name) localPeer.name = None self.failUnlessEqual(None, localPeer.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.name) localPeer.name = "myname" self.failUnlessEqual("myname", localPeer.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.name) self.failUnlessAssignRaises(ValueError, localPeer, "name", "") self.failUnlessEqual(None, localPeer.name) def testConstructor_006(self): """ Test assignment of collectDir attribute, None value. """ localPeer = LocalPeer(collectDir="/whatever") self.failUnlessEqual("/whatever", localPeer.collectDir) localPeer.collectDir = None self.failUnlessEqual(None, localPeer.collectDir) def testConstructor_007(self): """ Test assignment of collectDir attribute, valid value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.collectDir) localPeer.collectDir = "/etc/stuff" self.failUnlessEqual("/etc/stuff", localPeer.collectDir) def testConstructor_008(self): """ Test assignment of collectDir attribute, invalid value (empty). """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.collectDir) self.failUnlessAssignRaises(ValueError, localPeer, "collectDir", "") self.failUnlessEqual(None, localPeer.collectDir) def testConstructor_009(self): """ Test assignment of collectDir attribute, invalid value (non-absolute). """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.collectDir) self.failUnlessAssignRaises(ValueError, localPeer, "collectDir", "bogus") self.failUnlessEqual(None, localPeer.collectDir) def testConstructor_010(self): """ Test assignment of ignoreFailureMode attribute, valid values. 
""" localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "none" self.failUnlessEqual("none", localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "all" self.failUnlessEqual("all", localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", localPeer.ignoreFailureMode) def testConstructor_011(self): """ Test assignment of ignoreFailureMode attribute, invalid value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, localPeer, "ignoreFailureMode", "bogus") def testConstructor_012(self): """ Test assignment of ignoreFailureMode attribute, None value. """ localPeer = LocalPeer() self.failUnlessEqual(None, localPeer.ignoreFailureMode) localPeer.ignoreFailureMode = None self.failUnlessEqual(None, localPeer.ignoreFailureMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ localPeer1 = LocalPeer() localPeer2 = LocalPeer() self.failUnlessEqual(localPeer1, localPeer2) self.failUnless(localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(not localPeer1 != localPeer2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" localPeer1 = LocalPeer("myname", "/etc/stuff", "all") localPeer2 = LocalPeer("myname", "/etc/stuff", "all") self.failUnless(localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(not localPeer1 != localPeer2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ localPeer1 = LocalPeer() localPeer2 = LocalPeer(name="blech") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. """ localPeer1 = LocalPeer("name", "/etc/stuff", "all") localPeer2 = LocalPeer("name", "/etc/whatever", "all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_005(self): """ Test comparison of two differing objects, collectDir differs (one None). """ localPeer1 = LocalPeer() localPeer2 = LocalPeer(collectDir="/etc/whatever") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_006(self): """ Test comparison of two differing objects, collectDir differs. 
""" localPeer1 = LocalPeer("name2", "/etc/stuff", "all") localPeer2 = LocalPeer("name1", "/etc/stuff", "all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(not localPeer1 <= localPeer2) self.failUnless(localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_008(self): """ Test comparison of two differing objects, ignoreFailureMode differs (one None). """ localPeer1 = LocalPeer() localPeer2 = LocalPeer(ignoreFailureMode="all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(localPeer1 < localPeer2) self.failUnless(localPeer1 <= localPeer2) self.failUnless(not localPeer1 > localPeer2) self.failUnless(not localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) def testComparison_009(self): """ Test comparison of two differing objects, collectDir differs. 
""" localPeer1 = LocalPeer("name1", "/etc/stuff", "none") localPeer2 = LocalPeer("name1", "/etc/stuff", "all") self.failIfEqual(localPeer1, localPeer2) self.failUnless(not localPeer1 == localPeer2) self.failUnless(not localPeer1 < localPeer2) self.failUnless(not localPeer1 <= localPeer2) self.failUnless(localPeer1 > localPeer2) self.failUnless(localPeer1 >= localPeer2) self.failUnless(localPeer1 != localPeer2) ####################### # TestRemotePeer class ####################### class TestRemotePeer(unittest.TestCase): """Tests for the RemotePeer class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = RemotePeer() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.name) self.failUnlessEqual(None, remotePeer.collectDir) self.failUnlessEqual(None, remotePeer.remoteUser) self.failUnlessEqual(None, remotePeer.rcpCommand) self.failUnlessEqual(None, remotePeer.rshCommand) self.failUnlessEqual(None, remotePeer.cbackCommand) self.failUnlessEqual(False, remotePeer.managed) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessEqual(None, remotePeer.ignoreFailureMode) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" remotePeer = RemotePeer("myname", "/stuff", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failUnlessEqual("myname", remotePeer.name) self.failUnlessEqual("/stuff", remotePeer.collectDir) self.failUnlessEqual("backup", remotePeer.remoteUser) self.failUnlessEqual("scp -1 -B", remotePeer.rcpCommand) self.failUnlessEqual("ssh", remotePeer.rshCommand) self.failUnlessEqual("cback", remotePeer.cbackCommand) self.failUnlessEqual(True, remotePeer.managed) self.failUnlessEqual(["collect", ], remotePeer.managedActions) self.failUnlessEqual("all", remotePeer.ignoreFailureMode) def testConstructor_003(self): """ Test assignment of name attribute, None value. """ remotePeer = RemotePeer(name="myname") self.failUnlessEqual("myname", remotePeer.name) remotePeer.name = None self.failUnlessEqual(None, remotePeer.name) def testConstructor_004(self): """ Test assignment of name attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.name) remotePeer.name = "namename" self.failUnlessEqual("namename", remotePeer.name) def testConstructor_005(self): """ Test assignment of name attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.name) self.failUnlessAssignRaises(ValueError, remotePeer, "name", "") self.failUnlessEqual(None, remotePeer.name) def testConstructor_006(self): """ Test assignment of collectDir attribute, None value. """ remotePeer = RemotePeer(collectDir="/etc/stuff") self.failUnlessEqual("/etc/stuff", remotePeer.collectDir) remotePeer.collectDir = None self.failUnlessEqual(None, remotePeer.collectDir) def testConstructor_007(self): """ Test assignment of collectDir attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.collectDir) remotePeer.collectDir = "/tmp" self.failUnlessEqual("/tmp", remotePeer.collectDir) def testConstructor_008(self): """ Test assignment of collectDir attribute, invalid value (empty). 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.collectDir) self.failUnlessAssignRaises(ValueError, remotePeer, "collectDir", "") self.failUnlessEqual(None, remotePeer.collectDir) def testConstructor_009(self): """ Test assignment of collectDir attribute, invalid value (non-absolute). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.collectDir) self.failUnlessAssignRaises(ValueError, remotePeer, "collectDir", "bogus/stuff/there") self.failUnlessEqual(None, remotePeer.collectDir) def testConstructor_010(self): """ Test assignment of remoteUser attribute, None value. """ remotePeer = RemotePeer(remoteUser="spot") self.failUnlessEqual("spot", remotePeer.remoteUser) remotePeer.remoteUser = None self.failUnlessEqual(None, remotePeer.remoteUser) def testConstructor_011(self): """ Test assignment of remoteUser attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.remoteUser) remotePeer.remoteUser = "spot" self.failUnlessEqual("spot", remotePeer.remoteUser) def testConstructor_012(self): """ Test assignment of remoteUser attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.remoteUser) self.failUnlessAssignRaises(ValueError, remotePeer, "remoteUser", "") self.failUnlessEqual(None, remotePeer.remoteUser) def testConstructor_013(self): """ Test assignment of rcpCommand attribute, None value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rcpCommand) remotePeer.rcpCommand = "scp" self.failUnlessEqual("scp", remotePeer.rcpCommand) def testConstructor_014(self): """ Test assignment of rcpCommand attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rcpCommand) remotePeer.rcpCommand = "scp" self.failUnlessEqual("scp", remotePeer.rcpCommand) def testConstructor_015(self): """ Test assignment of rcpCommand attribute, invalid value (empty). 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rcpCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "rcpCommand", "") self.failUnlessEqual(None, remotePeer.rcpCommand) def testConstructor_016(self): """ Test assignment of rshCommand attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rshCommand) remotePeer.rshCommand = "scp" self.failUnlessEqual("scp", remotePeer.rshCommand) def testConstructor_017(self): """ Test assignment of rshCommand attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.rshCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "rshCommand", "") self.failUnlessEqual(None, remotePeer.rshCommand) def testConstructor_018(self): """ Test assignment of cbackCommand attribute, valid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.cbackCommand) remotePeer.cbackCommand = "scp" self.failUnlessEqual("scp", remotePeer.cbackCommand) def testConstructor_019(self): """ Test assignment of cbackCommand attribute, invalid value (empty). """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.cbackCommand) self.failUnlessAssignRaises(ValueError, remotePeer, "cbackCommand", "") self.failUnlessEqual(None, remotePeer.cbackCommand) def testConstructor_021(self): """ Test assignment of managed attribute, None value. """ remotePeer = RemotePeer(managed=True) self.failUnlessEqual(True, remotePeer.managed) remotePeer.managed = None self.failUnlessEqual(False, remotePeer.managed) def testConstructor_022(self): """ Test assignment of managed attribute, valid value (real boolean). 
""" remotePeer = RemotePeer() self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = True self.failUnlessEqual(True, remotePeer.managed) remotePeer.managed = False self.failUnlessEqual(False, remotePeer.managed) #pylint: disable=R0204 def testConstructor_023(self): """ Test assignment of managed attribute, valid value (expression). """ remotePeer = RemotePeer() self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = 0 self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = [] self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = None self.failUnlessEqual(False, remotePeer.managed) remotePeer.managed = ['a'] self.failUnlessEqual(True, remotePeer.managed) remotePeer.managed = 3 self.failUnlessEqual(True, remotePeer.managed) def testConstructor_024(self): """ Test assignment of managedActions attribute, None value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) remotePeer.managedActions = None self.failUnlessEqual(None, remotePeer.managedActions) def testConstructor_025(self): """ Test assignment of managedActions attribute, empty list. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) remotePeer.managedActions = [] self.failUnlessEqual([], remotePeer.managedActions) def testConstructor_026(self): """ Test assignment of managedActions attribute, non-empty list, valid values. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) remotePeer.managedActions = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], remotePeer.managedActions) def testConstructor_027(self): """ Test assignment of managedActions attribute, non-empty list, invalid value. 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["KEN", ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["hello, world" ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["dash-word", ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["", ]) self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", [None, ]) self.failUnlessEqual(None, remotePeer.managedActions) def testConstructor_028(self): """ Test assignment of managedActions attribute, non-empty list, mixed values. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.managedActions) self.failUnlessAssignRaises(ValueError, remotePeer, "managedActions", ["ken", "dash-word", ]) def testConstructor_029(self): """ Test assignment of ignoreFailureMode attribute, valid values. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "none" self.failUnlessEqual("none", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "all" self.failUnlessEqual("all", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "daily" self.failUnlessEqual("daily", remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = "weekly" self.failUnlessEqual("weekly", remotePeer.ignoreFailureMode) def testConstructor_030(self): """ Test assignment of ignoreFailureMode attribute, invalid value. """ remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.ignoreFailureMode) self.failUnlessAssignRaises(ValueError, remotePeer, "ignoreFailureMode", "bogus") def testConstructor_031(self): """ Test assignment of ignoreFailureMode attribute, None value. 
""" remotePeer = RemotePeer() self.failUnlessEqual(None, remotePeer.ignoreFailureMode) remotePeer.ignoreFailureMode = None self.failUnlessEqual(None, remotePeer.ignoreFailureMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer() self.failUnlessEqual(remotePeer1, remotePeer2) self.failUnless(remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(not remotePeer1 != remotePeer2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failUnless(remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(not remotePeer1 != remotePeer2) def testComparison_003(self): """ Test comparison of two differing objects, name differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(name="name") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_004(self): """ Test comparison of two differing objects, name differs. 
""" remotePeer1 = RemotePeer("name1", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name2", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_005(self): """ Test comparison of two differing objects, collectDir differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(collectDir="/tmp") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_006(self): """ Test comparison of two differing objects, collectDir differs. """ remotePeer1 = RemotePeer("name", "/etc", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_007(self): """ Test comparison of two differing objects, remoteUser differs (one None). 
""" remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(remoteUser="spot") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_008(self): """ Test comparison of two differing objects, remoteUser differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "spot", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_009(self): """ Test comparison of two differing objects, rcpCommand differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(rcpCommand="scp") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_010(self): """ Test comparison of two differing objects, rcpCommand differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -2 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_011(self): """ Test comparison of two differing objects, rshCommand differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(rshCommand="ssh") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_012(self): """ Test comparison of two differing objects, rshCommand differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh2", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh1", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_013(self): """ Test comparison of two differing objects, cbackCommand differs (one None). 
""" remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(cbackCommand="cback") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_014(self): """ Test comparison of two differing objects, cbackCommand differs. """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback2", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback1", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_015(self): """ Test comparison of two differing objects, managed differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(managed=True) self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_016(self): """ Test comparison of two differing objects, managed differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", False, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_017(self): """ Test comparison of two differing objects, managedActions differs (one None, one empty). """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, None, "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_018(self): """ Test comparison of two differing objects, managedActions differs (one None, one not empty). 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, None, "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_019(self): """ Test comparison of two differing objects, managedActions differs (one empty, one not empty). """ remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [], "all" ) remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_020(self): """ Test comparison of two differing objects, managedActions differs (both not empty). 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "purge", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(not remotePeer1 < remotePeer2) self.failUnless(not remotePeer1 <= remotePeer2) self.failUnless(remotePeer1 > remotePeer2) self.failUnless(remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_021(self): """ Test comparison of two differing objects, ignoreFailureMode differs (one None). """ remotePeer1 = RemotePeer() remotePeer2 = RemotePeer(ignoreFailureMode="all") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) def testComparison_022(self): """ Test comparison of two differing objects, ignoreFailureMode differs. 
""" remotePeer1 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "all") remotePeer2 = RemotePeer("name", "/etc/stuff/tmp/X11", "backup", "scp -1 -B", "ssh", "cback", True, [ "collect", ], "none") self.failIfEqual(remotePeer1, remotePeer2) self.failUnless(not remotePeer1 == remotePeer2) self.failUnless(remotePeer1 < remotePeer2) self.failUnless(remotePeer1 <= remotePeer2) self.failUnless(not remotePeer1 > remotePeer2) self.failUnless(not remotePeer1 >= remotePeer2) self.failUnless(remotePeer1 != remotePeer2) ############################ # TestReferenceConfig class ############################ class TestReferenceConfig(unittest.TestCase): """Tests for the ReferenceConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ReferenceConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.author) self.failUnlessEqual(None, reference.revision) self.failUnlessEqual(None, reference.description) self.failUnlessEqual(None, reference.generator) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. 
""" reference = ReferenceConfig("one", "two", "three", "four") self.failUnlessEqual("one", reference.author) self.failUnlessEqual("two", reference.revision) self.failUnlessEqual("three", reference.description) self.failUnlessEqual("four", reference.generator) def testConstructor_003(self): """ Test assignment of author attribute, None value. """ reference = ReferenceConfig(author="one") self.failUnlessEqual("one", reference.author) reference.author = None self.failUnlessEqual(None, reference.author) def testConstructor_004(self): """ Test assignment of author attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.author) reference.author = "one" self.failUnlessEqual("one", reference.author) def testConstructor_005(self): """ Test assignment of author attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.author) reference.author = "" self.failUnlessEqual("", reference.author) def testConstructor_006(self): """ Test assignment of revision attribute, None value. """ reference = ReferenceConfig(revision="one") self.failUnlessEqual("one", reference.revision) reference.revision = None self.failUnlessEqual(None, reference.revision) def testConstructor_007(self): """ Test assignment of revision attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.revision) reference.revision = "one" self.failUnlessEqual("one", reference.revision) def testConstructor_008(self): """ Test assignment of revision attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.revision) reference.revision = "" self.failUnlessEqual("", reference.revision) def testConstructor_009(self): """ Test assignment of description attribute, None value. 
""" reference = ReferenceConfig(description="one") self.failUnlessEqual("one", reference.description) reference.description = None self.failUnlessEqual(None, reference.description) def testConstructor_010(self): """ Test assignment of description attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.description) reference.description = "one" self.failUnlessEqual("one", reference.description) def testConstructor_011(self): """ Test assignment of description attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.description) reference.description = "" self.failUnlessEqual("", reference.description) def testConstructor_012(self): """ Test assignment of generator attribute, None value. """ reference = ReferenceConfig(generator="one") self.failUnlessEqual("one", reference.generator) reference.generator = None self.failUnlessEqual(None, reference.generator) def testConstructor_013(self): """ Test assignment of generator attribute, valid value. """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.generator) reference.generator = "one" self.failUnlessEqual("one", reference.generator) def testConstructor_014(self): """ Test assignment of generator attribute, valid value (empty). """ reference = ReferenceConfig() self.failUnlessEqual(None, reference.generator) reference.generator = "" self.failUnlessEqual("", reference.generator) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" reference1 = ReferenceConfig() reference2 = ReferenceConfig() self.failUnlessEqual(reference1, reference2) self.failUnless(reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(not reference1 != reference2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.failUnless(reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(not reference1 != reference2) def testComparison_003(self): """ Test comparison of two differing objects, author differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(author="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_004(self): """ Test comparison of two differing objects, author differs (one empty). """ reference1 = ReferenceConfig("", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_005(self): """ Test comparison of two differing objects, author differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("author", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_006(self): """ Test comparison of two differing objects, revision differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(revision="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_007(self): """ Test comparison of two differing objects, revision differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_008(self): """ Test comparison of two differing objects, revision differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "revision", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_009(self): """ Test comparison of two differing objects, description differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(description="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_010(self): """ Test comparison of two differing objects, description differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(not reference1 < reference2) self.failUnless(not reference1 <= reference2) self.failUnless(reference1 > reference2) self.failUnless(reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_011(self): """ Test comparison of two differing objects, description differs. 
""" reference1 = ReferenceConfig("one", "two", "description", "four") reference2 = ReferenceConfig("one", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_012(self): """ Test comparison of two differing objects, generator differs (one None). """ reference1 = ReferenceConfig() reference2 = ReferenceConfig(generator="one") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_013(self): """ Test comparison of two differing objects, generator differs (one empty). """ reference1 = ReferenceConfig("one", "two", "three", "") reference2 = ReferenceConfig("one", "two", "three", "four") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) def testComparison_014(self): """ Test comparison of two differing objects, generator differs. 
""" reference1 = ReferenceConfig("one", "two", "three", "four") reference2 = ReferenceConfig("one", "two", "three", "generator") self.failIfEqual(reference1, reference2) self.failUnless(not reference1 == reference2) self.failUnless(reference1 < reference2) self.failUnless(reference1 <= reference2) self.failUnless(not reference1 > reference2) self.failUnless(not reference1 >= reference2) self.failUnless(reference1 != reference2) ############################# # TestExtensionsConfig class ############################# class TestExtensionsConfig(unittest.TestCase): """Tests for the ExtensionsConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = ExtensionsConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty list), positional arguments. 
""" extensions = ExtensionsConfig([], None) self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual([], extensions.actions) extensions = ExtensionsConfig([], "index") self.failUnlessEqual("index", extensions.orderMode) self.failUnlessEqual([], extensions.actions) extensions = ExtensionsConfig([], "dependency") self.failUnlessEqual("dependency", extensions.orderMode) self.failUnlessEqual([], extensions.actions) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty list), named arguments. """ extensions = ExtensionsConfig(orderMode=None, actions=[ExtendedAction(), ]) self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual([ExtendedAction(), ], extensions.actions) extensions = ExtensionsConfig(orderMode="index", actions=[ExtendedAction(), ]) self.failUnlessEqual("index", extensions.orderMode) self.failUnlessEqual([ExtendedAction(), ], extensions.actions) extensions = ExtensionsConfig(orderMode="dependency", actions=[ExtendedAction(), ]) self.failUnlessEqual("dependency", extensions.orderMode) self.failUnlessEqual([ExtendedAction(), ], extensions.actions) def testConstructor_004(self): """ Test assignment of actions attribute, None value. """ extensions = ExtensionsConfig([]) self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual([], extensions.actions) extensions.actions = None self.failUnlessEqual(None, extensions.actions) def testConstructor_005(self): """ Test assignment of actions attribute, [] value. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.actions = [] self.failUnlessEqual([], extensions.actions) def testConstructor_006(self): """ Test assignment of actions attribute, single valid entry. 
""" extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.actions = [ExtendedAction(), ] self.failUnlessEqual([ExtendedAction(), ], extensions.actions) def testConstructor_007(self): """ Test assignment of actions attribute, multiple valid entries. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.actions = [ExtendedAction("a", "b", "c", 1), ExtendedAction("d", "e", "f", 2), ] self.failUnlessEqual([ExtendedAction("a", "b", "c", 1), ExtendedAction("d", "e", "f", 2), ], extensions.actions) def testConstructor_009(self): """ Test assignment of actions attribute, single invalid entry (not an ExtendedAction). """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "actions", [ RemotePeer(), ]) self.failUnlessEqual(None, extensions.actions) def testConstructor_010(self): """ Test assignment of actions attribute, mixed valid and invalid entries. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "actions", [ ExtendedAction(), RemotePeer(), ]) self.failUnlessEqual(None, extensions.actions) def testConstructor_011(self): """ Test assignment of orderMode attribute, None value. """ extensions = ExtensionsConfig(orderMode="index") self.failUnlessEqual("index", extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.orderMode = None self.failUnlessEqual(None, extensions.orderMode) def testConstructor_012(self): """ Test assignment of orderMode attribute, valid values. 
""" extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) extensions.orderMode = "index" self.failUnlessEqual("index", extensions.orderMode) extensions.orderMode = "dependency" self.failUnlessEqual("dependency", extensions.orderMode) def testConstructor_013(self): """ Test assignment of orderMode attribute, invalid values. """ extensions = ExtensionsConfig() self.failUnlessEqual(None, extensions.orderMode) self.failUnlessEqual(None, extensions.actions) self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "bogus") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "indexes") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "indices") self.failUnlessAssignRaises(ValueError, extensions, "orderMode", "dependencies") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ extensions1 = ExtensionsConfig() extensions2 = ExtensionsConfig() self.failUnlessEqual(extensions1, extensions2) self.failUnless(extensions1 == extensions2) self.failUnless(not extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(extensions1 >= extensions2) self.failUnless(not extensions1 != extensions2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). 
""" extensions1 = ExtensionsConfig([], "index") extensions2 = ExtensionsConfig([], "index") self.failUnlessEqual(extensions1, extensions2) self.failUnless(extensions1 == extensions2) self.failUnless(not extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(extensions1 >= extensions2) self.failUnless(not extensions1 != extensions2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ extensions1 = ExtensionsConfig([ExtendedAction(), ], "index") extensions2 = ExtensionsConfig([ExtendedAction(), ], "index") self.failUnlessEqual(extensions1, extensions2) self.failUnless(extensions1 == extensions2) self.failUnless(not extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(extensions1 >= extensions2) self.failUnless(not extensions1 != extensions2) def testComparison_004(self): """ Test comparison of two differing objects, actions differs (one None, one empty). """ extensions1 = ExtensionsConfig(None) extensions2 = ExtensionsConfig([]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_005(self): """ Test comparison of two differing objects, actions differs (one None, one not empty). 
""" extensions1 = ExtensionsConfig(None) extensions2 = ExtensionsConfig([ExtendedAction(), ]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_006(self): """ Test comparison of two differing objects, actions differs (one empty, one not empty). """ extensions1 = ExtensionsConfig([]) extensions2 = ExtensionsConfig([ExtendedAction(), ]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_007(self): """ Test comparison of two differing objects, actions differs (both not empty). """ extensions1 = ExtensionsConfig([ExtendedAction(name="one"), ]) extensions2 = ExtensionsConfig([ExtendedAction(name="two"), ]) self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_008(self): """ Test comparison of differing objects, orderMode differs (one None). 
""" extensions1 = ExtensionsConfig([], None) extensions2 = ExtensionsConfig([], "index") self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) def testComparison_009(self): """ Test comparison of differing objects, orderMode differs. """ extensions1 = ExtensionsConfig([], "dependency") extensions2 = ExtensionsConfig([], "index") self.failIfEqual(extensions1, extensions2) self.failUnless(not extensions1 == extensions2) self.failUnless(extensions1 < extensions2) self.failUnless(extensions1 <= extensions2) self.failUnless(not extensions1 > extensions2) self.failUnless(not extensions1 >= extensions2) self.failUnless(extensions1 != extensions2) ########################## # TestOptionsConfig class ########################## class TestOptionsConfig(unittest.TestCase): """Tests for the OptionsConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = OptionsConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) self.failUnlessEqual(None, options.workingDir) self.failUnlessEqual(None, options.backupUser) self.failUnlessEqual(None, options.backupGroup) self.failUnlessEqual(None, options.rcpCommand) self.failUnlessEqual(None, options.rshCommand) self.failUnlessEqual(None, options.cbackCommand) self.failUnlessEqual(None, options.overrides) self.failUnlessEqual(None, options.hooks) self.failUnlessEqual(None, options.managedActions) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (lists empty). """ options = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", [], [], "ssh", "cback", []) self.failUnlessEqual("monday", options.startingDay) self.failUnlessEqual("/tmp", options.workingDir) self.failUnlessEqual("user", options.backupUser) self.failUnlessEqual("group", options.backupGroup) self.failUnlessEqual("scp -1 -B", options.rcpCommand) self.failUnlessEqual("ssh", options.rshCommand) self.failUnlessEqual("cback", options.cbackCommand) self.failUnlessEqual([], options.overrides) self.failUnlessEqual([], options.hooks) self.failUnlessEqual([], options.managedActions) def testConstructor_003(self): """ Test assignment of startingDay attribute, None value. """ options = OptionsConfig(startingDay="monday") self.failUnlessEqual("monday", options.startingDay) options.startingDay = None self.failUnlessEqual(None, options.startingDay) def testConstructor_004(self): """ Test assignment of startingDay attribute, valid value. 
""" options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) options.startingDay = "monday" self.failUnlessEqual("monday", options.startingDay) options.startingDay = "tuesday" self.failUnlessEqual("tuesday", options.startingDay) options.startingDay = "wednesday" self.failUnlessEqual("wednesday", options.startingDay) options.startingDay = "thursday" self.failUnlessEqual("thursday", options.startingDay) options.startingDay = "friday" self.failUnlessEqual("friday", options.startingDay) options.startingDay = "saturday" self.failUnlessEqual("saturday", options.startingDay) options.startingDay = "sunday" self.failUnlessEqual("sunday", options.startingDay) def testConstructor_005(self): """ Test assignment of startingDay attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) self.failUnlessAssignRaises(ValueError, options, "startingDay", "") self.failUnlessEqual(None, options.startingDay) def testConstructor_006(self): """ Test assignment of startingDay attribute, invalid value (not in list). """ options = OptionsConfig() self.failUnlessEqual(None, options.startingDay) self.failUnlessAssignRaises(ValueError, options, "startingDay", "dienstag") # ha, ha, pretend I'm German self.failUnlessEqual(None, options.startingDay) def testConstructor_007(self): """ Test assignment of workingDir attribute, None value. """ options = OptionsConfig(workingDir="/tmp") self.failUnlessEqual("/tmp", options.workingDir) options.workingDir = None self.failUnlessEqual(None, options.workingDir) def testConstructor_008(self): """ Test assignment of workingDir attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.workingDir) options.workingDir = "/tmp" self.failUnlessEqual("/tmp", options.workingDir) def testConstructor_009(self): """ Test assignment of workingDir attribute, invalid value (empty). 
""" options = OptionsConfig() self.failUnlessEqual(None, options.workingDir) self.failUnlessAssignRaises(ValueError, options, "workingDir", "") self.failUnlessEqual(None, options.workingDir) def testConstructor_010(self): """ Test assignment of workingDir attribute, invalid value (non-absolute). """ options = OptionsConfig() self.failUnlessEqual(None, options.workingDir) self.failUnlessAssignRaises(ValueError, options, "workingDir", "stuff") self.failUnlessEqual(None, options.workingDir) def testConstructor_011(self): """ Test assignment of backupUser attribute, None value. """ options = OptionsConfig(backupUser="user") self.failUnlessEqual("user", options.backupUser) options.backupUser = None self.failUnlessEqual(None, options.backupUser) def testConstructor_012(self): """ Test assignment of backupUser attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.backupUser) options.backupUser = "user" self.failUnlessEqual("user", options.backupUser) def testConstructor_013(self): """ Test assignment of backupUser attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.backupUser) self.failUnlessAssignRaises(ValueError, options, "backupUser", "") self.failUnlessEqual(None, options.backupUser) def testConstructor_014(self): """ Test assignment of backupGroup attribute, None value. """ options = OptionsConfig(backupGroup="group") self.failUnlessEqual("group", options.backupGroup) options.backupGroup = None self.failUnlessEqual(None, options.backupGroup) def testConstructor_015(self): """ Test assignment of backupGroup attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.backupGroup) options.backupGroup = "group" self.failUnlessEqual("group", options.backupGroup) def testConstructor_016(self): """ Test assignment of backupGroup attribute, invalid value (empty). 
""" options = OptionsConfig() self.failUnlessEqual(None, options.backupGroup) self.failUnlessAssignRaises(ValueError, options, "backupGroup", "") self.failUnlessEqual(None, options.backupGroup) def testConstructor_017(self): """ Test assignment of rcpCommand attribute, None value. """ options = OptionsConfig(rcpCommand="command") self.failUnlessEqual("command", options.rcpCommand) options.rcpCommand = None self.failUnlessEqual(None, options.rcpCommand) def testConstructor_018(self): """ Test assignment of rcpCommand attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.rcpCommand) options.rcpCommand = "command" self.failUnlessEqual("command", options.rcpCommand) def testConstructor_019(self): """ Test assignment of rcpCommand attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.rcpCommand) self.failUnlessAssignRaises(ValueError, options, "rcpCommand", "") self.failUnlessEqual(None, options.rcpCommand) def testConstructor_020(self): """ Test constructor with all values filled in, with valid values (lists not empty). """ overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), ] hooks = [ PreActionHook("collect", "ls -l"), ] managedActions = [ "collect", "purge", ] options = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failUnlessEqual("monday", options.startingDay) self.failUnlessEqual("/tmp", options.workingDir) self.failUnlessEqual("user", options.backupUser) self.failUnlessEqual("group", options.backupGroup) self.failUnlessEqual("scp -1 -B", options.rcpCommand) self.failUnlessEqual("ssh", options.rshCommand) self.failUnlessEqual("cback", options.cbackCommand) self.failUnlessEqual(overrides, options.overrides) self.failUnlessEqual(hooks, options.hooks) self.failUnlessEqual(managedActions, options.managedActions) def testConstructor_021(self): """ Test assignment of overrides attribute, None value. 
""" collect = OptionsConfig(overrides=[]) self.failUnlessEqual([], collect.overrides) collect.overrides = None self.failUnlessEqual(None, collect.overrides) def testConstructor_022(self): """ Test assignment of overrides attribute, [] value. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) collect.overrides = [] self.failUnlessEqual([], collect.overrides) def testConstructor_023(self): """ Test assignment of overrides attribute, single valid entry. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) collect.overrides = [CommandOverride("one", "/one"), ] self.failUnlessEqual([CommandOverride("one", "/one"), ], collect.overrides) def testConstructor_024(self): """ Test assignment of overrides attribute, multiple valid entries. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) collect.overrides = [CommandOverride("one", "/one"), CommandOverride("two", "/two"), ] self.failUnlessEqual([CommandOverride("one", "/one"), CommandOverride("two", "/two"), ], collect.overrides) def testConstructor_025(self): """ Test assignment of overrides attribute, single invalid entry (None). """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ None, ]) self.failUnlessEqual(None, collect.overrides) def testConstructor_026(self): """ Test assignment of overrides attribute, single invalid entry (not a CommandOverride). """ collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ "hello", ]) self.failUnlessEqual(None, collect.overrides) def testConstructor_027(self): """ Test assignment of overrides attribute, mixed valid and invalid entries. 
""" collect = OptionsConfig() self.failUnlessEqual(None, collect.overrides) self.failUnlessAssignRaises(ValueError, collect, "overrides", [ "hello", CommandOverride("one", "/one"), ]) self.failUnlessEqual(None, collect.overrides) def testConstructor_028(self): """ Test assignment of hooks attribute, None value. """ collect = OptionsConfig(hooks=[]) self.failUnlessEqual([], collect.hooks) collect.hooks = None self.failUnlessEqual(None, collect.hooks) def testConstructor_029(self): """ Test assignment of hooks attribute, [] value. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) collect.hooks = [] self.failUnlessEqual([], collect.hooks) def testConstructor_030(self): """ Test assignment of hooks attribute, single valid entry. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) collect.hooks = [PreActionHook("stage", "df -k"), ] self.failUnlessEqual([PreActionHook("stage", "df -k"), ], collect.hooks) def testConstructor_031(self): """ Test assignment of hooks attribute, multiple valid entries. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) collect.hooks = [ PreActionHook("stage", "df -k"), PostActionHook("collect", "ls -l"), ] self.failUnlessEqual([PreActionHook("stage", "df -k"), PostActionHook("collect", "ls -l"), ], collect.hooks) def testConstructor_032(self): """ Test assignment of hooks attribute, single invalid entry (None). """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ None, ]) self.failUnlessEqual(None, collect.hooks) def testConstructor_033(self): """ Test assignment of hooks attribute, single invalid entry (not a ActionHook). 
""" collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ "hello", ]) self.failUnlessEqual(None, collect.hooks) def testConstructor_034(self): """ Test assignment of hooks attribute, mixed valid and invalid entries. """ collect = OptionsConfig() self.failUnlessEqual(None, collect.hooks) self.failUnlessAssignRaises(ValueError, collect, "hooks", [ "hello", PreActionHook("stage", "df -k"), ]) self.failUnlessEqual(None, collect.hooks) def testConstructor_035(self): """ Test assignment of rshCommand attribute, None value. """ options = OptionsConfig(rshCommand="command") self.failUnlessEqual("command", options.rshCommand) options.rshCommand = None self.failUnlessEqual(None, options.rshCommand) def testConstructor_036(self): """ Test assignment of rshCommand attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.rshCommand) options.rshCommand = "command" self.failUnlessEqual("command", options.rshCommand) def testConstructor_037(self): """ Test assignment of rshCommand attribute, invalid value (empty). """ options = OptionsConfig() self.failUnlessEqual(None, options.rshCommand) self.failUnlessAssignRaises(ValueError, options, "rshCommand", "") self.failUnlessEqual(None, options.rshCommand) def testConstructor_038(self): """ Test assignment of cbackCommand attribute, None value. """ options = OptionsConfig(cbackCommand="command") self.failUnlessEqual("command", options.cbackCommand) options.cbackCommand = None self.failUnlessEqual(None, options.cbackCommand) def testConstructor_039(self): """ Test assignment of cbackCommand attribute, valid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.cbackCommand) options.cbackCommand = "command" self.failUnlessEqual("command", options.cbackCommand) def testConstructor_040(self): """ Test assignment of cbackCommand attribute, invalid value (empty). 
""" options = OptionsConfig() self.failUnlessEqual(None, options.cbackCommand) self.failUnlessAssignRaises(ValueError, options, "cbackCommand", "") self.failUnlessEqual(None, options.cbackCommand) def testConstructor_041(self): """ Test assignment of managedActions attribute, None value. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) options.managedActions = None self.failUnlessEqual(None, options.managedActions) def testConstructor_042(self): """ Test assignment of managedActions attribute, empty list. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) options.managedActions = [] self.failUnlessEqual([], options.managedActions) def testConstructor_043(self): """ Test assignment of managedActions attribute, non-empty list, valid values. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) options.managedActions = ['a', 'b', ] self.failUnlessEqual(['a', 'b'], options.managedActions) def testConstructor_044(self): """ Test assignment of managedActions attribute, non-empty list, invalid value. """ options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["KEN", ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["hello, world" ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["dash-word", ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["", ]) self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", [None, ]) self.failUnlessEqual(None, options.managedActions) def testConstructor_045(self): """ Test assignment of managedActions attribute, non-empty list, mixed values. 
""" options = OptionsConfig() self.failUnlessEqual(None, options.managedActions) self.failUnlessAssignRaises(ValueError, options, "managedActions", ["ken", "dash-word", ]) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ options1 = OptionsConfig() options2 = OptionsConfig() self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_003(self): """ Test comparison of two differing objects, startingDay differs (one None). 
""" options1 = OptionsConfig() options2 = OptionsConfig(startingDay="monday") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_004(self): """ Test comparison of two differing objects, startingDay differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("tuesday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_005(self): """ Test comparison of two differing objects, workingDir differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(workingDir="/tmp") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_006(self): """ Test comparison of two differing objects, workingDir differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp/whatever", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_007(self): """ Test comparison of two differing objects, backupUser differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(backupUser="user") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_008(self): """ Test comparison of two differing objects, backupUser differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user2", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user1", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_009(self): """ Test comparison of two differing objects, backupGroup differs (one None). 
""" options1 = OptionsConfig() options2 = OptionsConfig(backupGroup="group") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_010(self): """ Test comparison of two differing objects, backupGroup differs. """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group1", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group2", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_011(self): """ Test comparison of two differing objects, rcpCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rcpCommand="command") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_012(self): """ Test comparison of two differing objects, rcpCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -2 -B", overrides, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_013(self): """ Test comparison of two differing objects, overrides differs (one None, one empty). """ overrides1 = None overrides2 = [] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_014(self): """ Test comparison of two differing objects, overrides differs (one None, one not empty). 
""" overrides1 = None overrides2 = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2, "ssh") self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_015(self): """ Test comparison of two differing objects, overrides differs (one empty, one not empty). """ overrides1 = [ CommandOverride("one", "/one"), ] overrides2 = [] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_016(self): """ Test comparison of two differing objects, overrides differs (both not empty). 
""" overrides1 = [ CommandOverride("one", "/one"), ] overrides2 = [ CommandOverride(), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides1, hooks, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides2, hooks, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_017(self): """ Test comparison of two differing objects, hooks differs (one None, one empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks1 = None hooks2 = [] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_018(self): """ Test comparison of two differing objects, hooks differs (one None, one not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PostActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 != options2) def testComparison_019(self): """ Test comparison of two differing objects, hooks differs (one empty, one not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PreActionHook("stage", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(options1 != options2) def testComparison_020(self): """ Test comparison of two differing objects, hooks differs (both not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks1 = [ PreActionHook("collect", "ls -l ") ] hooks2 = [ PostActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks1, "ssh", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks2, "ssh", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_021(self): """ Test comparison of two differing objects, rshCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rshCommand="command") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_022(self): """ Test comparison of two differing objects, rshCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh2", "cback", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh1", "cback", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_023(self): """ Test comparison of two differing objects, cbackCommand differs (one None). """ options1 = OptionsConfig() options2 = OptionsConfig(rshCommand="command") self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_024(self): """ Test comparison of two differing objects, cbackCommand differs. 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions = [ "collect", "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback1", managedActions) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback2", managedActions) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_025(self): """ Test comparison of two differing objects, managedActions differs (one None, one empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = None managedActions2 = [] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_026(self): """ Test comparison of two differing objects, managedActions differs (one None, one not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = None managedActions2 = [ "collect", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(options1 != options2) def testComparison_027(self): """ Test comparison of two differing objects, managedActions differs (one empty, one not empty). """ overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = [] managedActions2 = [ "collect", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(options1 != options2) def testComparison_028(self): """ Test comparison of two differing objects, managedActions differs (both not empty). 
""" overrides = [ CommandOverride("one", "/one"), ] hooks = [ PreActionHook("collect", "ls -l ") ] managedActions1 = [ "collect", ] managedActions2 = [ "purge", ] options1 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions1) options2 = OptionsConfig("monday", "/tmp", "user", "group", "scp -1 -B", overrides, hooks, "ssh", "cback", managedActions2) self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) #################################### # Test add and replace of overrides #################################### def testOverrides_001(self): """ Test addOverride() with no existing overrides. """ options = OptionsConfig() options.addOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_002(self): """ Test addOverride() with no existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("one", "/one"), ] options.addOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("one", "/one"), CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_003(self): """ Test addOverride(), with existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/one"), ] options.addOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/one"), ], options.overrides) def testOverrides_004(self): """ Test replaceOverride() with no existing overrides. 
""" options = OptionsConfig() options.replaceOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_005(self): """ Test replaceOverride() with no existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("one", "/one"), ] options.replaceOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("one", "/one"), CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) def testOverrides_006(self): """ Test replaceOverride(), with existing override that matches. """ options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/one"), ] options.replaceOverride("cdrecord", "/usr/bin/wodim") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), ], options.overrides) ######################## # TestPeersConfig class ######################## class TestPeersConfig(unittest.TestCase): """Tests for the PeersConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PeersConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessEqual(None, peers.remotePeers) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty lists). 
""" peers = PeersConfig([], []) self.failUnlessEqual([], peers.localPeers) self.failUnlessEqual([], peers.remotePeers) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty lists). """ peers = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual([LocalPeer(), ], peers.localPeers) self.failUnlessEqual([RemotePeer(), ], peers.remotePeers) def testConstructor_004(self): """ Test assignment of localPeers attribute, None value. """ peers = PeersConfig(localPeers=[]) self.failUnlessEqual([], peers.localPeers) peers.localPeers = None self.failUnlessEqual(None, peers.localPeers) def testConstructor_005(self): """ Test assignment of localPeers attribute, empty list. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) peers.localPeers = [] self.failUnlessEqual([], peers.localPeers) def testConstructor_006(self): """ Test assignment of localPeers attribute, single valid entry. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) peers.localPeers = [LocalPeer(), ] self.failUnlessEqual([LocalPeer(), ], peers.localPeers) def testConstructor_007(self): """ Test assignment of localPeers attribute, multiple valid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) peers.localPeers = [LocalPeer(name="one"), LocalPeer(name="two"), ] self.failUnlessEqual([LocalPeer(name="one"), LocalPeer(name="two"), ], peers.localPeers) def testConstructor_008(self): """ Test assignment of localPeers attribute, single invalid entry (None). """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [None, ]) self.failUnlessEqual(None, peers.localPeers) def testConstructor_009(self): """ Test assignment of localPeers attribute, single invalid entry (not a LocalPeer). 
""" peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [RemotePeer(), ]) self.failUnlessEqual(None, peers.localPeers) def testConstructor_010(self): """ Test assignment of localPeers attribute, mixed valid and invalid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.localPeers) self.failUnlessAssignRaises(ValueError, peers, "localPeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, peers.localPeers) def testConstructor_011(self): """ Test assignment of remotePeers attribute, None value. """ peers = PeersConfig(remotePeers=[]) self.failUnlessEqual([], peers.remotePeers) peers.remotePeers = None self.failUnlessEqual(None, peers.remotePeers) def testConstructor_012(self): """ Test assignment of remotePeers attribute, empty list. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) peers.remotePeers = [] self.failUnlessEqual([], peers.remotePeers) def testConstructor_013(self): """ Test assignment of remotePeers attribute, single valid entry. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) peers.remotePeers = [RemotePeer(name="one"), ] self.failUnlessEqual([RemotePeer(name="one"), ], peers.remotePeers) def testConstructor_014(self): """ Test assignment of remotePeers attribute, multiple valid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) peers.remotePeers = [RemotePeer(name="one"), RemotePeer(name="two"), ] self.failUnlessEqual([RemotePeer(name="one"), RemotePeer(name="two"), ], peers.remotePeers) def testConstructor_015(self): """ Test assignment of remotePeers attribute, single invalid entry (None). 
""" peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [None, ]) self.failUnlessEqual(None, peers.remotePeers) def testConstructor_016(self): """ Test assignment of remotePeers attribute, single invalid entry (not a RemotePeer). """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [LocalPeer(), ]) self.failUnlessEqual(None, peers.remotePeers) def testConstructor_017(self): """ Test assignment of remotePeers attribute, mixed valid and invalid entries. """ peers = PeersConfig() self.failUnlessEqual(None, peers.remotePeers) self.failUnlessAssignRaises(ValueError, peers, "remotePeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, peers.remotePeers) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ peers1 = PeersConfig() peers2 = PeersConfig() self.failUnlessEqual(peers1, peers2) self.failUnless(peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(not peers1 != peers2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ peers1 = PeersConfig([], []) peers2 = PeersConfig([], []) self.failUnlessEqual(peers1, peers2) self.failUnless(peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(not peers1 != peers2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" peers1 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual(peers1, peers2) self.failUnless(peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(not peers1 != peers2) def testComparison_004(self): """ Test comparison of two differing objects, localPeers differs (one None, one empty). """ peers1 = PeersConfig(None, [RemotePeer(), ]) peers2 = PeersConfig([], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_005(self): """ Test comparison of two differing objects, localPeers differs (one None, one not empty). """ peers1 = PeersConfig(None, [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_006(self): """ Test comparison of two differing objects, localPeers differs (one empty, one not empty). """ peers1 = PeersConfig([], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_007(self): """ Test comparison of two differing objects, localPeers differs (both not empty). 
""" peers1 = PeersConfig([LocalPeer(name="one"), ], [RemotePeer(), ]) peers2 = PeersConfig([LocalPeer(name="two"), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_008(self): """ Test comparison of two differing objects, remotePeers differs (one None, one empty). """ peers1 = PeersConfig([LocalPeer(), ], None) peers2 = PeersConfig([LocalPeer(), ], []) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_009(self): """ Test comparison of two differing objects, remotePeers differs (one None, one not empty). """ peers1 = PeersConfig([LocalPeer(), ], None) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_010(self): """ Test comparison of two differing objects, remotePeers differs (one empty, one not empty). """ peers1 = PeersConfig([LocalPeer(), ], []) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(peers1 < peers2) self.failUnless(peers1 <= peers2) self.failUnless(not peers1 > peers2) self.failUnless(not peers1 >= peers2) self.failUnless(peers1 != peers2) def testComparison_011(self): """ Test comparison of two differing objects, remotePeers differs (both not empty). 
""" peers1 = PeersConfig([LocalPeer(), ], [RemotePeer(name="two"), ]) peers2 = PeersConfig([LocalPeer(), ], [RemotePeer(name="one"), ]) self.failIfEqual(peers1, peers2) self.failUnless(not peers1 == peers2) self.failUnless(not peers1 < peers2) self.failUnless(not peers1 <= peers2) self.failUnless(peers1 > peers2) self.failUnless(peers1 >= peers2) self.failUnless(peers1 != peers2) ########################## # TestCollectConfig class ########################## class TestCollectConfig(unittest.TestCase): """Tests for the CollectConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = CollectConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) self.failUnlessEqual(None, collect.collectMode) self.failUnlessEqual(None, collect.archiveMode) self.failUnlessEqual(None, collect.ignoreFile) self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (lists empty). 
""" collect = CollectConfig("/target", "incr", "tar", "ignore", [], [], [], []) self.failUnlessEqual("/target", collect.targetDir) self.failUnlessEqual("incr", collect.collectMode) self.failUnlessEqual("tar", collect.archiveMode) self.failUnlessEqual("ignore", collect.ignoreFile) self.failUnlessEqual([], collect.absoluteExcludePaths) self.failUnlessEqual([], collect.excludePatterns) self.failUnlessEqual([], collect.collectDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (lists not empty). """ collect = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failUnlessEqual("/target", collect.targetDir) self.failUnlessEqual("incr", collect.collectMode) self.failUnlessEqual("tar", collect.archiveMode) self.failUnlessEqual("ignore", collect.ignoreFile) self.failUnlessEqual(["/path", ], collect.absoluteExcludePaths) self.failUnlessEqual(["pattern", ], collect.excludePatterns) self.failUnlessEqual([CollectFile(), ], collect.collectFiles) self.failUnlessEqual([CollectDir(), ], collect.collectDirs) def testConstructor_004(self): """ Test assignment of targetDir attribute, None value. """ collect = CollectConfig(targetDir="/whatever") self.failUnlessEqual("/whatever", collect.targetDir) collect.targetDir = None self.failUnlessEqual(None, collect.targetDir) def testConstructor_005(self): """ Test assignment of targetDir attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) collect.targetDir = "/whatever" self.failUnlessEqual("/whatever", collect.targetDir) def testConstructor_006(self): """ Test assignment of targetDir attribute, invalid value (empty). 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) self.failUnlessAssignRaises(ValueError, collect, "targetDir", "") self.failUnlessEqual(None, collect.targetDir) def testConstructor_007(self): """ Test assignment of targetDir attribute, invalid value (non-absolute). """ collect = CollectConfig() self.failUnlessEqual(None, collect.targetDir) self.failUnlessAssignRaises(ValueError, collect, "targetDir", "bogus") self.failUnlessEqual(None, collect.targetDir) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ collect = CollectConfig(collectMode="incr") self.failUnlessEqual("incr", collect.collectMode) collect.collectMode = None self.failUnlessEqual(None, collect.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectMode) collect.collectMode = "daily" self.failUnlessEqual("daily", collect.collectMode) collect.collectMode = "weekly" self.failUnlessEqual("weekly", collect.collectMode) collect.collectMode = "incr" self.failUnlessEqual("incr", collect.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectMode) self.failUnlessAssignRaises(ValueError, collect, "collectMode", "") self.failUnlessEqual(None, collect.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectMode) self.failUnlessAssignRaises(ValueError, collect, "collectMode", "periodic") self.failUnlessEqual(None, collect.collectMode) def testConstructor_012(self): """ Test assignment of archiveMode attribute, None value. 
""" collect = CollectConfig(archiveMode="tar") self.failUnlessEqual("tar", collect.archiveMode) collect.archiveMode = None self.failUnlessEqual(None, collect.archiveMode) def testConstructor_013(self): """ Test assignment of archiveMode attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.archiveMode) collect.archiveMode = "tar" self.failUnlessEqual("tar", collect.archiveMode) collect.archiveMode = "targz" self.failUnlessEqual("targz", collect.archiveMode) collect.archiveMode = "tarbz2" self.failUnlessEqual("tarbz2", collect.archiveMode) def testConstructor_014(self): """ Test assignment of archiveMode attribute, invalid value (empty). """ collect = CollectConfig() self.failUnlessEqual(None, collect.archiveMode) self.failUnlessAssignRaises(ValueError, collect, "archiveMode", "") self.failUnlessEqual(None, collect.archiveMode) def testConstructor_015(self): """ Test assignment of archiveMode attribute, invalid value (not in list). """ collect = CollectConfig() self.failUnlessEqual(None, collect.archiveMode) self.failUnlessAssignRaises(ValueError, collect, "archiveMode", "tarz") self.failUnlessEqual(None, collect.archiveMode) def testConstructor_016(self): """ Test assignment of ignoreFile attribute, None value. """ collect = CollectConfig(ignoreFile="ignore") self.failUnlessEqual("ignore", collect.ignoreFile) collect.ignoreFile = None self.failUnlessEqual(None, collect.ignoreFile) def testConstructor_017(self): """ Test assignment of ignoreFile attribute, valid value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.ignoreFile) collect.ignoreFile = "ignore" self.failUnlessEqual("ignore", collect.ignoreFile) def testConstructor_018(self): """ Test assignment of ignoreFile attribute, invalid value (empty). 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.ignoreFile) self.failUnlessAssignRaises(ValueError, collect, "ignoreFile", "") self.failUnlessEqual(None, collect.ignoreFile) def testConstructor_019(self): """ Test assignment of absoluteExcludePaths attribute, None value. """ collect = CollectConfig(absoluteExcludePaths=[]) self.failUnlessEqual([], collect.absoluteExcludePaths) collect.absoluteExcludePaths = None self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_020(self): """ Test assignment of absoluteExcludePaths attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = [] self.failUnlessEqual([], collect.absoluteExcludePaths) def testConstructor_021(self): """ Test assignment of absoluteExcludePaths attribute, single valid entry. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = ["/whatever", ] self.failUnlessEqual(["/whatever", ], collect.absoluteExcludePaths) def testConstructor_022(self): """ Test assignment of absoluteExcludePaths attribute, multiple valid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) collect.absoluteExcludePaths = ["/one", "/two", "/three", ] self.failUnlessEqual(["/one", "/two", "/three", ], collect.absoluteExcludePaths) def testConstructor_023(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (empty). """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "", ]) self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_024(self): """ Test assignment of absoluteExcludePaths attribute, single invalid entry (not absolute). 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "one", ]) self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_025(self): """ Test assignment of absoluteExcludePaths attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.absoluteExcludePaths) self.failUnlessAssignRaises(ValueError, collect, "absoluteExcludePaths", [ "one", "/two", ]) self.failUnlessEqual(None, collect.absoluteExcludePaths) def testConstructor_026(self): """ Test assignment of excludePatterns attribute, None value. """ collect = CollectConfig(excludePatterns=[]) self.failUnlessEqual([], collect.excludePatterns) collect.excludePatterns = None self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_027(self): """ Test assignment of excludePatterns attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) collect.excludePatterns = [] self.failUnlessEqual([], collect.excludePatterns) def testConstructor_028(self): """ Test assignment of excludePatterns attribute, single valid entry. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) collect.excludePatterns = ["pattern", ] self.failUnlessEqual(["pattern", ], collect.excludePatterns) def testConstructor_029(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) collect.excludePatterns = ["pattern1", "pattern2", ] self.failUnlessEqual(["pattern1", "pattern2", ], collect.excludePatterns) def testConstructor_029a(self): """ Test assignment of excludePatterns attribute, single invalid entry. 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_029b(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", "*", ]) self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_029c(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.excludePatterns) self.failUnlessAssignRaises(ValueError, collect, "excludePatterns", ["*.jpg", "valid", ]) self.failUnlessEqual(None, collect.excludePatterns) def testConstructor_030(self): """ Test assignment of collectDirs attribute, None value. """ collect = CollectConfig(collectDirs=[]) self.failUnlessEqual([], collect.collectDirs) collect.collectDirs = None self.failUnlessEqual(None, collect.collectDirs) def testConstructor_031(self): """ Test assignment of collectDirs attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) collect.collectDirs = [] self.failUnlessEqual([], collect.collectDirs) def testConstructor_032(self): """ Test assignment of collectDirs attribute, single valid entry. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) collect.collectDirs = [CollectDir(absolutePath="/one"), ] self.failUnlessEqual([CollectDir(absolutePath="/one"), ], collect.collectDirs) def testConstructor_033(self): """ Test assignment of collectDirs attribute, multiple valid entries. 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) collect.collectDirs = [CollectDir(absolutePath="/one"), CollectDir(absolutePath="/two"), ] self.failUnlessEqual([CollectDir(absolutePath="/one"), CollectDir(absolutePath="/two"), ], collect.collectDirs) def testConstructor_034(self): """ Test assignment of collectDirs attribute, single invalid entry (None). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ None, ]) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_035(self): """ Test assignment of collectDirs attribute, single invalid entry (not a CollectDir). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ "hello", ]) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_036(self): """ Test assignment of collectDirs attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectDirs) self.failUnlessAssignRaises(ValueError, collect, "collectDirs", [ "hello", CollectDir(), ]) self.failUnlessEqual(None, collect.collectDirs) def testConstructor_037(self): """ Test assignment of collectFiles attribute, None value. """ collect = CollectConfig(collectFiles=[]) self.failUnlessEqual([], collect.collectFiles) collect.collectFiles = None self.failUnlessEqual(None, collect.collectFiles) def testConstructor_038(self): """ Test assignment of collectFiles attribute, [] value. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) collect.collectFiles = [] self.failUnlessEqual([], collect.collectFiles) def testConstructor_039(self): """ Test assignment of collectFiles attribute, single valid entry. 
""" collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) collect.collectFiles = [CollectFile(absolutePath="/one"), ] self.failUnlessEqual([CollectFile(absolutePath="/one"), ], collect.collectFiles) def testConstructor_040(self): """ Test assignment of collectFiles attribute, multiple valid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) collect.collectFiles = [CollectFile(absolutePath="/one"), CollectFile(absolutePath="/two"), ] self.failUnlessEqual([CollectFile(absolutePath="/one"), CollectFile(absolutePath="/two"), ], collect.collectFiles) def testConstructor_041(self): """ Test assignment of collectFiles attribute, single invalid entry (None). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ None, ]) self.failUnlessEqual(None, collect.collectFiles) def testConstructor_042(self): """ Test assignment of collectFiles attribute, single invalid entry (not a CollectFile). """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ "hello", ]) self.failUnlessEqual(None, collect.collectFiles) def testConstructor_043(self): """ Test assignment of collectFiles attribute, mixed valid and invalid entries. """ collect = CollectConfig() self.failUnlessEqual(None, collect.collectFiles) self.failUnlessAssignRaises(ValueError, collect, "collectFiles", [ "hello", CollectFile(), ]) self.failUnlessEqual(None, collect.collectFiles) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" collect1 = CollectConfig() collect2 = CollectConfig() self.failUnlessEqual(collect1, collect2) self.failUnless(collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(not collect1 != collect2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failUnlessEqual(collect1, collect2) self.failUnless(collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(not collect1 != collect2) def testComparison_003(self): """ Test comparison of two differing objects, targetDir differs (one None). """ collect1 = CollectConfig(None, "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target2", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_004(self): """ Test comparison of two differing objects, targetDir differs. 
""" collect1 = CollectConfig("/target1", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target2", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", None, "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ collect1 = CollectConfig("/target", "daily", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_007(self): """ Test comparison of two differing objects, archiveMode differs (one None). 
""" collect1 = CollectConfig("/target", "incr", None, "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_008(self): """ Test comparison of two differing objects, archiveMode differs. """ collect1 = CollectConfig("/target", "incr", "targz", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tarbz2", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_009(self): """ Test comparison of two differing objects, ignoreFile differs (one None). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", None, ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_010(self): """ Test comparison of two differing objects, ignoreFile differs. 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore1", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore2", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_011(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", None, ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", [], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_012(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", None, ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_013(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (one empty, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", [], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_014(self): """ Test comparison of two differing objects, absoluteExcludePaths differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", "/path2", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_015(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], None, [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], [], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_016(self): """ Test comparison of two differing objects, excludePatterns differs (one None, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], None, [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_017(self): """ Test comparison of two differing objects, excludePatterns differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], [], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_018(self): """ Test comparison of two differing objects, excludePatterns differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", "bogus", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_019(self): """ Test comparison of two differing objects, collectDirs differs (one None, one empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], None) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], []) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_020(self): """ Test comparison of two differing objects, collectDirs differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], None) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_021(self): """ Test comparison of two differing objects, collectDirs differs (one empty, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], []) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_022(self): """ Test comparison of two differing objects, collectDirs differs (both not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_023(self): """ Test comparison of two differing objects, collectFiles differs (one None, one empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], None, [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_024(self): """ Test comparison of two differing objects, collectFiles differs (one None, one not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], None, [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(collect1 < collect2) self.failUnless(collect1 <= collect2) self.failUnless(not collect1 > collect2) self.failUnless(not collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_025(self): """ Test comparison of two differing objects, collectFiles differs (one empty, one not empty). 
""" collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) def testComparison_026(self): """ Test comparison of two differing objects, collectFiles differs (both not empty). """ collect1 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), CollectFile(), ], [CollectDir() ]) collect2 = CollectConfig("/target", "incr", "tar", "ignore", ["/path", ], ["pattern", ], [CollectFile(), ], [CollectDir(), ]) self.failIfEqual(collect1, collect2) self.failUnless(not collect1 == collect2) self.failUnless(not collect1 < collect2) self.failUnless(not collect1 <= collect2) self.failUnless(collect1 > collect2) self.failUnless(collect1 >= collect2) self.failUnless(collect1 != collect2) ######################## # TestStageConfig class ######################## class TestStageConfig(unittest.TestCase): """Tests for the StageConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = StageConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) self.failUnlessEqual(None, stage.localPeers) self.failUnlessEqual(None, stage.remotePeers) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty lists). """ stage = StageConfig("/whatever", [], []) self.failUnlessEqual("/whatever", stage.targetDir) self.failUnlessEqual([], stage.localPeers) self.failUnlessEqual([], stage.remotePeers) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty lists). """ stage = StageConfig("/whatever", [LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual("/whatever", stage.targetDir) self.failUnlessEqual([LocalPeer(), ], stage.localPeers) self.failUnlessEqual([RemotePeer(), ], stage.remotePeers) def testConstructor_004(self): """ Test assignment of targetDir attribute, None value. """ stage = StageConfig(targetDir="/whatever") self.failUnlessEqual("/whatever", stage.targetDir) stage.targetDir = None self.failUnlessEqual(None, stage.targetDir) def testConstructor_005(self): """ Test assignment of targetDir attribute, valid value. """ stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) stage.targetDir = "/whatever" self.failUnlessEqual("/whatever", stage.targetDir) def testConstructor_006(self): """ Test assignment of targetDir attribute, invalid value (empty). """ stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) self.failUnlessAssignRaises(ValueError, stage, "targetDir", "") self.failUnlessEqual(None, stage.targetDir) def testConstructor_007(self): """ Test assignment of targetDir attribute, invalid value (non-absolute). 
""" stage = StageConfig() self.failUnlessEqual(None, stage.targetDir) self.failUnlessAssignRaises(ValueError, stage, "targetDir", "stuff") self.failUnlessEqual(None, stage.targetDir) def testConstructor_008(self): """ Test assignment of localPeers attribute, None value. """ stage = StageConfig(localPeers=[]) self.failUnlessEqual([], stage.localPeers) stage.localPeers = None self.failUnlessEqual(None, stage.localPeers) def testConstructor_009(self): """ Test assignment of localPeers attribute, empty list. """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) stage.localPeers = [] self.failUnlessEqual([], stage.localPeers) def testConstructor_010(self): """ Test assignment of localPeers attribute, single valid entry. """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) stage.localPeers = [LocalPeer(), ] self.failUnlessEqual([LocalPeer(), ], stage.localPeers) def testConstructor_011(self): """ Test assignment of localPeers attribute, multiple valid entries. """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) stage.localPeers = [LocalPeer(name="one"), LocalPeer(name="two"), ] self.failUnlessEqual([LocalPeer(name="one"), LocalPeer(name="two"), ], stage.localPeers) def testConstructor_012(self): """ Test assignment of localPeers attribute, single invalid entry (None). """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [None, ]) self.failUnlessEqual(None, stage.localPeers) def testConstructor_013(self): """ Test assignment of localPeers attribute, single invalid entry (not a LocalPeer). """ stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [RemotePeer(), ]) self.failUnlessEqual(None, stage.localPeers) def testConstructor_014(self): """ Test assignment of localPeers attribute, mixed valid and invalid entries. 
""" stage = StageConfig() self.failUnlessEqual(None, stage.localPeers) self.failUnlessAssignRaises(ValueError, stage, "localPeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, stage.localPeers) def testConstructor_015(self): """ Test assignment of remotePeers attribute, None value. """ stage = StageConfig(remotePeers=[]) self.failUnlessEqual([], stage.remotePeers) stage.remotePeers = None self.failUnlessEqual(None, stage.remotePeers) def testConstructor_016(self): """ Test assignment of remotePeers attribute, empty list. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) stage.remotePeers = [] self.failUnlessEqual([], stage.remotePeers) def testConstructor_017(self): """ Test assignment of remotePeers attribute, single valid entry. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) stage.remotePeers = [RemotePeer(name="one"), ] self.failUnlessEqual([RemotePeer(name="one"), ], stage.remotePeers) def testConstructor_018(self): """ Test assignment of remotePeers attribute, multiple valid entries. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) stage.remotePeers = [RemotePeer(name="one"), RemotePeer(name="two"), ] self.failUnlessEqual([RemotePeer(name="one"), RemotePeer(name="two"), ], stage.remotePeers) def testConstructor_019(self): """ Test assignment of remotePeers attribute, single invalid entry (None). """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [None, ]) self.failUnlessEqual(None, stage.remotePeers) def testConstructor_020(self): """ Test assignment of remotePeers attribute, single invalid entry (not a RemotePeer). 
""" stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [LocalPeer(), ]) self.failUnlessEqual(None, stage.remotePeers) def testConstructor_021(self): """ Test assignment of remotePeers attribute, mixed valid and invalid entries. """ stage = StageConfig() self.failUnlessEqual(None, stage.remotePeers) self.failUnlessAssignRaises(ValueError, stage, "remotePeers", [LocalPeer(), RemotePeer(), ]) self.failUnlessEqual(None, stage.remotePeers) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ stage1 = StageConfig() stage2 = StageConfig() self.failUnlessEqual(stage1, stage2) self.failUnless(stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(not stage1 != stage2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ stage1 = StageConfig("/target", [], []) stage2 = StageConfig("/target", [], []) self.failUnlessEqual(stage1, stage2) self.failUnless(stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(not stage1 != stage2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). 
""" stage1 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failUnlessEqual(stage1, stage2) self.failUnless(stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(not stage1 != stage2) def testComparison_004(self): """ Test comparison of two differing objects, targetDir differs (one None). """ stage1 = StageConfig() stage2 = StageConfig(targetDir="/whatever") self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_005(self): """ Test comparison of two differing objects, targetDir differs. """ stage1 = StageConfig("/target1", [LocalPeer(), ], [RemotePeer(), ]) stage2 = StageConfig("/target2", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_006(self): """ Test comparison of two differing objects, localPeers differs (one None, one empty). """ stage1 = StageConfig("/target", None, [RemotePeer(), ]) stage2 = StageConfig("/target", [], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_007(self): """ Test comparison of two differing objects, localPeers differs (one None, one not empty). 
""" stage1 = StageConfig("/target", None, [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_008(self): """ Test comparison of two differing objects, localPeers differs (one empty, one not empty). """ stage1 = StageConfig("/target", [], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_009(self): """ Test comparison of two differing objects, localPeers differs (both not empty). """ stage1 = StageConfig("/target", [LocalPeer(name="one"), ], [RemotePeer(), ]) stage2 = StageConfig("/target", [LocalPeer(name="two"), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_010(self): """ Test comparison of two differing objects, remotePeers differs (one None, one empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], None) stage2 = StageConfig("/target", [LocalPeer(), ], []) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_011(self): """ Test comparison of two differing objects, remotePeers differs (one None, one not empty). 
""" stage1 = StageConfig("/target", [LocalPeer(), ], None) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_012(self): """ Test comparison of two differing objects, remotePeers differs (one empty, one not empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], []) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(stage1 < stage2) self.failUnless(stage1 <= stage2) self.failUnless(not stage1 > stage2) self.failUnless(not stage1 >= stage2) self.failUnless(stage1 != stage2) def testComparison_013(self): """ Test comparison of two differing objects, remotePeers differs (both not empty). """ stage1 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(name="two"), ]) stage2 = StageConfig("/target", [LocalPeer(), ], [RemotePeer(name="one"), ]) self.failIfEqual(stage1, stage2) self.failUnless(not stage1 == stage2) self.failUnless(not stage1 < stage2) self.failUnless(not stage1 <= stage2) self.failUnless(stage1 > stage2) self.failUnless(stage1 >= stage2) self.failUnless(stage1 != stage2) ######################## # TestStoreConfig class ######################## class TestStoreConfig(unittest.TestCase): """Tests for the StoreConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. 
        bad variable names).
        """
        obj = StoreConfig()
        obj.__repr__()
        obj.__str__()

    ##################################
    # Test constructor and attributes
    ##################################

    def testConstructor_001(self):
        """
        Test constructor with no values filled in.
        """
        store = StoreConfig()
        self.failUnlessEqual(None, store.sourceDir)
        self.failUnlessEqual(None, store.mediaType)
        self.failUnlessEqual(None, store.deviceType)
        self.failUnlessEqual(None, store.devicePath)
        self.failUnlessEqual(None, store.deviceScsiId)
        self.failUnlessEqual(None, store.driveSpeed)
        self.failUnlessEqual(False, store.checkData)
        self.failUnlessEqual(False, store.checkMedia)
        self.failUnlessEqual(False, store.warnMidnite)
        self.failUnlessEqual(False, store.noEject)
        self.failUnlessEqual(None, store.blankBehavior)
        self.failUnlessEqual(None, store.refreshMediaDelay)
        self.failUnlessEqual(None, store.ejectDelay)

    def testConstructor_002(self):
        """
        Test constructor with all values filled in, with valid values.
        """
        behavior = BlankBehavior("weekly", "1.3")
        store = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0",
                            4, True, True, True, True, behavior, 12, 13)
        self.failUnlessEqual("/source", store.sourceDir)
        self.failUnlessEqual("cdr-74", store.mediaType)
        self.failUnlessEqual("cdwriter", store.deviceType)
        self.failUnlessEqual("/dev/cdrw", store.devicePath)
        self.failUnlessEqual("0,0,0", store.deviceScsiId)
        self.failUnlessEqual(4, store.driveSpeed)
        self.failUnlessEqual(True, store.checkData)
        self.failUnlessEqual(True, store.checkMedia)
        self.failUnlessEqual(True, store.warnMidnite)
        self.failUnlessEqual(True, store.noEject)
        self.failUnlessEqual(behavior, store.blankBehavior)
        self.failUnlessEqual(12, store.refreshMediaDelay)
        self.failUnlessEqual(13, store.ejectDelay)

    def testConstructor_003(self):
        """
        Test assignment of sourceDir attribute, None value.
""" store = StoreConfig(sourceDir="/whatever") self.failUnlessEqual("/whatever", store.sourceDir) store.sourceDir = None self.failUnlessEqual(None, store.sourceDir) def testConstructor_004(self): """ Test assignment of sourceDir attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) store.sourceDir = "/whatever" self.failUnlessEqual("/whatever", store.sourceDir) def testConstructor_005(self): """ Test assignment of sourceDir attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) self.failUnlessAssignRaises(ValueError, store, "sourceDir", "") self.failUnlessEqual(None, store.sourceDir) def testConstructor_006(self): """ Test assignment of sourceDir attribute, invalid value (non-absolute). """ store = StoreConfig() self.failUnlessEqual(None, store.sourceDir) self.failUnlessAssignRaises(ValueError, store, "sourceDir", "bogus") self.failUnlessEqual(None, store.sourceDir) def testConstructor_007(self): """ Test assignment of mediaType attribute, None value. """ store = StoreConfig(mediaType="cdr-74") self.failUnlessEqual("cdr-74", store.mediaType) store.mediaType = None self.failUnlessEqual(None, store.mediaType) def testConstructor_008(self): """ Test assignment of mediaType attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.mediaType) store.mediaType = "cdr-74" self.failUnlessEqual("cdr-74", store.mediaType) store.mediaType = "cdrw-74" self.failUnlessEqual("cdrw-74", store.mediaType) store.mediaType = "cdr-80" self.failUnlessEqual("cdr-80", store.mediaType) store.mediaType = "cdrw-80" self.failUnlessEqual("cdrw-80", store.mediaType) store.mediaType = "dvd+r" self.failUnlessEqual("dvd+r", store.mediaType) store.mediaType = "dvd+rw" self.failUnlessEqual("dvd+rw", store.mediaType) def testConstructor_009(self): """ Test assignment of mediaType attribute, invalid value (empty). 
""" store = StoreConfig() self.failUnlessEqual(None, store.mediaType) self.failUnlessAssignRaises(ValueError, store, "mediaType", "") self.failUnlessEqual(None, store.mediaType) def testConstructor_010(self): """ Test assignment of mediaType attribute, invalid value (not in list). """ store = StoreConfig() self.failUnlessEqual(None, store.mediaType) self.failUnlessAssignRaises(ValueError, store, "mediaType", "floppy") self.failUnlessEqual(None, store.mediaType) def testConstructor_011(self): """ Test assignment of deviceType attribute, None value. """ store = StoreConfig(deviceType="cdwriter") self.failUnlessEqual("cdwriter", store.deviceType) store.deviceType = None self.failUnlessEqual(None, store.deviceType) def testConstructor_012(self): """ Test assignment of deviceType attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.deviceType) store.deviceType = "cdwriter" self.failUnlessEqual("cdwriter", store.deviceType) store.deviceType = "dvdwriter" self.failUnlessEqual("dvdwriter", store.deviceType) def testConstructor_013(self): """ Test assignment of deviceType attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.deviceType) self.failUnlessAssignRaises(ValueError, store, "deviceType", "") self.failUnlessEqual(None, store.deviceType) def testConstructor_014(self): """ Test assignment of deviceType attribute, invalid value (not in list). """ store = StoreConfig() self.failUnlessEqual(None, store.deviceType) self.failUnlessAssignRaises(ValueError, store, "deviceType", "ftape") self.failUnlessEqual(None, store.deviceType) def testConstructor_015(self): """ Test assignment of devicePath attribute, None value. """ store = StoreConfig(devicePath="/dev/cdrw") self.failUnlessEqual("/dev/cdrw", store.devicePath) store.devicePath = None self.failUnlessEqual(None, store.devicePath) def testConstructor_016(self): """ Test assignment of devicePath attribute, valid value. 
""" store = StoreConfig() self.failUnlessEqual(None, store.devicePath) store.devicePath = "/dev/cdrw" self.failUnlessEqual("/dev/cdrw", store.devicePath) def testConstructor_017(self): """ Test assignment of devicePath attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.devicePath) self.failUnlessAssignRaises(ValueError, store, "devicePath", "") self.failUnlessEqual(None, store.devicePath) def testConstructor_018(self): """ Test assignment of devicePath attribute, invalid value (non-absolute). """ store = StoreConfig() self.failUnlessEqual(None, store.devicePath) self.failUnlessAssignRaises(ValueError, store, "devicePath", "dev/cdrw") self.failUnlessEqual(None, store.devicePath) def testConstructor_019(self): """ Test assignment of deviceScsiId attribute, None value. """ store = StoreConfig(deviceScsiId="0,0,0") self.failUnlessEqual("0,0,0", store.deviceScsiId) store.deviceScsiId = None self.failUnlessEqual(None, store.deviceScsiId) def testConstructor_020(self): """ Test assignment of deviceScsiId attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.deviceScsiId) store.deviceScsiId = "0,0,0" self.failUnlessEqual("0,0,0", store.deviceScsiId) store.deviceScsiId = "ATA:0,0,0" self.failUnlessEqual("ATA:0,0,0", store.deviceScsiId) def testConstructor_021(self): """ Test assignment of deviceScsiId attribute, invalid value (empty). """ store = StoreConfig() self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "") self.failUnlessEqual(None, store.deviceScsiId) def testConstructor_022(self): """ Test assignment of deviceScsiId attribute, invalid value (invalid id). 
""" store = StoreConfig() self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "ATA;0,0,0") self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "ATAPI-0,0,0") self.failUnlessEqual(None, store.deviceScsiId) self.failUnlessAssignRaises(ValueError, store, "deviceScsiId", "1:2:3") self.failUnlessEqual(None, store.deviceScsiId) def testConstructor_023(self): """ Test assignment of driveSpeed attribute, None value. """ store = StoreConfig(driveSpeed=4) self.failUnlessEqual(4, store.driveSpeed) store.driveSpeed = None self.failUnlessEqual(None, store.driveSpeed) #pylint: disable=R0204 def testConstructor_024(self): """ Test assignment of driveSpeed attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.driveSpeed) store.driveSpeed = 4 self.failUnlessEqual(4, store.driveSpeed) store.driveSpeed = "12" self.failUnlessEqual(12, store.driveSpeed) def testConstructor_025(self): """ Test assignment of driveSpeed attribute, invalid value (not an integer). """ store = StoreConfig() self.failUnlessEqual(None, store.driveSpeed) self.failUnlessAssignRaises(ValueError, store, "driveSpeed", "blech") self.failUnlessEqual(None, store.driveSpeed) self.failUnlessAssignRaises(ValueError, store, "driveSpeed", CollectDir()) self.failUnlessEqual(None, store.driveSpeed) def testConstructor_026(self): """ Test assignment of checkData attribute, None value. """ store = StoreConfig(checkData=True) self.failUnlessEqual(True, store.checkData) store.checkData = None self.failUnlessEqual(False, store.checkData) def testConstructor_027(self): """ Test assignment of checkData attribute, valid value (real boolean). 
""" store = StoreConfig() self.failUnlessEqual(False, store.checkData) store.checkData = True self.failUnlessEqual(True, store.checkData) store.checkData = False self.failUnlessEqual(False, store.checkData) #pylint: disable=R0204 def testConstructor_028(self): """ Test assignment of checkData attribute, valid value (expression). """ store = StoreConfig() self.failUnlessEqual(False, store.checkData) store.checkData = 0 self.failUnlessEqual(False, store.checkData) store.checkData = [] self.failUnlessEqual(False, store.checkData) store.checkData = None self.failUnlessEqual(False, store.checkData) store.checkData = ['a'] self.failUnlessEqual(True, store.checkData) store.checkData = 3 self.failUnlessEqual(True, store.checkData) def testConstructor_029(self): """ Test assignment of warnMidnite attribute, None value. """ store = StoreConfig(warnMidnite=True) self.failUnlessEqual(True, store.warnMidnite) store.warnMidnite = None self.failUnlessEqual(False, store.warnMidnite) def testConstructor_030(self): """ Test assignment of warnMidnite attribute, valid value (real boolean). """ store = StoreConfig() self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = True self.failUnlessEqual(True, store.warnMidnite) store.warnMidnite = False self.failUnlessEqual(False, store.warnMidnite) #pylint: disable=R0204 def testConstructor_031(self): """ Test assignment of warnMidnite attribute, valid value (expression). """ store = StoreConfig() self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = 0 self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = [] self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = None self.failUnlessEqual(False, store.warnMidnite) store.warnMidnite = ['a'] self.failUnlessEqual(True, store.warnMidnite) store.warnMidnite = 3 self.failUnlessEqual(True, store.warnMidnite) def testConstructor_032(self): """ Test assignment of noEject attribute, None value. 
""" store = StoreConfig(noEject=True) self.failUnlessEqual(True, store.noEject) store.noEject = None self.failUnlessEqual(False, store.noEject) def testConstructor_033(self): """ Test assignment of noEject attribute, valid value (real boolean). """ store = StoreConfig() self.failUnlessEqual(False, store.noEject) store.noEject = True self.failUnlessEqual(True, store.noEject) store.noEject = False self.failUnlessEqual(False, store.noEject) #pylint: disable=R0204 def testConstructor_034(self): """ Test assignment of noEject attribute, valid value (expression). """ store = StoreConfig() self.failUnlessEqual(False, store.noEject) store.noEject = 0 self.failUnlessEqual(False, store.noEject) store.noEject = [] self.failUnlessEqual(False, store.noEject) store.noEject = None self.failUnlessEqual(False, store.noEject) store.noEject = ['a'] self.failUnlessEqual(True, store.noEject) store.noEject = 3 self.failUnlessEqual(True, store.noEject) def testConstructor_035(self): """ Test assignment of checkMedia attribute, None value. """ store = StoreConfig(checkMedia=True) self.failUnlessEqual(True, store.checkMedia) store.checkMedia = None self.failUnlessEqual(False, store.checkMedia) def testConstructor_036(self): """ Test assignment of checkMedia attribute, valid value (real boolean). """ store = StoreConfig() self.failUnlessEqual(False, store.checkMedia) store.checkMedia = True self.failUnlessEqual(True, store.checkMedia) store.checkMedia = False self.failUnlessEqual(False, store.checkMedia) #pylint: disable=R0204 def testConstructor_037(self): """ Test assignment of checkMedia attribute, valid value (expression). 
""" store = StoreConfig() self.failUnlessEqual(False, store.checkMedia) store.checkMedia = 0 self.failUnlessEqual(False, store.checkMedia) store.checkMedia = [] self.failUnlessEqual(False, store.checkMedia) store.checkMedia = None self.failUnlessEqual(False, store.checkMedia) store.checkMedia = ['a'] self.failUnlessEqual(True, store.checkMedia) store.checkMedia = 3 self.failUnlessEqual(True, store.checkMedia) def testConstructor_038(self): """ Test assignment of blankBehavior attribute, None value. """ store = StoreConfig() store.blankBehavior = None self.failUnlessEqual(None, store.blankBehavior) def testConstructor_039(self): """ Test assignment of blankBehavior store attribute, valid value. """ store = StoreConfig() store.blankBehavior = BlankBehavior() self.failUnlessEqual(BlankBehavior(), store.blankBehavior) def testConstructor_040(self): """ Test assignment of blankBehavior store attribute, invalid value (not BlankBehavior). """ store = StoreConfig() self.failUnlessAssignRaises(ValueError, store, "blankBehavior", CollectDir()) def testConstructor_041(self): """ Test assignment of refreshMediaDelay attribute, None value. """ store = StoreConfig(refreshMediaDelay=4) self.failUnlessEqual(4, store.refreshMediaDelay) store.refreshMediaDelay = None self.failUnlessEqual(None, store.refreshMediaDelay) #pylint: disable=R0204 def testConstructor_042(self): """ Test assignment of refreshMediaDelay attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.refreshMediaDelay) store.refreshMediaDelay = 4 self.failUnlessEqual(4, store.refreshMediaDelay) store.refreshMediaDelay = "12" self.failUnlessEqual(12, store.refreshMediaDelay) store.refreshMediaDelay = "0" self.failUnlessEqual(None, store.refreshMediaDelay) store.refreshMediaDelay = 0 self.failUnlessEqual(None, store.refreshMediaDelay) def testConstructor_043(self): """ Test assignment of refreshMediaDelay attribute, invalid value (not an integer). 
""" store = StoreConfig() self.failUnlessEqual(None, store.refreshMediaDelay) self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", "blech") self.failUnlessEqual(None, store.refreshMediaDelay) self.failUnlessAssignRaises(ValueError, store, "refreshMediaDelay", CollectDir()) self.failUnlessEqual(None, store.refreshMediaDelay) def testConstructor_044(self): """ Test assignment of ejectDelay attribute, None value. """ store = StoreConfig(ejectDelay=4) self.failUnlessEqual(4, store.ejectDelay) store.ejectDelay = None self.failUnlessEqual(None, store.ejectDelay) #pylint: disable=R0204 def testConstructor_045(self): """ Test assignment of ejectDelay attribute, valid value. """ store = StoreConfig() self.failUnlessEqual(None, store.ejectDelay) store.ejectDelay = 4 self.failUnlessEqual(4, store.ejectDelay) store.ejectDelay = "12" self.failUnlessEqual(12, store.ejectDelay) store.ejectDelay = "0" self.failUnlessEqual(None, store.ejectDelay) store.ejectDelay = 0 self.failUnlessEqual(None, store.ejectDelay) def testConstructor_046(self): """ Test assignment of ejectDelay attribute, invalid value (not an integer). """ store = StoreConfig() self.failUnlessEqual(None, store.ejectDelay) self.failUnlessAssignRaises(ValueError, store, "ejectDelay", "blech") self.failUnlessEqual(None, store.ejectDelay) self.failUnlessAssignRaises(ValueError, store, "ejectDelay", CollectDir()) self.failUnlessEqual(None, store.ejectDelay) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" store1 = StoreConfig() store2 = StoreConfig() self.failUnlessEqual(store1, store2) self.failUnless(store1 == store2) self.failUnless(not store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(store1 >= store2) self.failUnless(not store1 != store2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failUnlessEqual(store1, store2) self.failUnless(store1 == store2) self.failUnless(not store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(store1 >= store2) self.failUnless(not store1 != store2) def testComparison_003(self): """ Test comparison of two differing objects, sourceDir differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(sourceDir="/whatever") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_004(self): """ Test comparison of two differing objects, sourceDir differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source1", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source2", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_005(self): """ Test comparison of two differing objects, mediaType differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(mediaType="cdr-74") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_006(self): """ Test comparison of two differing objects, mediaType differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdrw-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(not store1 < store2) self.failUnless(not store1 <= store2) self.failUnless(store1 > store2) self.failUnless(store1 >= store2) self.failUnless(store1 != store2) def testComparison_007(self): """ Test comparison of two differing objects, deviceType differs (one None). 
""" store1 = StoreConfig() store2 = StoreConfig(deviceType="cdwriter") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_008(self): """ Test comparison of two differing objects, devicePath differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(devicePath="/dev/cdrw") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_009(self): """ Test comparison of two differing objects, devicePath differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/hdd", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_010(self): """ Test comparison of two differing objects, deviceScsiId differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(deviceScsiId="0,0,0") self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_011(self): """ Test comparison of two differing objects, deviceScsiId differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "ATA:0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_012(self): """ Test comparison of two differing objects, driveSpeed differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(driveSpeed=3) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_013(self): """ Test comparison of two differing objects, driveSpeed differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_014(self): """ Test comparison of two differing objects, checkData differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, False, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_015(self): """ Test comparison of two differing objects, warnMidnite differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, False, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_016(self): """ Test comparison of two differing objects, noEject differs. 
""" behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, False, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_017(self): """ Test comparison of two differing objects, checkMedia differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, False, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_018(self): """ Test comparison of two differing objects, blankBehavior differs (one None). """ behavior = BlankBehavior() store1 = StoreConfig() store2 = StoreConfig(blankBehavior=behavior) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_019(self): """ Test comparison of two differing objects, blankBehavior differs. 
""" behavior1 = BlankBehavior("daily", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior1, 4, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 4, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_020(self): """ Test comparison of two differing objects, refreshMediaDelay differs (one None). """ store1 = StoreConfig() store2 = StoreConfig(refreshMediaDelay=3) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_021(self): """ Test comparison of two differing objects, refreshMediaDelay differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 1, 5) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_022(self): """ Test comparison of two differing objects, ejectDelay differs (one None). 
""" store1 = StoreConfig() store2 = StoreConfig(ejectDelay=3) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) def testComparison_023(self): """ Test comparison of two differing objects, ejectDelay differs. """ behavior1 = BlankBehavior("weekly", "1.3") behavior2 = BlankBehavior("weekly", "1.3") store1 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior1, 4, 1) store2 = StoreConfig("/source", "cdr-74", "cdwriter", "/dev/cdrw", "0,0,0", 1, True, True, True, True, behavior2, 4, 5) self.failIfEqual(store1, store2) self.failUnless(not store1 == store2) self.failUnless(store1 < store2) self.failUnless(store1 <= store2) self.failUnless(not store1 > store2) self.failUnless(not store1 >= store2) self.failUnless(store1 != store2) ######################## # TestPurgeConfig class ######################## class TestPurgeConfig(unittest.TestCase): """Tests for the PurgeConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PurgeConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values (empty list). """ purge = PurgeConfig([]) self.failUnlessEqual([], purge.purgeDirs) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values (non-empty list). """ purge = PurgeConfig([PurgeDir(), ]) self.failUnlessEqual([PurgeDir(), ], purge.purgeDirs) def testConstructor_004(self): """ Test assignment of purgeDirs attribute, None value. """ purge = PurgeConfig([]) self.failUnlessEqual([], purge.purgeDirs) purge.purgeDirs = None self.failUnlessEqual(None, purge.purgeDirs) def testConstructor_005(self): """ Test assignment of purgeDirs attribute, [] value. """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) purge.purgeDirs = [] self.failUnlessEqual([], purge.purgeDirs) def testConstructor_006(self): """ Test assignment of purgeDirs attribute, single valid entry. """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) purge.purgeDirs = [PurgeDir(), ] self.failUnlessEqual([PurgeDir(), ], purge.purgeDirs) def testConstructor_007(self): """ Test assignment of purgeDirs attribute, multiple valid entries. """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) purge.purgeDirs = [PurgeDir("/one"), PurgeDir("/two"), ] self.failUnlessEqual([PurgeDir("/one"), PurgeDir("/two"), ], purge.purgeDirs) def testConstructor_009(self): """ Test assignment of purgeDirs attribute, single invalid entry (not a PurgeDir). """ purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) self.failUnlessAssignRaises(ValueError, purge, "purgeDirs", [ RemotePeer(), ]) self.failUnlessEqual(None, purge.purgeDirs) def testConstructor_010(self): """ Test assignment of purgeDirs attribute, mixed valid and invalid entries. 
""" purge = PurgeConfig() self.failUnlessEqual(None, purge.purgeDirs) self.failUnlessAssignRaises(ValueError, purge, "purgeDirs", [ PurgeDir(), RemotePeer(), ]) self.failUnlessEqual(None, purge.purgeDirs) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ purge1 = PurgeConfig() purge2 = PurgeConfig() self.failUnlessEqual(purge1, purge2) self.failUnless(purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(not purge1 != purge2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None (empty lists). """ purge1 = PurgeConfig([]) purge2 = PurgeConfig([]) self.failUnlessEqual(purge1, purge2) self.failUnless(purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(not purge1 != purge2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None (non-empty lists). """ purge1 = PurgeConfig([PurgeDir(), ]) purge2 = PurgeConfig([PurgeDir(), ]) self.failUnlessEqual(purge1, purge2) self.failUnless(purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(not purge1 != purge2) def testComparison_004(self): """ Test comparison of two differing objects, purgeDirs differs (one None, one empty). 
""" purge1 = PurgeConfig(None) purge2 = PurgeConfig([]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(not purge1 >= purge2) self.failUnless(purge1 != purge2) def testComparison_005(self): """ Test comparison of two differing objects, purgeDirs differs (one None, one not empty). """ purge1 = PurgeConfig(None) purge2 = PurgeConfig([PurgeDir(), ]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(not purge1 >= purge2) self.failUnless(purge1 != purge2) def testComparison_006(self): """ Test comparison of two differing objects, purgeDirs differs (one empty, one not empty). """ purge1 = PurgeConfig([]) purge2 = PurgeConfig([PurgeDir(), ]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(purge1 < purge2) self.failUnless(purge1 <= purge2) self.failUnless(not purge1 > purge2) self.failUnless(not purge1 >= purge2) self.failUnless(purge1 != purge2) def testComparison_007(self): """ Test comparison of two differing objects, purgeDirs differs (both not empty). 
""" purge1 = PurgeConfig([PurgeDir("/two"), ]) purge2 = PurgeConfig([PurgeDir("/one"), ]) self.failIfEqual(purge1, purge2) self.failUnless(not purge1 == purge2) self.failUnless(not purge1 < purge2) self.failUnless(not purge1 <= purge2) self.failUnless(purge1 > purge2) self.failUnless(purge1 >= purge2) self.failUnless(purge1 != purge2) ################### # TestConfig class ################### class TestConfig(unittest.TestCase): """Tests for the Config class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Config() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = Config(validate=False) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_002(self): """ Test empty constructor, validate=True. 
""" config = Config(validate=True) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["cback.conf.2"] contents = open(path).read() self.failUnlessRaises(ValueError, Config, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test with empty config document as data, validate=False. """ path = self.resources["cback.conf.2"] contents = open(path).read() config = Config(xmlData=contents, validate=False) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_005(self): """ Test with empty config document in a file, validate=False. """ path = self.resources["cback.conf.2"] config = Config(xmlPath=path, validate=False) self.failUnlessEqual(None, config.reference) self.failUnlessEqual(None, config.extensions) self.failUnlessEqual(None, config.options) self.failUnlessEqual(None, config.peers) self.failUnlessEqual(None, config.collect) self.failUnlessEqual(None, config.stage) self.failUnlessEqual(None, config.store) self.failUnlessEqual(None, config.purge) def testConstructor_006(self): """ Test assignment of reference attribute, None value. """ config = Config() config.reference = None self.failUnlessEqual(None, config.reference) def testConstructor_007(self): """ Test assignment of reference attribute, valid value. 
""" config = Config() config.reference = ReferenceConfig() self.failUnlessEqual(ReferenceConfig(), config.reference) def testConstructor_008(self): """ Test assignment of reference attribute, invalid value (not ReferenceConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "reference", CollectDir()) def testConstructor_009(self): """ Test assignment of extensions attribute, None value. """ config = Config() config.extensions = None self.failUnlessEqual(None, config.extensions) def testConstructor_010(self): """ Test assignment of extensions attribute, valid value. """ config = Config() config.extensions = ExtensionsConfig() self.failUnlessEqual(ExtensionsConfig(), config.extensions) def testConstructor_011(self): """ Test assignment of extensions attribute, invalid value (not ExtensionsConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "extensions", CollectDir()) def testConstructor_012(self): """ Test assignment of options attribute, None value. """ config = Config() config.options = None self.failUnlessEqual(None, config.options) def testConstructor_013(self): """ Test assignment of options attribute, valid value. """ config = Config() config.options = OptionsConfig() self.failUnlessEqual(OptionsConfig(), config.options) def testConstructor_014(self): """ Test assignment of options attribute, invalid value (not OptionsConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "options", CollectDir()) def testConstructor_015(self): """ Test assignment of collect attribute, None value. """ config = Config() config.collect = None self.failUnlessEqual(None, config.collect) def testConstructor_016(self): """ Test assignment of collect attribute, valid value. """ config = Config() config.collect = CollectConfig() self.failUnlessEqual(CollectConfig(), config.collect) def testConstructor_017(self): """ Test assignment of collect attribute, invalid value (not CollectConfig). 
""" config = Config() self.failUnlessAssignRaises(ValueError, config, "collect", CollectDir()) def testConstructor_018(self): """ Test assignment of stage attribute, None value. """ config = Config() config.stage = None self.failUnlessEqual(None, config.stage) def testConstructor_019(self): """ Test assignment of stage attribute, valid value. """ config = Config() config.stage = StageConfig() self.failUnlessEqual(StageConfig(), config.stage) def testConstructor_020(self): """ Test assignment of stage attribute, invalid value (not StageConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "stage", CollectDir()) def testConstructor_021(self): """ Test assignment of store attribute, None value. """ config = Config() config.store = None self.failUnlessEqual(None, config.store) def testConstructor_022(self): """ Test assignment of store attribute, valid value. """ config = Config() config.store = StoreConfig() self.failUnlessEqual(StoreConfig(), config.store) def testConstructor_023(self): """ Test assignment of store attribute, invalid value (not StoreConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "store", CollectDir()) def testConstructor_024(self): """ Test assignment of purge attribute, None value. """ config = Config() config.purge = None self.failUnlessEqual(None, config.purge) def testConstructor_025(self): """ Test assignment of purge attribute, valid value. """ config = Config() config.purge = PurgeConfig() self.failUnlessEqual(PurgeConfig(), config.purge) def testConstructor_026(self): """ Test assignment of purge attribute, invalid value (not PurgeConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "purge", CollectDir()) def testConstructor_027(self): """ Test assignment of peers attribute, None value. """ config = Config() config.peers = None self.failUnlessEqual(None, config.peers) def testConstructor_028(self): """ Test assignment of peers attribute, valid value. 
""" config = Config() config.peers = PeersConfig() self.failUnlessEqual(PeersConfig(), config.peers) def testConstructor_029(self): """ Test assignment of peers attribute, invalid value (not PeersConfig). """ config = Config() self.failUnlessAssignRaises(ValueError, config, "peers", CollectDir()) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = Config() config2 = Config() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, reference differs (one None). 
""" config1 = Config() config2 = Config() config2.reference = ReferenceConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, reference differs. """ config1 = Config() config1.reference = ReferenceConfig(author="one") config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig(author="two") config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_005(self): """ Test comparison of two differing objects, extensions differs (one None). """ config1 = Config() config2 = Config() config2.extensions = ExtensionsConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_006(self): """ Test comparison of two differing objects, extensions differs (one list empty, one None). 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig(None) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_007(self): """ Test comparison of two differing objects, extensions differs (one list empty, one not empty). """ config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig([]) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([ExtendedAction("one", "two", "three"), ]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_008(self): """ Test comparison of two differing objects, extensions differs (both lists not empty). 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig([ExtendedAction("one", "two", "three"), ]) config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig([ExtendedAction("one", "two", "four"), ]) config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(not config1 < config2) self.failUnless(not config1 <= config2) self.failUnless(config1 > config2) self.failUnless(config1 >= config2) self.failUnless(config1 != config2) def testComparison_009(self): """ Test comparison of two differing objects, options differs (one None). """ config1 = Config() config2 = Config() config2.options = OptionsConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_010(self): """ Test comparison of two differing objects, options differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig(startingDay="tuesday") config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig(startingDay="monday") config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(not config1 < config2) self.failUnless(not config1 <= config2) self.failUnless(config1 > config2) self.failUnless(config1 >= config2) self.failUnless(config1 != config2) def testComparison_011(self): """ Test comparison of two differing objects, collect differs (one None). """ config1 = Config() config2 = Config() config2.collect = CollectConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_012(self): """ Test comparison of two differing objects, collect differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig(collectMode="daily") config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig(collectMode="incr") config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_013(self): """ Test comparison of two differing objects, stage differs (one None). """ config1 = Config() config2 = Config() config2.stage = StageConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_014(self): """ Test comparison of two differing objects, stage differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig(targetDir="/something") config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig(targetDir="/whatever") config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_015(self): """ Test comparison of two differing objects, store differs (one None). """ config1 = Config() config2 = Config() config2.store = StoreConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_016(self): """ Test comparison of two differing objects, store differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig(deviceScsiId="ATA:0,0,0") config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig(deviceScsiId="0,0,0") config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(not config1 < config2) self.failUnless(not config1 <= config2) self.failUnless(config1 > config2) self.failUnless(config1 >= config2) self.failUnless(config1 != config2) def testComparison_017(self): """ Test comparison of two differing objects, purge differs (one None). """ config1 = Config() config2 = Config() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_018(self): """ Test comparison of two differing objects, purge differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig(purgeDirs=None) config2 = Config() config2.reference = ReferenceConfig() config2.options = OptionsConfig() config2.peers = PeersConfig() config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig(purgeDirs=[]) self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_019(self): """ Test comparison of two differing objects, peers differs (one None). """ config1 = Config() config2 = Config() config2.peers = PeersConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_020(self): """ Test comparison of two identical objects, peers differs. 
""" config1 = Config() config1.reference = ReferenceConfig() config1.extensions = ExtensionsConfig() config1.options = OptionsConfig() config1.peers = PeersConfig() config1.collect = CollectConfig() config1.stage = StageConfig() config1.store = StoreConfig() config1.purge = PurgeConfig() config2 = Config() config2.reference = ReferenceConfig() config2.extensions = ExtensionsConfig() config2.options = OptionsConfig() config2.peers = PeersConfig(localPeers=[LocalPeer(), ]) config2.collect = CollectConfig() config2.stage = StageConfig() config2.store = StoreConfig() config2.purge = PurgeConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on an empty reference section. """ config = Config() config.reference = ReferenceConfig() config._validateReference() def testValidate_002(self): """ Test validate on a non-empty reference section, with everything filled in. """ config = Config() config.reference = ReferenceConfig("author", "revision", "description", "generator") config._validateReference() def testValidate_003(self): """ Test validate on an empty extensions section, with a None list. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = None config._validateExtensions() def testValidate_004(self): """ Test validate on an empty extensions section, with [] for the list. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [] config._validateExtensions() def testValidate_005(self): """ Test validate on an a extensions section, with one empty extended action. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_006(self): """ Test validate on an a extensions section, with one extended action that has only a name. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(name="name"), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_007(self): """ Test validate on an a extensions section, with one extended action that has only a module. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(module="module"), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_008(self): """ Test validate on an a extensions section, with one extended action that has only a function. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(function="function"), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_009(self): """ Test validate on an a extensions section, with one extended action that has only an index. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ExtendedAction(index=12), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_010(self): """ Test validate on an a extensions section, with one extended action that makes sense, index order mode. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("one", "two", "three", 100) ] config._validateExtensions() def testValidate_011(self): """ Test validate on an a extensions section, with one extended action that makes sense, dependency order mode. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("one", "two", "three", dependencies=ActionDependencies()) ] config._validateExtensions() def testValidate_012(self): """ Test validate on an a extensions section, with several extended actions that make sense for various kinds of order modes. """ config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ ExtendedAction("a", "b", "c", 1), ExtendedAction("e", "f", "g", 10), ] config._validateExtensions() config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", 1), ExtendedAction("e", "f", "g", 10), ] config._validateExtensions() config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] config._validateExtensions() def testValidate_012a(self): """ Test validate on an a extensions section, with several extended actions that don't have the proper ordering modes. 
""" config = Config() config.extensions = ExtensionsConfig() config.extensions.orderMode = None config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", 100), ExtendedAction("e", "f", "g", 12), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "index" config.extensions.actions = [ ExtendedAction("a", "b", "c", 12), ExtendedAction("e", "f", "g", dependencies=ActionDependencies()), ] self.failUnlessRaises(ValueError, config._validateExtensions) config.extensions = ExtensionsConfig() config.extensions.orderMode = "dependency" config.extensions.actions = [ ExtendedAction("a", "b", "c", dependencies=ActionDependencies()), ExtendedAction("e", "f", "g", 12), ] self.failUnlessRaises(ValueError, config._validateExtensions) def testValidate_013(self): """ Test validate on an empty options section. """ config = Config() config.options = OptionsConfig() self.failUnlessRaises(ValueError, config._validateOptions) def testValidate_014(self): """ Test validate on a non-empty options section, with everything filled in. """ config = Config() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config._validateOptions() def testValidate_015(self): """ Test validate on a non-empty options section, with individual items missing. 
""" config = Config() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config._validateOptions() config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.startingDay = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.workingDir = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.backupUser = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.backupGroup = None self.failUnlessRaises(ValueError, config._validateOptions) config.options = OptionsConfig("monday", "/whatever", "user", "group", "command") config.options.rcpCommand = None self.failUnlessRaises(ValueError, config._validateOptions) def testValidate_016(self): """ Test validate on an empty collect section. """ config = Config() config.collect = CollectConfig() self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_017(self): """ Test validate on collect section containing only targetDir. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config._validateCollect() # we no longer validate that at least one file or dir is required here def testValidate_018(self): """ Test validate on collect section containing only targetDir and one collectDirs entry that is empty. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_018a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry that is empty. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_019(self): """ Test validate on collect section containing only targetDir and one collectDirs entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff"), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_019a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry with only a path. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff"), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_020(self): """ Test validate on collect section containing only targetDir and one collectDirs entry with path, collect mode, archive mode and ignore file. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i"), ] config._validateCollect() def testValidate_020a(self): """ Test validate on collect section containing only targetDir and one collectFiles entry with path, collect mode and archive mode. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectFiles = [ CollectFile(absolutePath="/stuff", collectMode="incr", archiveMode="tar"), ] config._validateCollect() def testValidate_021(self): """ Test validate on collect section containing targetDir, collect mode, archive mode and ignore file, and one collectDirs entry with only a path. 
      """
      config = Config()
      config.collect = CollectConfig()
      config.collect.targetDir = "/whatever"
      config.collect.collectMode = "incr"
      config.collect.archiveMode = "tar"
      config.collect.ignoreFile = "ignore"
      config.collect.collectDirs = [ CollectDir(absolutePath="/stuff"), ]
      config._validateCollect()

   def testValidate_021a(self):
      """
      Test validate on collect section containing targetDir, collect mode,
      archive mode and ignore file, and one collectFiles entry with only a path.
      """
      config = Config()
      config.collect = CollectConfig()
      config.collect.targetDir = "/whatever"
      config.collect.collectMode = "incr"
      config.collect.archiveMode = "tar"
      config.collect.ignoreFile = "ignore"
      config.collect.collectFiles = [ CollectFile(absolutePath="/stuff"), ]
      config._validateCollect()

   def testValidate_022(self):
      """
      Test validate on collect section containing targetDir, but with collect
      mode, archive mode and ignore file mixed between main section and
      directories.
      """
      config = Config()
      config.collect = CollectConfig()
      config.collect.targetDir = "/whatever"
      config.collect.archiveMode = "tar"
      config.collect.ignoreFile = "ignore"
      config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", ignoreFile="i"), ]
      config._validateCollect()
      config.collect.collectDirs.append(CollectDir(absolutePath="/stuff2"))
      self.failUnlessRaises(ValueError, config._validateCollect)
      config.collect.collectDirs[-1].collectMode = "daily"
      config._validateCollect()

   def testValidate_022a(self):
      """
      Test validate on collect section containing targetDir, but with collect
      mode and archive mode mixed between main section and directories.
      """
      config = Config()
      config.collect = CollectConfig()
      config.collect.targetDir = "/whatever"
      config.collect.archiveMode = "tar"
      config.collect.collectFiles = [ CollectFile(absolutePath="/stuff", collectMode="incr", archiveMode="targz"), ]
      config._validateCollect()
      config.collect.collectFiles.append(CollectFile(absolutePath="/stuff2"))
      self.failUnlessRaises(ValueError, config._validateCollect)
      config.collect.collectFiles[-1].collectMode = "daily"
      config._validateCollect()

   def testValidate_023(self):
      """
      Test validate on an empty stage section.
      """
      config = Config()
      config.stage = StageConfig()
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_024(self):
      """
      Test validate on stage section containing only targetDir and None for
      the lists.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = None
      config.stage.remotePeers = None
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_025(self):
      """
      Test validate on stage section containing only targetDir and [] for
      the lists.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = []
      config.stage.remotePeers = []
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_026(self):
      """
      Test validate on stage section containing targetDir and one local peer
      that is empty.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(), ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_027(self):
      """
      Test validate on stage section containing targetDir and one local peer
      with only a name.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(name="name"), ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_028(self):
      """
      Test validate on stage section containing targetDir and one local peer
      with a name and path, None for remote list.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ]
      config.stage.remotePeers = None
      config._validateStage()

   def testValidate_029(self):
      """
      Test validate on stage section containing targetDir and one local peer
      with a name and path, [] for remote list.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ]
      config.stage.remotePeers = []
      config._validateStage()

   def testValidate_030(self):
      """
      Test validate on stage section containing targetDir and one remote peer
      that is empty.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.remotePeers = [RemotePeer(), ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_031(self):
      """
      Test validate on stage section containing targetDir and one remote peer
      with only a name.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.remotePeers = [RemotePeer(name="blech"), ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_032(self):
      """
      Test validate on stage section containing targetDir and one remote peer
      with a name and path, None for local list.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = None
      config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validateStage)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validateStage()
      config.options = None
      self.failUnlessRaises(ValueError, config._validateStage)
      config.stage.remotePeers[-1].remoteUser = "remote"
      config.stage.remotePeers[-1].rcpCommand = "command"
      config._validateStage()

   def testValidate_033(self):
      """
      Test validate on stage section containing targetDir and one remote peer
      with a name and path, [] for local list.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = []
      config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validateStage)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validateStage()
      config.options = None
      self.failUnlessRaises(ValueError, config._validateStage)
      config.stage.remotePeers[-1].remoteUser = "remote"
      config.stage.remotePeers[-1].rcpCommand = "command"
      config._validateStage()

   def testValidate_034(self):
      """
      Test validate on stage section containing targetDir and one remote and
      one local peer.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), ]
      config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validateStage)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validateStage()
      config.options = None
      self.failUnlessRaises(ValueError, config._validateStage)
      config.stage.remotePeers[-1].remoteUser = "remote"
      config.stage.remotePeers[-1].rcpCommand = "command"
      config._validateStage()

   def testValidate_035(self):
      """
      Test validate on stage section containing targetDir and multiple remote
      and local peers.
      """
      config = Config()
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"),
                                 LocalPeer("one", "/two"), LocalPeer("a", "/b"), ]
      config.stage.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"),
                                  RemotePeer("c", "/d"), ]
      self.failUnlessRaises(ValueError, config._validateStage)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validateStage()
      config.options = None
      self.failUnlessRaises(ValueError, config._validateStage)
      config.stage.remotePeers[-1].remoteUser = "remote"
      config.stage.remotePeers[-1].rcpCommand = "command"
      self.failUnlessRaises(ValueError, config._validateStage)
      config.stage.remotePeers[0].remoteUser = "remote"
      config.stage.remotePeers[0].rcpCommand = "command"
      config._validateStage()

   def testValidate_036(self):
      """
      Test validate on an empty store section.
      """
      config = Config()
      config.store = StoreConfig()
      self.failUnlessRaises(ValueError, config._validateStore)

   def testValidate_037(self):
      """
      Test validate on store section with everything filled in.
      """
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdrw-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-80"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdrw-80"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "dvd+r"
      config.store.deviceType = "dvdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "dvd+rw"
      config.store.deviceType = "dvdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()

   def testValidate_038(self):
      """
      Test validate on store section missing one each of required fields.
      """
      config = Config()
      config.store = StoreConfig()
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)

   def testValidate_039(self):
      """
      Test validate on store section missing one each of device type, drive
      speed and capacity mode and the booleans.
      """
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      config._validateStore()

   def testValidate_039a(self):
      """
      Test validate on store section with everything filled in, but mismatched
      device/media.
      """
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-74"
      config.store.deviceType = "dvdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdrw-74"
      config.store.deviceType = "dvdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdr-80"
      config.store.deviceType = "dvdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "cdrw-80"
      config.store.deviceType = "dvdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "dvd+rw"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)
      config = Config()
      config.store = StoreConfig()
      config.store.sourceDir = "/source"
      config.store.mediaType = "dvd+r"
      config.store.deviceType = "cdwriter"
      config.store.devicePath = "/dev/cdrw"
      config.store.deviceScsiId = "0,0,0"
      config.store.driveSpeed = 4
      config.store.checkData = True
      config.store.checkMedia = True
      config.store.warnMidnite = True
      config.store.noEject = True
      self.failUnlessRaises(ValueError, config._validateStore)

   def testValidate_040(self):
      """
      Test validate on an empty purge section, with a None list.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = None
      config._validatePurge()

   def testValidate_041(self):
      """
      Test validate on an empty purge section, with [] for the list.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = []
      config._validatePurge()

   def testValidate_042(self):
      """
      Test validate on a purge section, with one empty purge dir.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = [PurgeDir(), ]
      self.failUnlessRaises(ValueError, config._validatePurge)

   def testValidate_043(self):
      """
      Test validate on a purge section, with one purge dir that has only a path.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = [PurgeDir(absolutePath="/whatever"), ]
      self.failUnlessRaises(ValueError, config._validatePurge)

   def testValidate_044(self):
      """
      Test validate on a purge section, with one purge dir that has only
      retain days.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = [PurgeDir(retainDays=3), ]
      self.failUnlessRaises(ValueError, config._validatePurge)

   def testValidate_045(self):
      """
      Test validate on a purge section, with one purge dir that makes sense.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = [ PurgeDir(absolutePath="/whatever", retainDays=4), ]
      config._validatePurge()

   def testValidate_046(self):
      """
      Test validate on a purge section, with several purge dirs that make sense.
      """
      config = Config()
      config.purge = PurgeConfig()
      config.purge.purgeDirs = [ PurgeDir("/whatever", 4), PurgeDir("/etc/different", 12), ]
      config._validatePurge()

   def testValidate_047(self):
      """
      Test that we catch a duplicate extended action name.
      """
      config = Config()
      config.extensions = ExtensionsConfig()
      config.extensions.orderMode = "dependency"
      config.extensions.actions = [
         ExtendedAction("unique1", "b", "c", dependencies=ActionDependencies()),
         ExtendedAction("unique2", "f", "g", dependencies=ActionDependencies()),
      ]
      config._validateExtensions()
      config.extensions.actions = [
         ExtendedAction("duplicate", "b", "c", dependencies=ActionDependencies()),
         ExtendedAction("duplicate", "f", "g", dependencies=ActionDependencies()),
      ]
      self.failUnlessRaises(ValueError, config._validateExtensions)

   def testValidate_048(self):
      """
      Test that we catch a duplicate local peer name in stage configuration.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [
         LocalPeer(name="unique1", collectDir="/nowhere"),
         LocalPeer(name="unique2", collectDir="/nowhere"),
      ]
      config._validateStage()
      config.stage.localPeers = [
         LocalPeer(name="duplicate", collectDir="/nowhere"),
         LocalPeer(name="duplicate", collectDir="/nowhere"),
      ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_049(self):
      """
      Test that we catch a duplicate remote peer name in stage configuration.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.remotePeers = [
         RemotePeer(name="unique1", collectDir="/some/path/to/data"),
         RemotePeer(name="unique2", collectDir="/some/path/to/data"),
      ]
      config._validateStage()
      config.stage.remotePeers = [
         RemotePeer(name="duplicate", collectDir="/some/path/to/data"),
         RemotePeer(name="duplicate", collectDir="/some/path/to/data"),
      ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_050(self):
      """
      Test that we catch a peer name duplicated between remote and local in
      stage configuration.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.stage = StageConfig()
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.stage.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config._validateStage()
      config.stage.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), ]
      config.stage.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_051(self):
      """
      Test validate on a None peers section.
      """
      config = Config()
      config.peers = None
      config._validatePeers()

   def testValidate_052(self):
      """
      Test validate on an empty peers section.
      """
      config = Config()
      config.peers = PeersConfig()
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_053(self):
      """
      Test validate on peers section containing None for the lists.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = None
      config.peers.remotePeers = None
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_054(self):
      """
      Test validate on peers section containing [] for the lists.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = []
      config.peers.remotePeers = []
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_055(self):
      """
      Test validate on peers section containing one local peer that is empty.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = [LocalPeer(), ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_056(self):
      """
      Test validate on peers section containing local peer with only a name.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = [LocalPeer(name="name"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_057(self):
      """
      Test validate on peers section containing one local peer with a name and
      path, None for remote list.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ]
      config.peers.remotePeers = None
      config._validatePeers()

   def testValidate_058(self):
      """
      Test validate on peers section containing one local peer with a name and
      path, [] for remote list.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = [LocalPeer(name="name", collectDir="/somewhere"), ]
      config.peers.remotePeers = []
      config._validatePeers()

   def testValidate_059(self):
      """
      Test validate on peers section containing one remote peer that is empty.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.remotePeers = [RemotePeer(), ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_060(self):
      """
      Test validate on peers section containing one remote peer with only a name.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.remotePeers = [RemotePeer(name="blech"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_061(self):
      """
      Test validate on peers section containing one remote peer with a name and
      path, None for local list.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = None
      config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validatePeers()
      config.options = None
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.peers.remotePeers[-1].remoteUser = "remote"
      config.peers.remotePeers[-1].rcpCommand = "command"
      config._validatePeers()

   def testValidate_062(self):
      """
      Test validate on peers section containing one remote peer with a name and
      path, [] for local list.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = []
      config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validatePeers()
      config.options = None
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.peers.remotePeers[-1].remoteUser = "remote"
      config.peers.remotePeers[-1].rcpCommand = "command"
      config._validatePeers()

   def testValidate_063(self):
      """
      Test validate on peers section containing one remote and one local peer.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"), ]
      config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validatePeers()
      config.options = None
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.peers.remotePeers[-1].remoteUser = "remote"
      config.peers.remotePeers[-1].rcpCommand = "command"
      config._validatePeers()

   def testValidate_064(self):
      """
      Test validate on peers section containing multiple remote and local peers.
      """
      config = Config()
      config.peers = PeersConfig()
      config.peers.localPeers = [LocalPeer(name="metoo", collectDir="/nowhere"),
                                 LocalPeer("one", "/two"), LocalPeer("a", "/b"), ]
      config.peers.remotePeers = [RemotePeer(name="blech", collectDir="/some/path/to/data"),
                                  RemotePeer("c", "/d"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config._validatePeers()
      config.options = None
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.peers.remotePeers[-1].remoteUser = "remote"
      config.peers.remotePeers[-1].rcpCommand = "command"
      self.failUnlessRaises(ValueError, config._validatePeers)
      config.peers.remotePeers[0].remoteUser = "remote"
      config.peers.remotePeers[0].rcpCommand = "command"
      config._validatePeers()

   def testValidate_065(self):
      """
      Test that we catch a duplicate local peer name in peers configuration.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.peers.localPeers = [
         LocalPeer(name="unique1", collectDir="/nowhere"),
         LocalPeer(name="unique2", collectDir="/nowhere"),
      ]
      config._validatePeers()
      config.peers.localPeers = [
         LocalPeer(name="duplicate", collectDir="/nowhere"),
         LocalPeer(name="duplicate", collectDir="/nowhere"),
      ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_066(self):
      """
      Test that we catch a duplicate remote peer name in peers configuration.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.peers.remotePeers = [
         RemotePeer(name="unique1", collectDir="/some/path/to/data"),
         RemotePeer(name="unique2", collectDir="/some/path/to/data"),
      ]
      config._validatePeers()
      config.peers.remotePeers = [
         RemotePeer(name="duplicate", collectDir="/some/path/to/data"),
         RemotePeer(name="duplicate", collectDir="/some/path/to/data"),
      ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_067(self):
      """
      Test that we catch a peer name duplicated between remote and local in
      peers configuration.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config._validatePeers()
      config.peers.localPeers = [ LocalPeer(name="duplicate", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="duplicate", collectDir="/some/path/to/data"), ]
      self.failUnlessRaises(ValueError, config._validatePeers)

   def testValidate_068(self):
      """
      Test that stage peers can be None, if peers configuration is not None.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.stage = StageConfig()
      config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = None
      config.stage.remotePeers = None
      config._validatePeers()
      config._validateStage()

   def testValidate_069(self):
      """
      Test that stage peers can be empty lists, if peers configuration is not
      None.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.stage = StageConfig()
      config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = []
      config.stage.remotePeers = []
      config._validatePeers()
      config._validateStage()

   def testValidate_070(self):
      """
      Test that staging local peers must be valid if filled in, even if peers
      configuration is not None.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.stage = StageConfig()
      config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(), ]  # empty local peer is invalid, so validation should catch it
      config.stage.remotePeers = []
      config._validatePeers()
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_071(self):
      """
      Test that staging remote peers must be valid if filled in, even if peers
      configuration is not None.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.stage = StageConfig()
      config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = []
      config.stage.remotePeers = [RemotePeer(), ]  # empty remote peer is invalid, so validation should catch it
      config._validatePeers()
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_072(self):
      """
      Test that staging local and remote peers must be valid if filled in, even
      if peers configuration is not None.
      """
      config = Config()
      config.options = OptionsConfig(backupUser="ken", rcpCommand="command")
      config.peers = PeersConfig()
      config.stage = StageConfig()
      config.peers.localPeers = [ LocalPeer(name="unique1", collectDir="/nowhere"), ]
      config.peers.remotePeers = [ RemotePeer(name="unique2", collectDir="/some/path/to/data"), ]
      config.stage.targetDir = "/whatever"
      config.stage.localPeers = [LocalPeer(), ]  # empty local peer is invalid, so validation should catch it
      config.stage.remotePeers = [RemotePeer(), ]  # empty remote peer is invalid, so validation should catch it
      config._validatePeers()
      self.failUnlessRaises(ValueError, config._validateStage)

   def testValidate_073(self):
      """
      Confirm that a remote peer is required to have a backup user if one is
      not set in options.
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.backupUser = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].remoteUser = "ken" config._validatePeers() def testValidate_074(self): """ Confirm that remote peer is required to have rcp command if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.rcpCommand = None self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].rcpCommand = "rcp" config._validatePeers() def testValidate_075(self): """ Confirm that remote managed peer is required to have rsh command if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.rshCommand = None config._validatePeers() config.peers.remotePeers[0].managed = True self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].rshCommand = "rsh" config._validatePeers() def testValidate_076(self): """ Confirm that remote managed peer is required to have cback command if not set in options. 
""" config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.cbackCommand = None config._validatePeers() config.peers.remotePeers[0].managed = True self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].cbackCommand = "cback" config._validatePeers() def testValidate_077(self): """ Confirm that remote managed peer is required to have managed actions list if not set in options. """ config = Config() config.options = OptionsConfig(backupUser="ken", rcpCommand="rcp", rshCommand="rsh", cbackCommand="cback", managedActions=["collect"], ) config.peers = PeersConfig() config.peers.localPeers = [] config.peers.remotePeers = [ RemotePeer(name="remote", collectDir="/path"), ] config._validatePeers() config.options.managedActions = None config._validatePeers() config.peers.remotePeers[0].managed = True self.failUnlessRaises(ValueError, config._validatePeers) config.options.managedActions = [] self.failUnlessRaises(ValueError, config._validatePeers) config.peers.remotePeers[0].managedActions = ["collect", ] config._validatePeers() def testValidate_078(self): """ Test case where dereference is True but link depth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=True), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_079(self): """ Test case where dereference is True but link depth is zero. 
""" config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=True), ] self.failUnlessRaises(ValueError, config._validateCollect) def testValidate_080(self): """ Test case where dereference is False and linkDepth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=False), ] config._validateCollect() def testValidate_081(self): """ Test case where dereference is None and linkDepth is None. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=None, dereference=None), ] config._validateCollect() def testValidate_082(self): """ Test case where dereference is False and linkDepth is zero. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=False), ] config._validateCollect() def testValidate_083(self): """ Test case where dereference is None and linkDepth is zero. """ config = Config() config.collect = CollectConfig() config.collect.targetDir = "/whatever" config.collect.collectDirs = [ CollectDir(absolutePath="/stuff", collectMode="incr", archiveMode="tar", ignoreFile="i", linkDepth=0, dereference=None), ] config._validateCollect() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document, validate=False. 
""" path = self.resources["cback.conf.2"] config = Config(xmlPath=path, validate=False) expected = Config() self.failUnlessEqual(expected, config) def testParse_002(self): """ Parse empty config document, validate=True. """ path = self.resources["cback.conf.2"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_003(self): """ Parse config document containing only a reference section, containing only required fields, validate=False. """ path = self.resources["cback.conf.3"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig() self.failUnlessEqual(expected, config) def testParse_004(self): """ Parse config document containing only a reference section, containing only required fields, validate=True. """ path = self.resources["cback.conf.3"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_005(self): """ Parse config document containing only a reference section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.4"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") self.failUnlessEqual(expected, config) def testParse_006(self): """ Parse config document containing only a reference section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.4"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_007(self): """ Parse config document containing only a extensions section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.16"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 1)) self.failUnlessEqual(expected, config) def testParse_008(self): """ Parse config document containing only a extensions section, containing only required fields, validate=True. """ path = self.resources["cback.conf.16"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_009(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "index", validate=False. """ path = self.resources["cback.conf.18"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 1)) self.failUnlessEqual(expected, config) def testParse_009a(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "dependency", validate=False. 
""" path = self.resources["cback.conf.19"] config = Config(xmlPath=path, validate=False) expected = Config() expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("sysinfo", "CedarBackup2.extend.sysinfo", "executeAction", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("mysql", "CedarBackup2.extend.mysql", "executeAction", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("postgresql", "CedarBackup2.extend.postgresql", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["one", ]))) expected.extensions.actions.append(ExtendedAction("subversion", "CedarBackup2.extend.subversion", "executeAction", index=None, dependencies=ActionDependencies(afterList=["one", ]))) expected.extensions.actions.append(ExtendedAction("mbox", "CedarBackup2.extend.mbox", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["one", ], afterList=["one", ]))) expected.extensions.actions.append(ExtendedAction("encrypt", "CedarBackup2.extend.encrypt", "executeAction", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", "d", ], afterList=["one", "two", "three", "four", "five", "six", "seven", "eight", ]))) expected.extensions.actions.append(ExtendedAction("amazons3", "CedarBackup2.extend.amazons3", "executeAction", index=None, dependencies=ActionDependencies())) self.failUnlessEqual(expected, config) def testParse_010(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "index", validate=True. """ path = self.resources["cback.conf.18"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_010a(self): """ Parse config document containing only a extensions section, containing all fields, order mode is "dependency", validate=True. 
""" path = self.resources["cback.conf.19"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_011(self): """ Parse config document containing only an options section, containing only required fields, validate=False. """ path = self.resources["cback.conf.5"] config = Config(xmlPath=path, validate=False) expected = Config() expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B") self.failUnlessEqual(expected, config) def testParse_012(self): """ Parse config document containing only an options section, containing only required fields, validate=True. """ path = self.resources["cback.conf.5"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_013(self): """ Parse config document containing only an options section, containing required and optional fields, validate=False. """ path = self.resources["cback.conf.6"] config = Config(xmlPath=path, validate=False) expected = Config() expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] self.failUnlessEqual(expected, config) def testParse_014(self): """ Parse config document containing only an options section, containing required and optional fields, validate=True. """ path = self.resources["cback.conf.6"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_015(self): """ Parse config document containing only a collect section, containing only required fields, validate=False. (Case with single collect directory.) 
""" path = self.resources["cback.conf.7"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "tar", ".ignore") expected.collect.collectDirs = [CollectDir(absolutePath="/etc"), ] self.failUnlessEqual(expected, config) def testParse_015a(self): """ Parse config document containing only a collect section, containing only required fields, validate=False. (Case with single collect file.) """ path = self.resources["cback.conf.17"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "tar", ".ignore") expected.collect.collectFiles = [CollectFile(absolutePath="/etc"), ] self.failUnlessEqual(expected, config) def testParse_016(self): """ Parse config document containing only a collect section, containing only required fields, validate=True. (Case with single collect directory.) """ path = self.resources["cback.conf.7"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_016a(self): """ Parse config document containing only a collect section, containing only required fields, validate=True. (Case with single collect file.) """ path = self.resources["cback.conf.17"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_017(self): """ Parse config document containing only a collect section, containing required and optional fields, validate=False. 
""" path = self.resources["cback.conf.8"] config = Config(xmlPath=path, validate=False) expected = Config() expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root", recursionLevel=1)) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) self.failUnlessEqual(expected, config) def testParse_018(self): """ Parse config document containing only a collect section, containing required and optional fields, validate=True. """ path = self.resources["cback.conf.8"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_019(self): """ Parse config document containing only a stage section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.9"] config = Config(xmlPath=path, validate=False) expected = Config() expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = None expected.stage.remotePeers = [ RemotePeer("machine2", "/opt/backup/collect"), ] self.failUnlessEqual(expected, config) def testParse_020(self): """ Parse config document containing only a stage section, containing only required fields, validate=True. """ path = self.resources["cback.conf.9"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_021(self): """ Parse config document containing only a stage section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.10"] config = Config(xmlPath=path, validate=False) expected = Config() expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) self.failUnlessEqual(expected, config) def testParse_022(self): """ Parse config document containing only a stage section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.10"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_023(self): """ Parse config document containing only a store section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.11"] config = Config(xmlPath=path, validate=False) expected = Config() expected.store = StoreConfig("/opt/backup/staging", mediaType="cdrw-74", devicePath="/dev/cdrw", deviceScsiId=None) self.failUnlessEqual(expected, config) def testParse_024(self): """ Parse config document containing only a store section, containing only required fields, validate=True. """ path = self.resources["cback.conf.11"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_025(self): """ Parse config document containing only a store section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.12"] config = Config(xmlPath=path, validate=False) expected = Config() expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = "0,0,0" expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.refreshMediaDelay = 12 expected.store.ejectDelay = 13 expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" self.failUnlessEqual(expected, config) def testParse_026(self): """ Parse config document containing only a store section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.12"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_027(self): """ Parse config document containing only a purge section, containing only required fields, validate=False. 
""" path = self.resources["cback.conf.13"] config = Config(xmlPath=path, validate=False) expected = Config() expected.purge = PurgeConfig() expected.purge.purgeDirs = [PurgeDir("/opt/backup/stage", 5), ] self.failUnlessEqual(expected, config) def testParse_028(self): """ Parse config document containing only a purge section, containing only required fields, validate=True. """ path = self.resources["cback.conf.13"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_029(self): """ Parse config document containing only a purge section, containing all required and optional fields, validate=False. """ path = self.resources["cback.conf.14"] config = Config(xmlPath=path, validate=False) expected = Config() expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_030(self): """ Parse config document containing only a purge section, containing all required and optional fields, validate=True. """ path = self.resources["cback.conf.14"] self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True) def testParse_031(self): """ Parse complete document containing all required and optional fields, "index" extensions, validate=False. 
""" path = self.resources["cback.conf.15"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 102)) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", 350)) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) 
expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "cdrw-74" expected.store.deviceType = "cdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 4 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_031a(self): """ Parse complete document containing all required and optional fields, "dependency" extensions, validate=False. 
""" path = self.resources["cback.conf.20"] config = Config(xmlPath=path, validate=False) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "dependency" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", index=None, dependencies=ActionDependencies())) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None, dependencies=ActionDependencies(beforeList=["a", "b", "c", ], afterList=["one", ]))) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, 
dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore")) collectDir = CollectDir(absolutePath="/opt") collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ] collectDir.relativeExcludePaths = [ "large", "backup", ] collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ] expected.collect.collectDirs.append(collectDir) expected.stage = StageConfig() expected.stage.targetDir = "/opt/backup/staging" expected.stage.localPeers = [] expected.stage.remotePeers = [] expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect")) expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup")) expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all")) expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B")) expected.store = StoreConfig() expected.store.sourceDir = "/opt/backup/staging" expected.store.mediaType = "dvd+rw" expected.store.deviceType = "dvdwriter" expected.store.devicePath = "/dev/cdrw" expected.store.deviceScsiId = None expected.store.driveSpeed = 1 expected.store.checkData = True expected.store.checkMedia = True expected.store.warnMidnite = True expected.store.noEject = True expected.store.blankBehavior = BlankBehavior() expected.store.blankBehavior.blankMode = "weekly" expected.store.blankBehavior.blankFactor = "1.3" expected.purge = PurgeConfig() expected.purge.purgeDirs = [] expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5)) expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0)) expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12)) self.failUnlessEqual(expected, config) def testParse_032(self): """ Parse complete document containing all required and optional fields, "index" extensions, 
validate=True. """ path = self.resources["cback.conf.15"] config = Config(xmlPath=path, validate=True) expected = Config() expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.") expected.extensions = ExtensionsConfig() expected.extensions.orderMode = "index" expected.extensions.actions = [] expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example", 102)) expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", 350)) expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh", "/usr/bin/cback", []) expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"), CommandOverride("svnlook", "/svnlook"), ] expected.options.hooks = [ PreActionHook("collect", "ls -l"), PreActionHook("subversion", "mailx -S \"hello\""), PostActionHook("stage", "df -k"), ] expected.options.managedActions = [ "collect", "purge", ] expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore") expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ] expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ] expected.collect.collectFiles = [] expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly")) expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2")) expected.collect.collectDirs = [] expected.collect.collectDirs.append(CollectDir(absolutePath="/root")) expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3)) expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True)) expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr")) 
      expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore"))
      collectDir = CollectDir(absolutePath="/opt")
      collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ]
      collectDir.relativeExcludePaths = [ "large", "backup", ]
      collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ]
      expected.collect.collectDirs.append(collectDir)
      expected.stage = StageConfig()
      expected.stage.targetDir = "/opt/backup/staging"
      expected.stage.localPeers = []
      expected.stage.remotePeers = []
      expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect"))
      expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup"))
      expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all"))
      expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B"))
      expected.store = StoreConfig()
      expected.store.sourceDir = "/opt/backup/staging"
      expected.store.mediaType = "cdrw-74"
      expected.store.deviceType = "cdwriter"
      expected.store.devicePath = "/dev/cdrw"
      expected.store.deviceScsiId = None
      expected.store.driveSpeed = 4
      expected.store.checkData = True
      expected.store.checkMedia = True
      expected.store.warnMidnite = True
      expected.store.noEject = True
      expected.store.blankBehavior = BlankBehavior()
      expected.store.blankBehavior.blankMode = "weekly"
      expected.store.blankBehavior.blankFactor = "1.3"
      expected.purge = PurgeConfig()
      expected.purge.purgeDirs = []
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5))
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0))
      expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12))
      self.failUnlessEqual(expected, config)

   def testParse_032a(self):
      """
      Parse complete document containing all required and optional fields,
      "dependency" extensions, validate=True.
      """
      path = self.resources["cback.conf.20"]
      config = Config(xmlPath=path, validate=True)
      expected = Config()
      expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.")
      expected.extensions = ExtensionsConfig()
      expected.extensions.orderMode = "dependency"
      expected.extensions.actions = []
      expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example",
                                                        index=None, dependencies=ActionDependencies()))
      expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None,
                                                        dependencies=ActionDependencies(beforeList=["a", "b", "c", ],
                                                                                        afterList=["one", ])))
      expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B",
                                       [], [], "/usr/bin/ssh", "/usr/bin/cback", [])
      expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"),
                                     CommandOverride("svnlook", "/svnlook"), ]
      expected.options.hooks = [ PreActionHook("collect", "ls -l"),
                                 PreActionHook("subversion", "mailx -S \"hello\""),
                                 PostActionHook("stage", "df -k"), ]
      expected.options.managedActions = [ "collect", "purge", ]
      expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore")
      expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ]
      expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ]
      expected.collect.collectFiles = []
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile"))
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly"))
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2"))
      expected.collect.collectDirs = []
      expected.collect.collectDirs.append(CollectDir(absolutePath="/root"))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr"))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore"))
      collectDir = CollectDir(absolutePath="/opt")
      collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ]
      collectDir.relativeExcludePaths = [ "large", "backup", ]
      collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ]
      expected.collect.collectDirs.append(collectDir)
      expected.stage = StageConfig()
      expected.stage.targetDir = "/opt/backup/staging"
      expected.stage.localPeers = []
      expected.stage.remotePeers = []
      expected.stage.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect"))
      expected.stage.localPeers.append(LocalPeer("machine1-2", "/var/backup"))
      expected.stage.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all"))
      expected.stage.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B"))
      expected.store = StoreConfig()
      expected.store.sourceDir = "/opt/backup/staging"
      expected.store.mediaType = "dvd+rw"
      expected.store.deviceType = "dvdwriter"
      expected.store.devicePath = "/dev/cdrw"
      expected.store.deviceScsiId = None
      expected.store.driveSpeed = 1
      expected.store.checkData = True
      expected.store.checkMedia = True
      expected.store.warnMidnite = True
      expected.store.noEject = True
      expected.store.blankBehavior = BlankBehavior()
      expected.store.blankBehavior.blankMode = "weekly"
      expected.store.blankBehavior.blankFactor = "1.3"
      expected.purge = PurgeConfig()
      expected.purge.purgeDirs = []
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5))
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0))
      expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12))
      self.failUnlessEqual(expected, config)

   def testParse_033(self):
      """
      Parse a sample from Cedar Backup v1.x, which must still be valid,
      validate=False.
      """
      path = self.resources["cback.conf.1"]
      config = Config(xmlPath=path, validate=False)
      expected = Config()
      expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration")
      expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B")
      expected.collect = CollectConfig()
      expected.collect.targetDir = "/opt/backup/collect"
      expected.collect.archiveMode = "targz"
      expected.collect.ignoreFile = ".cbignore"
      expected.collect.collectDirs = []
      expected.collect.collectDirs.append(CollectDir("/etc", collectMode="daily"))
      expected.collect.collectDirs.append(CollectDir("/var/log", collectMode="incr"))
      collectDir = CollectDir("/opt", collectMode="weekly")
      collectDir.absoluteExcludePaths = ["/opt/large", "/opt/backup", "/opt/tmp", ]
      expected.collect.collectDirs.append(collectDir)
      expected.stage = StageConfig()
      expected.stage.targetDir = "/opt/backup/staging"
      expected.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ]
      expected.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ]
      expected.store = StoreConfig()
      expected.store.sourceDir = "/opt/backup/staging"
      expected.store.devicePath = "/dev/cdrw"
      expected.store.deviceScsiId = "0,0,0"
      expected.store.driveSpeed = 4
      expected.store.mediaType = "cdrw-74"
      expected.store.checkData = True
      expected.store.checkMedia = False
      expected.store.warnMidnite = False
      expected.store.noEject = False
      expected.purge = PurgeConfig()
      expected.purge.purgeDirs = []
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5))
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0))
      self.failUnlessEqual(expected, config)

   def testParse_034(self):
      """
      Parse a sample from Cedar Backup v1.x, which must still be valid,
      validate=True.
      """
      path = self.resources["cback.conf.1"]
      config = Config(xmlPath=path, validate=True)
      expected = Config()
      expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration")
      expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup", "/usr/bin/scp -1 -B")
      expected.collect = CollectConfig()
      expected.collect.targetDir = "/opt/backup/collect"
      expected.collect.archiveMode = "targz"
      expected.collect.ignoreFile = ".cbignore"
      expected.collect.collectDirs = []
      expected.collect.collectDirs.append(CollectDir("/etc", collectMode="daily"))
      expected.collect.collectDirs.append(CollectDir("/var/log", collectMode="incr"))
      collectDir = CollectDir("/opt", collectMode="weekly")
      collectDir.absoluteExcludePaths = ["/opt/large", "/opt/backup", "/opt/tmp", ]
      expected.collect.collectDirs.append(collectDir)
      expected.stage = StageConfig()
      expected.stage.targetDir = "/opt/backup/staging"
      expected.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ]
      expected.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ]
      expected.store = StoreConfig()
      expected.store.sourceDir = "/opt/backup/staging"
      expected.store.devicePath = "/dev/cdrw"
      expected.store.deviceScsiId = "0,0,0"
      expected.store.driveSpeed = 4
      expected.store.mediaType = "cdrw-74"
      expected.store.checkData = True
      expected.store.checkMedia = False
      expected.store.warnMidnite = False
      expected.store.noEject = False
      expected.purge = PurgeConfig()
      expected.purge.purgeDirs = []
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5))
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0))
      self.failUnlessEqual(expected, config)

   def testParse_035(self):
      """
      Document containing all required fields, peers in peer configuration
      and not staging, validate=False.
      """
      path = self.resources["cback.conf.21"]
      config = Config(xmlPath=path, validate=False)
      expected = Config()
      expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.")
      expected.extensions = ExtensionsConfig()
      expected.extensions.orderMode = "dependency"
      expected.extensions.actions = []
      expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example",
                                                        index=None, dependencies=ActionDependencies()))
      expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None,
                                                        dependencies=ActionDependencies(beforeList=["a", "b", "c", ],
                                                                                        afterList=["one", ])))
      expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B",
                                       [], [], "/usr/bin/ssh", "/usr/bin/cback", [])
      expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"),
                                     CommandOverride("svnlook", "/svnlook"), ]
      expected.options.hooks = [ PreActionHook("collect", "ls -l"),
                                 PreActionHook("subversion", "mailx -S \"hello\""),
                                 PostActionHook("stage", "df -k"), ]
      expected.options.managedActions = [ "collect", "purge", ]
      expected.peers = PeersConfig()
      expected.peers.localPeers = []
      expected.peers.remotePeers = []
      expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect"))
      expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup"))
      expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all"))
      expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B"))
      expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B",
                                                   rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None))
      expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ]))
      expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore")
      expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ]
      expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ]
      expected.collect.collectFiles = []
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile"))
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly"))
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2"))
      expected.collect.collectDirs = []
      expected.collect.collectDirs.append(CollectDir(absolutePath="/root"))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr"))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore"))
      collectDir = CollectDir(absolutePath="/opt")
      collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ]
      collectDir.relativeExcludePaths = [ "large", "backup", ]
      collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ]
      expected.collect.collectDirs.append(collectDir)
      expected.stage = StageConfig()
      expected.stage.targetDir = "/opt/backup/staging"
      expected.stage.localPeers = None
      expected.stage.remotePeers = None
      expected.store = StoreConfig()
      expected.store.sourceDir = "/opt/backup/staging"
      expected.store.mediaType = "dvd+rw"
      expected.store.deviceType = "dvdwriter"
      expected.store.devicePath = "/dev/cdrw"
      expected.store.deviceScsiId = None
      expected.store.driveSpeed = 1
      expected.store.checkData = True
      expected.store.checkMedia = True
      expected.store.warnMidnite = True
      expected.store.noEject = True
      expected.store.blankBehavior = BlankBehavior()
      expected.store.blankBehavior.blankMode = "weekly"
      expected.store.blankBehavior.blankFactor = "1.3"
      expected.purge = PurgeConfig()
      expected.purge.purgeDirs = []
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5))
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0))
      expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12))
      self.failUnlessEqual(expected, config)

   def testParse_036(self):
      """
      Document containing all required fields, peers in peer configuration
      and not staging, validate=True.
      """
      path = self.resources["cback.conf.21"]
      config = Config(xmlPath=path, validate=True)
      expected = Config()
      expected.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration", "Generated by hand.")
      expected.extensions = ExtensionsConfig()
      expected.extensions.orderMode = "dependency"
      expected.extensions.actions = []
      expected.extensions.actions.append(ExtendedAction("example", "something.whatever", "example",
                                                        index=None, dependencies=ActionDependencies()))
      expected.extensions.actions.append(ExtendedAction("bogus", "module", "something", index=None,
                                                        dependencies=ActionDependencies(beforeList=["a", "b", "c", ],
                                                                                        afterList=["one", ])))
      expected.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "group", "/usr/bin/scp -1 -B",
                                       [], [], "/usr/bin/ssh", "/usr/bin/cback", [])
      expected.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"),
                                     CommandOverride("svnlook", "/svnlook"), ]
      expected.options.hooks = [ PreActionHook("collect", "ls -l"),
                                 PreActionHook("subversion", "mailx -S \"hello\""),
                                 PostActionHook("stage", "df -k"), ]
      expected.options.managedActions = [ "collect", "purge", ]
      expected.peers = PeersConfig()
      expected.peers.localPeers = []
      expected.peers.remotePeers = []
      expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect"))
      expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup"))
      expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all"))
      expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B"))
      expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B",
                                                   rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None))
      expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ]))
      expected.collect = CollectConfig("/opt/backup/collect", "daily", "targz", ".cbignore")
      expected.collect.absoluteExcludePaths = ["/etc/cback.conf", "/etc/X11", ]
      expected.collect.excludePatterns = [".*tmp.*", r".*\.netscape\/.*", ]
      expected.collect.collectFiles = []
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.profile"))
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.kshrc", collectMode="weekly"))
      expected.collect.collectFiles.append(CollectFile(absolutePath="/home/root/.aliases", collectMode="daily", archiveMode="tarbz2"))
      expected.collect.collectDirs = []
      expected.collect.collectDirs.append(CollectDir(absolutePath="/root"))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/tmp", linkDepth=3))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/ken", linkDepth=1, dereference=True))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/var/log", collectMode="incr"))
      expected.collect.collectDirs.append(CollectDir(absolutePath="/etc", collectMode="incr", archiveMode="tar", ignoreFile=".ignore"))
      collectDir = CollectDir(absolutePath="/opt")
      collectDir.absoluteExcludePaths = [ "/opt/share", "/opt/tmp", ]
      collectDir.relativeExcludePaths = [ "large", "backup", ]
      collectDir.excludePatterns = [ r".*\.doc\.*", r".*\.xls\.*", ]
      expected.collect.collectDirs.append(collectDir)
      expected.stage = StageConfig()
      expected.stage.targetDir = "/opt/backup/staging"
      expected.stage.localPeers = None
      expected.stage.remotePeers = None
      expected.store = StoreConfig()
      expected.store.sourceDir = "/opt/backup/staging"
      expected.store.mediaType = "dvd+rw"
      expected.store.deviceType = "dvdwriter"
      expected.store.devicePath = "/dev/cdrw"
      expected.store.deviceScsiId = None
      expected.store.driveSpeed = 1
      expected.store.checkData = True
      expected.store.checkMedia = True
      expected.store.warnMidnite = True
      expected.store.noEject = True
      expected.store.blankBehavior = BlankBehavior()
      expected.store.blankBehavior.blankMode = "weekly"
      expected.store.blankBehavior.blankFactor = "1.3"
      expected.purge = PurgeConfig()
      expected.purge.purgeDirs = []
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/stage", 5))
      expected.purge.purgeDirs.append(PurgeDir("/opt/backup/collect", 0))
      expected.purge.purgeDirs.append(PurgeDir("/home/backup/tmp", 12))
      self.failUnlessEqual(expected, config)

   def testParse_037(self):
      """
      Parse config document containing only a peers section, containing
      only required fields, validate=False.
      """
      path = self.resources["cback.conf.22"]
      config = Config(xmlPath=path, validate=False)
      expected = Config()
      expected.peers = PeersConfig()
      expected.peers.localPeers = None
      expected.peers.remotePeers = [ RemotePeer("machine2", "/opt/backup/collect"), ]
      self.failUnlessEqual(expected, config)

   def testParse_038(self):
      """
      Parse config document containing only a peers section, containing
      only required fields, validate=True.
      """
      path = self.resources["cback.conf.9"]
      self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True)

   def testParse_039(self):
      """
      Parse config document containing only a peers section, containing
      all required and optional fields, validate=False.
      """
      path = self.resources["cback.conf.23"]
      config = Config(xmlPath=path, validate=False)
      expected = Config()
      expected.peers = PeersConfig()
      expected.peers.localPeers = []
      expected.peers.remotePeers = []
      expected.peers.localPeers.append(LocalPeer("machine1-1", "/opt/backup/collect"))
      expected.peers.localPeers.append(LocalPeer("machine1-2", "/var/backup"))
      expected.peers.remotePeers.append(RemotePeer("machine2", "/backup/collect", ignoreFailureMode="all"))
      expected.peers.remotePeers.append(RemotePeer("machine3", "/home/whatever/tmp", remoteUser="someone", rcpCommand="scp -B"))
      expected.peers.remotePeers.append(RemotePeer("machine4", "/aa", remoteUser="someone", rcpCommand="scp -B",
                                                   rshCommand="ssh", cbackCommand="cback", managed=True, managedActions=None))
      expected.peers.remotePeers.append(RemotePeer("machine5", "/bb", managed=False, managedActions=["collect", "purge", ]))
      self.failUnlessEqual(expected, config)

   def testParse_040(self):
      """
      Parse config document containing only a peers section, containing
      all required and optional fields, validate=True.
      """
      path = self.resources["cback.conf.23"]
      self.failUnlessRaises(ValueError, Config, xmlPath=path, validate=True)


   #########################
   # Test the extract logic
   #########################

   def testExtractXml_001(self):
      """
      Extract empty config document, validate=True.
      """
      before = Config()
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_002(self):
      """
      Extract empty config document, validate=False.
      """
      before = Config()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_003(self):
      """
      Extract document containing only a valid reference section,
      validate=True.
      """
      before = Config()
      before.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration")
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_004(self):
      """
      Extract document containing only a valid reference section,
      validate=False.
      """
      before = Config()
      before.reference = ReferenceConfig("$Author: pronovic $", "1.3", "Sample configuration")
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_005(self):
      """
      Extract document containing only a valid extensions section, empty
      list, orderMode=None, validate=True.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      before.extensions.orderMode = None
      before.extensions.actions = []
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_006(self):
      """
      Extract document containing only a valid extensions section,
      non-empty list and orderMode="index", validate=True.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      before.extensions.orderMode = "index"
      before.extensions.actions = []
      before.extensions.actions.append(ExtendedAction("name", "module", "function", 1))
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_006a(self):
      """
      Extract document containing only a valid extensions section,
      non-empty list and orderMode="dependency", validate=True.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      before.extensions.orderMode = "dependency"
      before.extensions.actions = []
      before.extensions.actions.append(ExtendedAction("name", "module", "function",
                                                      dependencies=ActionDependencies(beforeList=["b", ], afterList=["a", ])))
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_007(self):
      """
      Extract document containing only a valid extensions section, empty
      list, orderMode=None, validate=False.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_008(self):
      """
      Extract document containing only a valid extensions section,
      orderMode="index", validate=False.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      before.extensions.orderMode = "index"
      before.extensions.actions = []
      before.extensions.actions.append(ExtendedAction("name", "module", "function", 1))
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_009(self):
      """
      Extract document containing only an invalid extensions section,
      validate=True.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      before.extensions.actions = []
      before.extensions.actions.append(ExtendedAction("name", "module", None, None))
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_010(self):
      """
      Extract document containing only an invalid extensions section,
      validate=False.
      """
      before = Config()
      before.extensions = ExtensionsConfig()
      before.extensions.actions = []
      before.extensions.actions.append(ExtendedAction("name", "module", None, None))
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_011(self):
      """
      Extract document containing only a valid options section,
      validate=True.
      """
      before = Config()
      before.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup",
                                     "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh")
      before.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"),
                                   CommandOverride("svnlook", "/svnlook"), ]
      before.options.hooks = [ PostActionHook("collect", "ls -l"), ]
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_012(self):
      """
      Extract document containing only a valid options section,
      validate=False.
      """
      before = Config()
      before.options = OptionsConfig("tuesday", "/opt/backup/tmp", "backup", "backup",
                                     "/usr/bin/scp -1 -B", [], [], "/usr/bin/ssh")
      before.options.overrides = [ CommandOverride("mkisofs", "/usr/bin/mkisofs"),
                                   CommandOverride("svnlook", "/svnlook"), ]
      before.options.hooks = [ PostActionHook("collect", "ls -l"), ]
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_013(self):
      """
      Extract document containing only an invalid options section,
      validate=True.
      """
      before = Config()
      before.options = OptionsConfig()
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_014(self):
      """
      Extract document containing only an invalid options section,
      validate=False.
      """
      before = Config()
      before.options = OptionsConfig()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_015(self):
      """
      Extract document containing only a valid collect section, empty
      lists, validate=True.  (Test a directory.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ]
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_015a(self):
      """
      Extract document containing only a valid collect section, empty
      lists, validate=True.  (Test a file.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ]
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_016(self):
      """
      Extract document containing only a valid collect section, empty
      lists, validate=False.  (Test a directory.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ]
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_016a(self):
      """
      Extract document containing only a valid collect section, empty
      lists, validate=False.  (Test a file.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ]
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_017(self):
      """
      Extract document containing only a valid collect section, non-empty
      lists, validate=True.  (Test a directory.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ]
      before.collect.excludePatterns = [ "pattern", ]
      before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ]
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_017a(self):
      """
      Extract document containing only a valid collect section, non-empty
      lists, validate=True.  (Test a file.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ]
      before.collect.excludePatterns = [ "pattern", ]
      before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ]
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_018(self):
      """
      Extract document containing only a valid collect section, non-empty
      lists, validate=False.  (Test a directory.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ]
      before.collect.excludePatterns = [ "pattern", ]
      before.collect.collectDirs = [CollectDir("/etc", collectMode="daily"), ]
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_018a(self):
      """
      Extract document containing only a valid collect section, non-empty
      lists, validate=False.  (Test a file.)
      """
      before = Config()
      before.collect = CollectConfig()
      before.collect.targetDir = "/opt/backup/collect"
      before.collect.archiveMode = "targz"
      before.collect.ignoreFile = ".cbignore"
      before.collect.absoluteExcludePaths = [ "/one", "/two", "/three", ]
      before.collect.excludePatterns = [ "pattern", ]
      before.collect.collectFiles = [CollectFile("/etc", collectMode="daily"), ]
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_019(self):
      """
      Extract document containing only an invalid collect section,
      validate=True.
      """
      before = Config()
      before.collect = CollectConfig()
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_020(self):
      """
      Extract document containing only an invalid collect section,
      validate=False.
      """
      before = Config()
      before.collect = CollectConfig()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_021(self):
      """
      Extract document containing only a valid stage section, one empty
      list, validate=True.
      """
      before = Config()
      before.stage = StageConfig()
      before.stage.targetDir = "/opt/backup/staging"
      before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ]
      before.stage.remotePeers = None
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_022(self):
      """
      Extract document containing only a valid stage section, empty lists,
      validate=False.
      """
      before = Config()
      before.stage = StageConfig()
      before.stage.targetDir = "/opt/backup/staging"
      before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ]
      before.stage.remotePeers = None
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_023(self):
      """
      Extract document containing only a valid stage section, non-empty
      lists, validate=True.
      """
      before = Config()
      before.stage = StageConfig()
      before.stage.targetDir = "/opt/backup/staging"
      before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ]
      before.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ]
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_024(self):
      """
      Extract document containing only a valid stage section, non-empty
      lists, validate=False.
      """
      before = Config()
      before.stage = StageConfig()
      before.stage.targetDir = "/opt/backup/staging"
      before.stage.localPeers = [LocalPeer("machine1", "/opt/backup/collect"), ]
      before.stage.remotePeers = [RemotePeer("machine2", "/opt/backup/collect", remoteUser="backup"), ]
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_025(self):
      """
      Extract document containing only an invalid stage section,
      validate=True.
      """
      before = Config()
      before.stage = StageConfig()
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_026(self):
      """
      Extract document containing only an invalid stage section,
      validate=False.
      """
      before = Config()
      before.stage = StageConfig()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_027(self):
      """
      Extract document containing only a valid store section,
      validate=True.
      """
      before = Config()
      before.store = StoreConfig()
      before.store.sourceDir = "/opt/backup/staging"
      before.store.devicePath = "/dev/cdrw"
      before.store.deviceScsiId = "0,0,0"
      before.store.driveSpeed = 4
      before.store.mediaType = "cdrw-74"
      before.store.checkData = True
      before.store.checkMedia = True
      before.store.warnMidnite = True
      before.store.noEject = True
      before.store.refreshMediaDelay = 12
      before.store.ejectDelay = 13
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_028(self):
      """
      Extract document containing only a valid store section,
      validate=False.
      """
      before = Config()
      before.store = StoreConfig()
      before.store.sourceDir = "/opt/backup/staging"
      before.store.devicePath = "/dev/cdrw"
      before.store.deviceScsiId = "0,0,0"
      before.store.driveSpeed = 4
      before.store.mediaType = "cdrw-74"
      before.store.checkData = True
      before.store.checkMedia = True
      before.store.warnMidnite = True
      before.store.noEject = True
      before.store.refreshMediaDelay = 12
      before.store.ejectDelay = 13
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_029(self):
      """
      Extract document containing only an invalid store section,
      validate=True.
      """
      before = Config()
      before.store = StoreConfig()
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_030(self):
      """
      Extract document containing only an invalid store section,
      validate=False.
      """
      before = Config()
      before.store = StoreConfig()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_031(self):
      """
      Extract document containing only a valid purge section, empty list,
      validate=True.
      """
      before = Config()
      before.purge = PurgeConfig()
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_032(self):
      """
      Extract document containing only a valid purge section, empty list,
      validate=False.
      """
      before = Config()
      before.purge = PurgeConfig()
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_033(self):
      """
      Extract document containing only a valid purge section, non-empty
      list, validate=True.
      """
      before = Config()
      before.purge = PurgeConfig()
      before.purge.purgeDirs = []
      before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever", retainDays=3))
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_034(self):
      """
      Extract document containing only a valid purge section, non-empty
      list, validate=False.
      """
      before = Config()
      before.purge = PurgeConfig()
      before.purge.purgeDirs = []
      before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever", retainDays=3))
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_035(self):
      """
      Extract document containing only an invalid purge section,
      validate=True.
      """
      before = Config()
      before.purge = PurgeConfig()
      before.purge.purgeDirs = []
      before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever"))
      self.failUnlessRaises(ValueError, before.extractXml, validate=True)

   def testExtractXml_036(self):
      """
      Extract document containing only an invalid purge section,
      validate=False.
      """
      before = Config()
      before.purge = PurgeConfig()
      before.purge.purgeDirs = []
      before.purge.purgeDirs.append(PurgeDir(absolutePath="/whatever"))
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_037(self):
      """
      Extract complete document containing all required and optional
      fields, "index" extensions, validate=False.
      """
      path = self.resources["cback.conf.15"]
      before = Config(xmlPath=path, validate=False)
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_037a(self):
      """
      Extract complete document containing all required and optional
      fields, "dependency" extensions, validate=False.
      """
      path = self.resources["cback.conf.20"]
      before = Config(xmlPath=path, validate=False)
      beforeXml = before.extractXml(validate=False)
      after = Config(xmlData=beforeXml, validate=False)
      self.failUnlessEqual(before, after)

   def testExtractXml_038(self):
      """
      Extract complete document containing all required and optional
      fields, "index" extensions, validate=True.
      """
      path = self.resources["cback.conf.15"]
      before = Config(xmlPath=path, validate=True)
      beforeXml = before.extractXml(validate=True)
      after = Config(xmlData=beforeXml, validate=True)
      self.failUnlessEqual(before, after)

   def testExtractXml_038a(self):
      """
      Extract complete document containing all required and optional
      fields, "dependency" extensions, validate=True.
      """
      path = self.resources["cback.conf.20"]
      before = Config(xmlPath=path, validate=True)
      beforeXml = before.extractXml(validate=True)
      after = Config(xmlData=beforeXml, validate=True)
      self.failUnlessEqual(before, after)

   def testExtractXml_039(self):
      """
      Extract a sample from Cedar Backup v1.x, which must still be valid,
      validate=False.
""" path = self.resources["cback.conf.1"] before = Config(xmlPath=path, validate=False) beforeXml = before.extractXml(validate=False) after = Config(xmlData=beforeXml, validate=False) self.failUnlessEqual(before, after) def testExtractXml_040(self): """ Extract a sample from Cedar Backup v1.x, which must still be valid, validate=True. """ path = self.resources["cback.conf.1"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.failUnlessEqual(before, after) def testExtractXml_041(self): """ Extract complete document containing all required and optional fields, using a peers configuration section, validate=True. """ path = self.resources["cback.conf.21"] before = Config(xmlPath=path, validate=True) beforeXml = before.extractXml(validate=True) after = Config(xmlData=beforeXml, validate=True) self.failUnlessEqual(before, after) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestByteQuantity, 'test'), unittest.makeSuite(TestActionDependencies, 'test'), unittest.makeSuite(TestActionHook, 'test'), unittest.makeSuite(TestPreActionHook, 'test'), unittest.makeSuite(TestPostActionHook, 'test'), unittest.makeSuite(TestBlankBehavior, 'test'), unittest.makeSuite(TestExtendedAction, 'test'), unittest.makeSuite(TestCommandOverride, 'test'), unittest.makeSuite(TestCollectFile, 'test'), unittest.makeSuite(TestCollectDir, 'test'), unittest.makeSuite(TestPurgeDir, 'test'), unittest.makeSuite(TestLocalPeer, 'test'), unittest.makeSuite(TestRemotePeer, 'test'), unittest.makeSuite(TestReferenceConfig, 'test'), unittest.makeSuite(TestExtensionsConfig, 'test'), unittest.makeSuite(TestOptionsConfig, 'test'), 
unittest.makeSuite(TestPeersConfig, 'test'),
                             unittest.makeSuite(TestCollectConfig, 'test'),
                             unittest.makeSuite(TestStageConfig, 'test'),
                             unittest.makeSuite(TestStoreConfig, 'test'),
                             unittest.makeSuite(TestPurgeConfig, 'test'),
                             unittest.makeSuite(TestConfig, 'test'),
                           ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.26.5/testcase/dvdwritertests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests DVD writer functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/writers/dvdwriter.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in dvdwriter.py.

   Unfortunately, it's rather difficult to test this code in an automated
   fashion, even if you have access to a physical DVD writer drive.  It's
   even more difficult to test it if you are running on some build daemon
   (think of a Debian autobuilder) which can't be expected to have any
   hardware or any media that you could write to.  Because of this, there
   aren't any tests below that actually cause DVD media to be written to.

   As a compromise, complicated parts of the implementation are written in
   terms of private static methods with well-defined behaviors.  Normally, I
   prefer to test only the public interface to a class, but in this case,
   testing these few private methods will help give us some reasonable
   confidence in the code, even if we can't write a physical disc or can't
   run all of the tests.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.
This environment won't necessarily be available on every build system out
   there (for instance, on a Debian autobuilder).  Because of this, the
   default behavior is to run a "reduced feature set" test suite that has no
   surprising system, kernel or network requirements.  There are no special
   dependencies for these tests.

@author Kenneth J. Pronovici
"""


########################################################################
# Import modules and do runtime validations
########################################################################

import os
import unittest
import tempfile

from CedarBackup2.writers.dvdwriter import MediaDefinition, MediaCapacity, DvdWriter
from CedarBackup2.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW
from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar


#######################################################################
# Module-wide configuration and constants
#######################################################################

GB44 = (4.4*1024.0*1024.0*1024.0)   # 4.4 GB
GB44SECTORS = GB44/2048.0           # 4.4 GB in 2048-byte sectors

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree9.tar.gz", ]


#######################################################################
# Test Case Classes
#######################################################################

############################
# TestMediaDefinition class
############################

class TestMediaDefinition(unittest.TestCase):

   """Tests for the MediaDefinition class."""

   def testConstructor_001(self):
      """
      Test the constructor with an invalid media type.
      """
      self.failUnlessRaises(ValueError, MediaDefinition, 100)

   def testConstructor_002(self):
      """
      Test the constructor with the C{MEDIA_DVDPLUSR} media type.
""" media = MediaDefinition(MEDIA_DVDPLUSR) self.failUnlessEqual(MEDIA_DVDPLUSR, media.mediaType) self.failUnlessEqual(False, media.rewritable) self.failUnlessEqual(GB44SECTORS, media.capacity) def testConstructor_003(self): """ Test the constructor with the C{MEDIA_DVDPLUSRW} media type. """ media = MediaDefinition(MEDIA_DVDPLUSRW) self.failUnlessEqual(MEDIA_DVDPLUSRW, media.mediaType) self.failUnlessEqual(True, media.rewritable) self.failUnlessEqual(GB44SECTORS, media.capacity) ########################## # TestMediaCapacity class ########################## class TestMediaCapacity(unittest.TestCase): """Tests for the MediaCapacity class.""" def testConstructor_001(self): """ Test the constructor with valid, zero values """ capacity = MediaCapacity(0.0, 0.0) self.failUnlessEqual(0.0, capacity.bytesUsed) self.failUnlessEqual(0.0, capacity.bytesAvailable) def testConstructor_002(self): """ Test the constructor with valid, non-zero values. """ capacity = MediaCapacity(1.1, 2.2) self.failUnlessEqual(1.1, capacity.bytesUsed) self.failUnlessEqual(2.2, capacity.bytesAvailable) def testConstructor_003(self): """ Test the constructor with bytesUsed that is not a float. """ self.failUnlessRaises(ValueError, MediaCapacity, 0.0, "ken") def testConstructor_004(self): """ Test the constructor with bytesAvailable that is not a float. 
""" self.failUnlessRaises(ValueError, MediaCapacity, "a", 0.0) ###################### # TestDvdWriter class ###################### class TestDvdWriter(unittest.TestCase): """Tests for the DvdWriter class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): removedir(self.tmpdir) ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def getFileContents(self, resource): """Gets contents of named resource as a list of strings.""" path = self.resources[resource] return open(path).readlines() ################### # Test constructor ################### def testConstructor_001(self): """ Test with an empty device. """ self.failUnlessRaises(ValueError, DvdWriter, None) def testConstructor_002(self): """ Test with a device only. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_003(self): """ Test with a device and valid SCSI id. 
""" dvdwriter = DvdWriter("/dev/dvd", scsiId="ATA:1,0,0", unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("ATA:1,0,0", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_004(self): """ Test with a device and valid drive speed. """ dvdwriter = DvdWriter("/dev/dvd", driveSpeed=3, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(3, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_005(self): """ Test with a device with media type MEDIA_DVDPLUSR. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_006(self): """ Test with a device with media type MEDIA_DVDPLUSRW. 
""" dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual(None, dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_007(self): """ Test with a device and invalid SCSI id. """ dvdwriter = DvdWriter("/dev/dvd", scsiId="00000000", unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("00000000", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(None, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSRW, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_008(self): """ Test with a device and invalid drive speed. """ self.failUnlessRaises(ValueError, DvdWriter, "/dev/dvd", driveSpeed="KEN", unittest=True) def testConstructor_009(self): """ Test with a device and invalid media type. """ self.failUnlessRaises(ValueError, DvdWriter, "/dev/dvd", mediaType=999, unittest=True) def testConstructor_010(self): """ Test with all valid parameters, but no device, unittest=True. """ self.failUnlessRaises(ValueError, DvdWriter, None, "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=True) def testConstructor_011(self): """ Test with all valid parameters, but no device, unittest=False. """ self.failUnlessRaises(ValueError, DvdWriter, None, "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_012(self): """ Test with all valid parameters, and an invalid device (not absolute path), unittest=True. 
""" self.failUnlessRaises(ValueError, DvdWriter, "dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=True) def testConstructor_013(self): """ Test with all valid parameters, and an invalid device (not absolute path), unittest=False. """ self.failUnlessRaises(ValueError, DvdWriter, "dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_014(self): """ Test with all valid parameters, and an invalid device (path does not exist), unittest=False. """ self.failUnlessRaises(ValueError, DvdWriter, "/dev/bogus", "ATA:1,0,0", 1, MEDIA_DVDPLUSRW, unittest=False) def testConstructor_015(self): """ Test with all valid parameters. """ dvdwriter = DvdWriter("/dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSR, noEject=False, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("ATA:1,0,0", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(1, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(True, dvdwriter.deviceHasTray) self.failUnlessEqual(True, dvdwriter.deviceCanEject) def testConstructor_016(self): """ Test with all valid parameters. """ dvdwriter = DvdWriter("/dev/dvd", "ATA:1,0,0", 1, MEDIA_DVDPLUSR, noEject=True, unittest=True) self.failUnlessEqual("/dev/dvd", dvdwriter.device) self.failUnlessEqual("ATA:1,0,0", dvdwriter.scsiId) self.failUnlessEqual("/dev/dvd", dvdwriter.hardwareId) self.failUnlessEqual(1, dvdwriter.driveSpeed) self.failUnlessEqual(MEDIA_DVDPLUSR, dvdwriter.media.mediaType) self.failUnlessEqual(False, dvdwriter.deviceHasTray) self.failUnlessEqual(False, dvdwriter.deviceCanEject) ###################### # Test isRewritable() ###################### def testIsRewritable_001(self): """ Test with MEDIA_DVDPLUSR. """ dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSR, unittest=True) self.failUnlessEqual(False, dvdwriter.isRewritable()) def testIsRewritable_002(self): """ Test with MEDIA_DVDPLUSRW. 
""" dvdwriter = DvdWriter("/dev/dvd", mediaType=MEDIA_DVDPLUSRW, unittest=True) self.failUnlessEqual(True, dvdwriter.isRewritable()) ######################### # Test initializeImage() ######################### def testInitializeImage_001(self): """ Test with newDisc=False, tmpdir=None. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.failUnlessEqual(False, dvdwriter._image.newDisc) self.failUnlessEqual(None, dvdwriter._image.tmpdir) self.failUnlessEqual({}, dvdwriter._image.entries) def testInitializeImage_002(self): """ Test with newDisc=True, tmpdir not None. """ dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(True, "/path/to/somewhere") self.failUnlessEqual(True, dvdwriter._image.newDisc) self.failUnlessEqual("/path/to/somewhere", dvdwriter._image.tmpdir) self.failUnlessEqual({}, dvdwriter._image.entries) ####################### # Test addImageEntry() ####################### def testAddImageEntry_001(self): """ Add a valid path with no graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_002(self): """ Add a valid path with a graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_003(self): """ Add a non-existent path with no graft point, before calling initializeImage(). 
""" self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_004(self): """ Add a non-existent path with a graft point, before calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_005(self): """ Add a valid path with no graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, None) self.failUnlessEqual({ path:None, }, dvdwriter._image.entries) def testAddImageEntry_006(self): """ Add a valid path with a graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) def testAddImageEntry_007(self): """ Add a non-existent path with no graft point, after calling initializeImage(). """ self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, None) def testAddImageEntry_008(self): """ Add a non-existent path with a graft point, after calling initializeImage(). 
""" self.extractTar("tree9") path = self.buildPath([ "tree9", "bogus", ]) self.failIf(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) self.failUnlessRaises(ValueError, dvdwriter.addImageEntry, path, "ken") def testAddImageEntry_009(self): """ Add the same path several times. """ self.extractTar("tree9") path = self.buildPath([ "tree9", "dir002", ]) self.failUnless(os.path.exists(path)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path, "ken") self.failUnlessEqual({ path:"ken", }, dvdwriter._image.entries) def testAddImageEntry_010(self): """ Add several paths. """ self.extractTar("tree9") path1 = self.buildPath([ "tree9", "dir001", ]) path2 = self.buildPath([ "tree9", "dir002", ]) path3 = self.buildPath([ "tree9", "dir001", "dir001", ]) self.failUnless(os.path.exists(path1)) self.failUnless(os.path.exists(path2)) self.failUnless(os.path.exists(path3)) dvdwriter = DvdWriter("/dev/dvd", unittest=True) dvdwriter.initializeImage(False, None) dvdwriter.addImageEntry(path1, None) self.failUnlessEqual({ path1:None, }, dvdwriter._image.entries) dvdwriter.addImageEntry(path2, "ken") self.failUnlessEqual({ path1:None, path2:"ken", }, dvdwriter._image.entries) dvdwriter.addImageEntry(path3, "another") self.failUnlessEqual({ path1:None, path2:"ken", path3:"another", }, dvdwriter._image.entries) ############################ # Test _searchForOverburn() ############################ def testSearchForOverburn_001(self): """ Test with output=None. 
""" output = None DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_002(self): """ Test with output=[]. """ output = [] DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_003(self): """ Test with one-line output, not containing the pattern. """ output = [ "This line does not contain the pattern", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-( /dev/cdrom: blocks are free, to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-) /dev/cdrom: 89048 blocks are free, 2033746 to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown output = [ ":-( /dev/cdrom: 894048blocks are free, 2033746to be written!", ] DvdWriter._searchForOverburn(output) # no exception should be thrown def testSearchForOverburn_004(self): """ Test with one-line output(s), containing the pattern. """ output = [ ":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: XXXX blocks are free, XXXX to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: 1 blocks are free, 1 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/cdrom: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/dvd: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( /dev/writer: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) output = [ ":-( bogus: 0 blocks are free, 0 to be written!", ] self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_005(self): """ Test 
with multi-line output, not containing the pattern.
      """
      output = []
      output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'")
      output.append("Rock Ridge signatures found")
      output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)")
      output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)")
      output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)")
      output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)")
      output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)")
      output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)")
      DvdWriter._searchForOverburn(output)  # no exception should be thrown

   def testSearchForOverburn_006(self):
      """
      Test with multi-line output, containing the pattern at the top.
""" output = [] output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_007(self): """ Test with multi-line output, containing the pattern at the bottom. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_008(self): """ Test with multi-line output, containing the pattern in the middle. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) def testSearchForOverburn_009(self): """ Test with multi-line output, containing the pattern several times. 
""" output = [] output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Rock Ridge signatures found") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") output.append(":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!") self.failUnlessRaises(IOError, DvdWriter._searchForOverburn, output) ########################### 
# Test _parseSectorsUsed() ########################### def testParseSectorsUsed_001(self): """ Test with output=None. """ output = None sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(0.0, sectorsUsed) def testParseSectorsUsed_002(self): """ Test with output=[]. """ output = [] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(0.0, sectorsUsed) def testParseSectorsUsed_003(self): """ Test with one-line output, not containing the pattern. """ output = [ "This line does not contain the pattern", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(0.0, sectorsUsed) def testParseSectorsUsed_004(self): """ Test with one-line output(s), containing the pattern. """ output = [ "'seek=10'", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(10.0*16.0, sectorsUsed) output = [ "' seek= 10 '", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(10.0*16.0, sectorsUsed) output = [ "Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'", ] sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(87566*16.0, sectorsUsed) def testParseSectorsUsed_005(self): """ Test with real growisofs output. 
""" output = [] output.append("Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'") output.append("Rock Ridge signatures found") output.append("Using THE_K000 for music4/The_Kings_Singers (The_Kingston_Trio)") output.append("Using COCKT000 for music/Various_Artists/Cocktail_Classics_-_Beethovens_Fifth_and_Others (Cocktail_Classics_-_Pachelbels_Canon_and_Others)") output.append("Using THE_V000 for music/Brahms/The_Violin_Sonatas (The_Viola_Sonatas) Using COMPL000 for music/Gershwin/Complete_Gershwin_2 (Complete_Gershwin_1)") output.append("Using SELEC000.MP3;1 for music/Marquette_Chorus/Selected_Christmas_Carols_For_Double_Choir.mp3 (Selected_Choruses_from_The_Lark.mp3)") output.append("Using SELEC001.MP3;1 for music/Marquette_Chorus/Selected_Choruses_from_The_Lark.mp3 (Selected_Choruses_from_Messiah.mp3)") output.append("Using IN_TH000.MP3;1 for music/Marquette_Chorus/In_the_Bleak_Midwinter.mp3 (In_the_Beginning.mp3) Using AFRIC000.MP3;1 for music/Marquette_Chorus/African_Noel-tb.mp3 (African_Noel-satb.mp3)") sectorsUsed = DvdWriter._parseSectorsUsed(output) self.failUnlessEqual(87566*16.0, sectorsUsed) ######################### # Test _buildWriteArgs() ######################### def testBuildWriteArgs_001(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=None, mediaLabel=None,dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = None mediaLabel = None dryRun = False self.failUnlessRaises(ValueError, DvdWriter._buildWriteArgs, newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) def testBuildWriteArgs_002(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=None, mediaLabel=None, dryRun=True. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = None mediaLabel = None dryRun = True self.failUnlessRaises(ValueError, DvdWriter._buildWriteArgs, newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) def testBuildWriteArgs_003(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_004(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_005(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_006(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_007(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=1, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 1 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=1", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_008(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=2, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = 2 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=2", "-M", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_009(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_010(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath="/path/to/image", entries=None, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = "/path/to/image" entries = None mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_011(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=False. 
""" newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-M", "/dev/dvd", "-r", "-graft-points", "path1", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_012(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-M", "/dev/dvd", "-r", "-graft-points", "path1", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_013(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, "path2":"graft2", "path3":"/path/to/graft3", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-Z", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", "path/to/graft3/=path3", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_014(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=None, imagePath=None, entries=, mediaLabel=None, dryRun=True. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = None imagePath = None entries = { "path1":None, "path2":"graft2", "path3":"/path/to/graft3", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-Z", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", "path/to/graft3/=path3", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_015(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=1, imagePath=None, entries=, mediaLabel=None, dryRun=False. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 1 imagePath = None entries = { "path1":None, "path2":"graft2", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=1", "-M", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_016(self): """ Test with newDisc=False, hardwareId="/dev/dvd", driveSpeed=2, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = False hardwareId = "/dev/dvd" driveSpeed = 2 imagePath = None entries = { "path1":None, "path2":"graft2", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=2", "-M", "/dev/dvd", "-r", "-graft-points", "path1", "graft2/=path2", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_017(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath=None, entries=, mediaLabel=None, dryRun=False. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = None dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_018(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath=None, entries=, mediaLabel=None, dryRun=True. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = None dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_019(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=3, imagePath="/path/to/image", entries=None, mediaLabel="BACKUP", dryRun=False. """ newDisc = True hardwareId = "/dev/dvd" driveSpeed = 3 imagePath = "/path/to/image" entries = None mediaLabel = "BACKUP" dryRun = False expected = [ "-use-the-force-luke=tty", "-speed=3", "-Z", "/dev/dvd=/path/to/image", ] actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) def testBuildWriteArgs_020(self): """ Test with newDisc=True, hardwareId="/dev/dvd", driveSpeed=4, imagePath=None, entries=, mediaLabel="BACKUP", dryRun=True. 
""" newDisc = True hardwareId = "/dev/dvd" driveSpeed = 4 imagePath = None entries = { "path1":None, "/path/to/path2":None, "/path/to/path3/":"/path/to/graft3/", } mediaLabel = "BACKUP" dryRun = True expected = [ "-use-the-force-luke=tty", "-dry-run", "-speed=4", "-Z", "/dev/dvd", "-V", "BACKUP", "-r", "-graft-points", "/path/to/path2", "path/to/graft3/=/path/to/path3/", "path1", ] # sorted order actual = DvdWriter._buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel, dryRun) self.failUnlessEqual(actual, expected) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMediaDefinition, 'test'), unittest.makeSuite(TestMediaCapacity, 'test'), unittest.makeSuite(TestDvdWriter, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/capacitytests.py0000664000175000017500000010007112560016766022633 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests capacity extension functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/extend/capacity.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in extend/capacity.py.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to diagnose
   and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we do
   is extract a node, build some XML from it, and then feed that XML back into
   another object's constructor.
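   A minimal standalone illustration of this round-trip idea, using only the
   stdlib minidom rather than CedarBackup2's own xmlutil helpers:

```python
import xml.dom.minidom

def xml_round_trip_equal(xml_string):
   """Return True if serializing and re-parsing yields an identical serialization."""
   # Parse once and serialize; then re-parse that serialization and serialize
   # again.  If both serializations agree, the extract/serialize cycle is stable.
   first = xml.dom.minidom.parseString(xml_string).toxml()
   second = xml.dom.minidom.parseString(first).toxml()
   return first == second
```

   The real tests apply the same idea at the object level: dump an object to a
   document, rebuild an object from that document, and compare the two objects.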
   If that parse process succeeds and the old object is equal to the new
   object, we assume that the extract was successful.

   It would arguably be better if we could do a completely independent check -
   but implementing that check would be equivalent to re-implementing all of
   the existing functionality that we're validating here!  After all, the most
   important thing is that data can move seamlessly from object to XML
   document and back to object.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an average
   build environment.  There is no need to use a CAPACITYTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as for
   some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup2.util import UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup2.testutil import hexFloatLiteralAllowed, findResources, failUnlessAssignRaises
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.capacity import LocalConfig, CapacityConfig, ByteQuantity, PercentageQuantity


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "capacity.conf.1", "capacity.conf.2", "capacity.conf.3", "capacity.conf.4", ]


#######################################################################
# Test Case Classes
#######################################################################

###############################
# TestPercentageQuantity class
###############################

class TestPercentageQuantity(unittest.TestCase):

   """Tests
for the PercentageQuantity class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = PercentageQuantity() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.percentage) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ quantity = PercentageQuantity("6") self.failUnlessEqual("6", quantity.quantity) self.failUnlessEqual(6.0, quantity.percentage) def testConstructor_003(self): """ Test assignment of quantity attribute, None value. """ quantity = PercentageQuantity(quantity="1.0") self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.percentage) quantity.quantity = None self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.percentage) def testConstructor_004(self): """ Test assignment of quantity attribute, valid values. 
""" quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessEqual(0.0, quantity.percentage) quantity.quantity = "1.0" self.failUnlessEqual("1.0", quantity.quantity) self.failUnlessEqual(1.0, quantity.percentage) quantity.quantity = ".1" self.failUnlessEqual(".1", quantity.quantity) self.failUnlessEqual(0.1, quantity.percentage) quantity.quantity = "12" self.failUnlessEqual("12", quantity.quantity) self.failUnlessEqual(12.0, quantity.percentage) quantity.quantity = "0.5" self.failUnlessEqual("0.5", quantity.quantity) self.failUnlessEqual(0.5, quantity.percentage) quantity.quantity = "0.25E2" self.failUnlessEqual("0.25E2", quantity.quantity) self.failUnlessEqual(0.25e2, quantity.percentage) if hexFloatLiteralAllowed(): # Some interpreters allow this, some don't quantity.quantity = "0x0C" self.failUnlessEqual("0x0C", quantity.quantity) self.failUnlessEqual(12.0, quantity.percentage) def testConstructor_005(self): """ Test assignment of quantity attribute, invalid value (empty). """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "") self.failUnlessEqual(None, quantity.quantity) def testConstructor_006(self): """ Test assignment of quantity attribute, invalid value (not a floating point number). """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "blech") self.failUnlessEqual(None, quantity.quantity) def testConstructor_007(self): """ Test assignment of quantity attribute, invalid value (negative number). 
""" quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-3") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-6.8") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-0.2") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "-.1") self.failUnlessEqual(None, quantity.quantity) def testConstructor_008(self): """ Test assignment of quantity attribute, invalid value (larger than 100%). """ quantity = PercentageQuantity() self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "100.0001") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "101") self.failUnlessEqual(None, quantity.quantity) self.failUnlessAssignRaises(ValueError, quantity, "quantity", "1e6") self.failUnlessEqual(None, quantity.quantity) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ quantity1 = PercentageQuantity() quantity2 = PercentageQuantity() self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" quantity1 = PercentageQuantity("12") quantity2 = PercentageQuantity("12") self.failUnlessEqual(quantity1, quantity2) self.failUnless(quantity1 == quantity2) self.failUnless(not quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(quantity1 >= quantity2) self.failUnless(not quantity1 != quantity2) def testComparison_003(self): """ Test comparison of two differing objects, quantity differs (one None). """ quantity1 = PercentageQuantity() quantity2 = PercentageQuantity(quantity="12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) def testComparison_004(self): """ Test comparison of two differing objects, quantity differs. """ quantity1 = PercentageQuantity("10") quantity2 = PercentageQuantity("12") self.failIfEqual(quantity1, quantity2) self.failUnless(not quantity1 == quantity2) self.failUnless(quantity1 < quantity2) self.failUnless(quantity1 <= quantity2) self.failUnless(not quantity1 > quantity2) self.failUnless(not quantity1 >= quantity2) self.failUnless(quantity1 != quantity2) ########################## # TestCapacityConfig class ########################## class TestCapacityConfig(unittest.TestCase): """Tests for the CapacityConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = CapacityConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) self.failUnlessEqual(None, capacity.minBytes) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values. """ capacity = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("2.0", UNIT_KBYTES)) self.failUnlessEqual(PercentageQuantity("63.2"), capacity.maxPercentage) self.failUnlessEqual(ByteQuantity("2.0", UNIT_KBYTES), capacity.minBytes) def testConstructor_003(self): """ Test assignment of maxPercentage attribute, None value. """ capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) self.failUnlessEqual(PercentageQuantity("63.2"), capacity.maxPercentage) capacity.maxPercentage = None self.failUnlessEqual(None, capacity.maxPercentage) def testConstructor_004(self): """ Test assignment of maxPercentage attribute, valid value. """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) capacity.maxPercentage = PercentageQuantity("63.2") self.failUnlessEqual(PercentageQuantity("63.2"), capacity.maxPercentage) def testConstructor_005(self): """ Test assignment of maxPercentage attribute, invalid value (empty). """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) self.failUnlessAssignRaises(ValueError, capacity, "maxPercentage", "") self.failUnlessEqual(None, capacity.maxPercentage) def testConstructor_006(self): """ Test assignment of maxPercentage attribute, invalid value (not a PercentageQuantity). 
""" capacity = CapacityConfig() self.failUnlessEqual(None, capacity.maxPercentage) self.failUnlessAssignRaises(ValueError, capacity, "maxPercentage", "1.0 GB") self.failUnlessEqual(None, capacity.maxPercentage) def testConstructor_007(self): """ Test assignment of minBytes attribute, None value. """ capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_KBYTES)) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), capacity.minBytes) capacity.minBytes = None self.failUnlessEqual(None, capacity.minBytes) def testConstructor_008(self): """ Test assignment of minBytes attribute, valid value. """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.minBytes) capacity.minBytes = ByteQuantity("1.00", UNIT_KBYTES) self.failUnlessEqual(ByteQuantity("1.00", UNIT_KBYTES), capacity.minBytes) def testConstructor_009(self): """ Test assignment of minBytes attribute, invalid value (empty). """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.minBytes) self.failUnlessAssignRaises(ValueError, capacity, "minBytes", "") self.failUnlessEqual(None, capacity.minBytes) def testConstructor_010(self): """ Test assignment of minBytes attribute, invalid value (not a ByteQuantity). """ capacity = CapacityConfig() self.failUnlessEqual(None, capacity.minBytes) self.failUnlessAssignRaises(ValueError, capacity, "minBytes", 12) self.failUnlessEqual(None, capacity.minBytes) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" capacity1 = CapacityConfig() capacity2 = CapacityConfig() self.failUnlessEqual(capacity1, capacity2) self.failUnless(capacity1 == capacity2) self.failUnless(not capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(capacity1 >= capacity2) self.failUnless(not capacity1 != capacity2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ capacity1 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES)) self.failUnlessEqual(capacity1, capacity2) self.failUnless(capacity1 == capacity2) self.failUnless(not capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(capacity1 >= capacity2) self.failUnless(not capacity1 != capacity2) def testComparison_003(self): """ Test comparison of two differing objects, maxPercentage differs (one None). """ capacity1 = CapacityConfig() capacity2 = CapacityConfig(maxPercentage=PercentageQuantity("63.2")) self.failIfEqual(capacity1, capacity2) self.failUnless(not capacity1 == capacity2) self.failUnless(capacity1 < capacity2) self.failUnless(capacity1 <= capacity2) self.failUnless(not capacity1 > capacity2) self.failUnless(not capacity1 >= capacity2) self.failUnless(capacity1 != capacity2) def testComparison_004(self): """ Test comparison of two differing objects, maxPercentage differs. 
      """
      capacity1 = CapacityConfig(PercentageQuantity("15.0"), ByteQuantity("1.00", UNIT_MBYTES))
      capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES))
      self.failIfEqual(capacity1, capacity2)
      self.failUnless(not capacity1 == capacity2)
      self.failUnless(capacity1 < capacity2)
      self.failUnless(capacity1 <= capacity2)
      self.failUnless(not capacity1 > capacity2)
      self.failUnless(not capacity1 >= capacity2)
      self.failUnless(capacity1 != capacity2)

   def testComparison_005(self):
      """
      Test comparison of two differing objects, minBytes differs (one None).
      """
      capacity1 = CapacityConfig()
      capacity2 = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES))
      self.failIfEqual(capacity1, capacity2)
      self.failUnless(not capacity1 == capacity2)
      self.failUnless(capacity1 < capacity2)
      self.failUnless(capacity1 <= capacity2)
      self.failUnless(not capacity1 > capacity2)
      self.failUnless(not capacity1 >= capacity2)
      self.failUnless(capacity1 != capacity2)

   def testComparison_006(self):
      """
      Test comparison of two differing objects, minBytes differs.
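The ordering asserted by these comparisons — an object whose attribute is unset (None) sorts before one with the attribute set, and otherwise attributes compare field by field — can be modelled with functools.total_ordering. The sketch below is an invented stand-in for that semantics, not Cedar Backup's actual comparison implementation.

```python
from functools import total_ordering

# Invented model of the ordering these tests assert: attributes compare
# field by field, and an unset (None) attribute sorts before any set
# value.  This is not Cedar Backup's actual comparison implementation.
@total_ordering
class CapacityStandIn(object):

   def __init__(self, maxPercentage=None, minBytes=None):
      self.maxPercentage = maxPercentage
      self.minBytes = minBytes

   def _key(self):
      # (False, 0) for None sorts before (True, value) for anything set
      return tuple((v is not None, v if v is not None else 0)
                   for v in (self.maxPercentage, self.minBytes))

   def __eq__(self, other):
      return self._key() == other._key()

   def __lt__(self, other):
      return self._key() < other._key()
```

total_ordering fills in `<=`, `>`, `>=`, and `!=` from `__eq__` and `__lt__`, which is why the tests above can exercise all six operators against a single ordering definition.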
      """
      capacity1 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("0.5", UNIT_MBYTES))
      capacity2 = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES))
      self.failIfEqual(capacity1, capacity2)
      self.failUnless(not capacity1 == capacity2)
      self.failUnless(capacity1 < capacity2)
      self.failUnless(capacity1 <= capacity2)
      self.failUnless(not capacity1 > capacity2)
      self.failUnless(not capacity1 >= capacity2)
      self.failUnless(capacity1 != capacity2)


########################
# TestLocalConfig class
########################

class TestLocalConfig(unittest.TestCase):

   """Tests for the LocalConfig class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      pass

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   def validateAddConfig(self, origConfig):
      """
      Validates that a document dumped from C{LocalConfig.addConfig} results in an identical object.

      We dump a document containing just the capacity configuration, and then
      make sure that if we push that document back into the C{LocalConfig}
      object, the resulting object matches the original.

      The C{self.failUnlessEqual} method is used for the validation, so if the
      method call returns normally, everything is OK.

      @param origConfig: Original configuration.
      """
      (xmlDom, parentNode) = createOutputDom()
      origConfig.addConfig(xmlDom, parentNode)
      xmlData = serializeDom(xmlDom)
      newConfig = LocalConfig(xmlData=xmlData, validate=False)
      self.failUnlessEqual(origConfig, newConfig)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
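The round-trip idea behind validateAddConfig() — emit configuration into a DOM, serialize it, reparse, and verify nothing was lost — can be shown with only the standard library. The element names below are invented for the example; Cedar Backup's real document structure and helpers (createOutputDom, serializeDom) are defined in its own modules.

```python
import xml.dom.minidom

# Invented mini round-trip in the spirit of validateAddConfig(): write a
# value into a DOM, serialize it, reparse, and check the value survives.
# The <capacity>/<min_bytes> names are illustrative only.
def addConfig(document, parent, minBytes):
   """Append a min_bytes element holding the given text to the parent node."""
   node = document.createElement("min_bytes")
   node.appendChild(document.createTextNode(minBytes))
   parent.appendChild(node)

def extractConfig(xmlData):
   """Reparse serialized XML and pull the min_bytes text back out."""
   dom = xml.dom.minidom.parseString(xmlData)
   return dom.getElementsByTagName("min_bytes")[0].firstChild.data
```

A value that survives serialize-then-reparse unchanged is exactly what the equality check in validateAddConfig() asserts for whole configuration objects.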
      """
      obj = LocalConfig()
      obj.__repr__()
      obj.__str__()


   #####################################################
   # Test basic constructor and attribute functionality
   #####################################################

   def testConstructor_001(self):
      """
      Test empty constructor, validate=False.
      """
      config = LocalConfig(validate=False)
      self.failUnlessEqual(None, config.capacity)

   def testConstructor_002(self):
      """
      Test empty constructor, validate=True.
      """
      config = LocalConfig(validate=True)
      self.failUnlessEqual(None, config.capacity)

   def testConstructor_003(self):
      """
      Test with empty config document as both data and file, validate=False.
      """
      path = self.resources["capacity.conf.1"]
      contents = open(path).read()
      self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False)

   def testConstructor_004(self):
      """
      Test assignment of capacity attribute, None value.
      """
      config = LocalConfig()
      config.capacity = None
      self.failUnlessEqual(None, config.capacity)

   def testConstructor_005(self):
      """
      Test assignment of capacity attribute, valid value.
      """
      config = LocalConfig()
      config.capacity = CapacityConfig()
      self.failUnlessEqual(CapacityConfig(), config.capacity)

   def testConstructor_006(self):
      """
      Test assignment of capacity attribute, invalid value (not CapacityConfig).
      """
      config = LocalConfig()
      self.failUnlessAssignRaises(ValueError, config, "capacity", "STRING!")


   ############################
   # Test comparison operators
   ############################

   def testComparison_001(self):
      """
      Test comparison of two identical objects, all attributes None.
      """
      config1 = LocalConfig()
      config2 = LocalConfig()
      self.failUnlessEqual(config1, config2)
      self.failUnless(config1 == config2)
      self.failUnless(not config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(config1 >= config2)
      self.failUnless(not config1 != config2)

   def testComparison_002(self):
      """
      Test comparison of two identical objects, all attributes non-None.
      """
      config1 = LocalConfig()
      config1.capacity = CapacityConfig()
      config2 = LocalConfig()
      config2.capacity = CapacityConfig()
      self.failUnlessEqual(config1, config2)
      self.failUnless(config1 == config2)
      self.failUnless(not config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(config1 >= config2)
      self.failUnless(not config1 != config2)

   def testComparison_003(self):
      """
      Test comparison of two differing objects, capacity differs (one None).
      """
      config1 = LocalConfig()
      config2 = LocalConfig()
      config2.capacity = CapacityConfig()
      self.failIfEqual(config1, config2)
      self.failUnless(not config1 == config2)
      self.failUnless(config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(not config1 >= config2)
      self.failUnless(config1 != config2)

   def testComparison_004(self):
      """
      Test comparison of two differing objects, capacity differs.
      """
      config1 = LocalConfig()
      config1.capacity = CapacityConfig(minBytes=ByteQuantity("0.1", UNIT_MBYTES))
      config2 = LocalConfig()
      config2.capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES))
      self.failIfEqual(config1, config2)
      self.failUnless(not config1 == config2)
      self.failUnless(config1 < config2)
      self.failUnless(config1 <= config2)
      self.failUnless(not config1 > config2)
      self.failUnless(not config1 >= config2)
      self.failUnless(config1 != config2)


   ######################
   # Test validate logic
   ######################

   def testValidate_001(self):
      """
      Test validate on a None capacity section.
      """
      config = LocalConfig()
      config.capacity = None
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_002(self):
      """
      Test validate on an empty capacity section.
      """
      config = LocalConfig()
      config.capacity = CapacityConfig()
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_003(self):
      """
      Test validate on a non-empty capacity section with no values filled in.
      """
      config = LocalConfig()
      config.capacity = CapacityConfig(None, None)
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_004(self):
      """
      Test validate on a non-empty capacity section with both max percentage and min bytes filled in.
      """
      config = LocalConfig()
      config.capacity = CapacityConfig(PercentageQuantity("63.2"), ByteQuantity("1.00", UNIT_MBYTES))
      self.failUnlessRaises(ValueError, config.validate)

   def testValidate_005(self):
      """
      Test validate on a non-empty capacity section with only max percentage filled in.
      """
      config = LocalConfig()
      config.capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.2"))
      config.validate()

   def testValidate_006(self):
      """
      Test validate on a non-empty capacity section with only min bytes filled in.
      """
      config = LocalConfig()
      config.capacity = CapacityConfig(minBytes=ByteQuantity("1.00", UNIT_MBYTES))
      config.validate()


   ############################
   # Test parsing of documents
   ############################

   # Some of the byte-size parsing logic is tested more fully in splittests.py.
   # I decided not to duplicate it here, since it's shared from config.py.

   def testParse_001(self):
      """
      Parse empty config document.
      """
      path = self.resources["capacity.conf.1"]
      contents = open(path).read()
      self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True)
      self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True)
      config = LocalConfig(xmlPath=path, validate=False)
      self.failUnlessEqual(None, config.capacity)
      config = LocalConfig(xmlData=contents, validate=False)
      self.failUnlessEqual(None, config.capacity)

   def testParse_002(self):
      """
      Parse config document that configures max percentage.
      """
      path = self.resources["capacity.conf.2"]
      contents = open(path).read()
      config = LocalConfig(xmlPath=path, validate=False)
      self.failIfEqual(None, config.capacity)
      self.failUnlessEqual(PercentageQuantity("63.2"), config.capacity.maxPercentage)
      self.failUnlessEqual(None, config.capacity.minBytes)
      config = LocalConfig(xmlData=contents, validate=False)
      self.failIfEqual(None, config.capacity)
      self.failUnlessEqual(PercentageQuantity("63.2"), config.capacity.maxPercentage)
      self.failUnlessEqual(None, config.capacity.minBytes)

   def testParse_003(self):
      """
      Parse config document that configures min bytes, size in bytes.
      """
      path = self.resources["capacity.conf.3"]
      contents = open(path).read()
      config = LocalConfig(xmlPath=path, validate=False)
      self.failIfEqual(None, config.capacity)
      self.failUnlessEqual(None, config.capacity.maxPercentage)
      self.failUnlessEqual(ByteQuantity("18", UNIT_BYTES), config.capacity.minBytes)
      config = LocalConfig(xmlData=contents, validate=False)
      self.failIfEqual(None, config.capacity)
      self.failUnlessEqual(None, config.capacity.maxPercentage)
      self.failUnlessEqual(ByteQuantity("18", UNIT_BYTES), config.capacity.minBytes)

   def testParse_004(self):
      """
      Parse config document with filled-in values, size in KB.
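The quantities these parse tests exercise ("18" bytes, "1.25" KB, and so on) reduce to a decimal string times a unit multiplier. A hedged sketch of that arithmetic follows; the UNITS names are invented for the example, and Cedar Backup's real ByteQuantity and UNIT_* constants are defined in config.py.

```python
# Rough model of the byte-quantity arithmetic behind these parse tests: a
# decimal string plus a unit multiplier yields an absolute byte count.
# The UNITS names are invented; Cedar Backup's real ByteQuantity and
# UNIT_* constants are defined in config.py.
UNITS = {"BYTES": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def toBytes(quantity, unit):
   """Convert a quantity string and a unit name to a float byte count."""
   return float(quantity) * UNITS[unit]
```

Keeping the quantity as a string until conversion preserves exactly what appeared in the configuration document, which is why the assertions above compare ByteQuantity objects rather than raw byte counts.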
      """
      path = self.resources["capacity.conf.4"]
      contents = open(path).read()
      config = LocalConfig(xmlPath=path, validate=False)
      self.failIfEqual(None, config.capacity)
      self.failUnlessEqual(None, config.capacity.maxPercentage)
      self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.capacity.minBytes)
      config = LocalConfig(xmlData=contents, validate=False)
      self.failIfEqual(None, config.capacity)
      self.failUnlessEqual(None, config.capacity.maxPercentage)
      self.failUnlessEqual(ByteQuantity("1.25", UNIT_KBYTES), config.capacity.minBytes)


   ###################
   # Test addConfig()
   ###################

   def testAddConfig_001(self):
      """
      Test with empty config document.
      """
      capacity = CapacityConfig()
      config = LocalConfig()
      config.capacity = capacity
      self.validateAddConfig(config)

   def testAddConfig_002(self):
      """
      Test with max percentage value set.
      """
      capacity = CapacityConfig(maxPercentage=PercentageQuantity("63.29128310980123"))
      config = LocalConfig()
      config.capacity = capacity
      self.validateAddConfig(config)

   def testAddConfig_003(self):
      """
      Test with min bytes value set, byte values.
      """
      capacity = CapacityConfig(minBytes=ByteQuantity("121231", UNIT_BYTES))
      config = LocalConfig()
      config.capacity = capacity
      self.validateAddConfig(config)

   def testAddConfig_004(self):
      """
      Test with min bytes value set, KB values.
      """
      capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_KBYTES))
      config = LocalConfig()
      config.capacity = capacity
      self.validateAddConfig(config)

   def testAddConfig_005(self):
      """
      Test with min bytes value set, MB values.
      """
      capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_MBYTES))
      config = LocalConfig()
      config.capacity = capacity
      self.validateAddConfig(config)

   def testAddConfig_006(self):
      """
      Test with min bytes value set, GB values.
      """
      capacity = CapacityConfig(minBytes=ByteQuantity("63352", UNIT_GBYTES))
      config = LocalConfig()
      config.capacity = capacity
      self.validateAddConfig(config)


#######################################################################
# Suite definition
#######################################################################

# pylint: disable=C0330
def suite():
   """Returns a suite containing all the test cases in this module."""
   return unittest.TestSuite((
      unittest.makeSuite(TestPercentageQuantity, 'test'),
      unittest.makeSuite(TestCapacityConfig, 'test'),
      unittest.makeSuite(TestLocalConfig, 'test'),
   ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()


CedarBackup2-2.26.5/testcase/actionsutiltests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests action utility functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/actions/util.py.

Code Coverage
=============

   This module contains individual tests for the public functions and classes
   implemented in actions/util.py.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to diagnose
   and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an average
   build environment.  There is no need to use an ACTIONSUTILTESTS_FULL
   environment variable to provide a "reduced feature set" test suite as for
   some of the other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import os
import unittest
import tempfile

from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar
from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile
from CedarBackup2.extend.encrypt import ENCRYPT_INDICATOR


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "tree1.tar.gz", "tree8.tar.gz", "tree15.tar.gz", "tree17.tar.gz",
              "tree18.tar.gz", "tree19.tar.gz", "tree20.tar.gz", ]

INVALID_PATH = "bogus"  # This path name should never exist


#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the various public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except:
         pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   #######################
   # Test findDailyDirs()
   #######################

   def testFindDailyDirs_001(self):
      """
      Test with a nonexistent staging directory.
      """
      stagingDir = self.buildPath([INVALID_PATH])
      self.failUnlessRaises(ValueError, findDailyDirs, stagingDir, ENCRYPT_INDICATOR)

   def testFindDailyDirs_002(self):
      """
      Test with an empty staging directory.
      """
      self.extractTar("tree8")
      stagingDir = self.buildPath(["tree8", "dir001", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual([], dailyDirs)

   def testFindDailyDirs_003(self):
      """
      Test with a staging directory containing only files.
      """
      self.extractTar("tree1")
      stagingDir = self.buildPath(["tree1", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual([], dailyDirs)

   def testFindDailyDirs_004(self):
      """
      Test with a staging directory containing only links.
      """
      self.extractTar("tree15")
      stagingDir = self.buildPath(["tree15", "dir001", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual([], dailyDirs)

   def testFindDailyDirs_005(self):
      """
      Test with a valid staging directory, where the daily directories do NOT
      contain the encrypt indicator.
      """
      self.extractTar("tree17")
      stagingDir = self.buildPath(["tree17", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual(6, len(dailyDirs))
      self.failUnless(self.buildPath([ "tree17", "2006", "12", "29", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree17", "2006", "12", "30", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree17", "2006", "12", "31", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree17", "2007", "01", "01", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree17", "2007", "01", "02", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree17", "2007", "01", "03", ]) in dailyDirs)

   def testFindDailyDirs_006(self):
      """
      Test with a valid staging directory, where the daily directories DO
      contain the encrypt indicator.
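The behavior these cases pin down — scan a staging area for dated day directories that do not yet contain the indicator file — can be approximated with os.walk. This sketch is an assumption-laden stand-in, not the real findDailyDirs() from CedarBackup2/actions/util.py; the YYYY/MM/DD regex is inferred from the layout of the test trees above.

```python
import os
import re

# Stdlib approximation of the findDailyDirs() behavior exercised above:
# walk a staging directory and collect date-shaped YYYY/MM/DD day
# directories that do not contain the named indicator file.  The real
# function lives in CedarBackup2/actions/util.py; the regex below is an
# assumption based on the layout of the test trees.
DAILY_DIR = re.compile(r"\d{4}[/\\]\d{2}[/\\]\d{2}$")

def findDailyDirsSketch(stagingDir, indicatorFile):
   """Return day directories under stagingDir lacking indicatorFile."""
   dailyDirs = []
   for root, dirs, files in os.walk(stagingDir):
      if DAILY_DIR.search(root) and indicatorFile not in files:
         dailyDirs.append(root)
   return dailyDirs
```

Matching on the full YYYY/MM/DD suffix is what makes an indicator sitting in some unrelated directory irrelevant, which is the behavior testFindDailyDirs_008 asserts for the real function.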
      """
      self.extractTar("tree18")
      stagingDir = self.buildPath(["tree18", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual([], dailyDirs)

   def testFindDailyDirs_007(self):
      """
      Test with a valid staging directory, where some daily directories
      contain the encrypt indicator and others do not.
      """
      self.extractTar("tree19")
      stagingDir = self.buildPath(["tree19", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual(3, len(dailyDirs))
      self.failUnless(self.buildPath([ "tree19", "2006", "12", "30", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree19", "2007", "01", "01", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree19", "2007", "01", "03", ]) in dailyDirs)

   def testFindDailyDirs_008(self):
      """
      Test for the case where directories other than daily directories contain
      the encrypt indicator (the indicator should be ignored).
      """
      self.extractTar("tree20")
      stagingDir = self.buildPath(["tree20", ])
      dailyDirs = findDailyDirs(stagingDir, ENCRYPT_INDICATOR)
      self.failUnlessEqual(6, len(dailyDirs))
      self.failUnless(self.buildPath([ "tree20", "2006", "12", "29", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree20", "2006", "12", "30", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree20", "2006", "12", "31", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree20", "2007", "01", "01", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree20", "2007", "01", "02", ]) in dailyDirs)
      self.failUnless(self.buildPath([ "tree20", "2007", "01", "03", ]) in dailyDirs)

   ############################
   # Test writeIndicatorFile()
   ############################

   def testWriteIndicatorFile_001(self):
      """
      Test with a nonexistent staging directory.
      """
      stagingDir = self.buildPath([INVALID_PATH])
      self.failUnlessRaises(IOError, writeIndicatorFile, stagingDir, ENCRYPT_INDICATOR, None, None)

   def testWriteIndicatorFile_002(self):
      """
      Test with a valid staging directory.
      """
      self.extractTar("tree8")
      stagingDir = self.buildPath(["tree8", "dir001", ])
      writeIndicatorFile(stagingDir, ENCRYPT_INDICATOR, None, None)
      self.failUnless(os.path.exists(self.buildPath(["tree8", "dir001", ENCRYPT_INDICATOR, ])))


#######################################################################
# Suite definition
#######################################################################

# pylint: disable=C0330
def suite():
   """Returns a suite containing all the test cases in this module."""
   return unittest.TestSuite((
      unittest.makeSuite(TestFunctions, 'test'),
   ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()


CedarBackup2-2.26.5/testcase/__init__.py

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides package initialization.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Package initialization
########################################################################

"""
This causes the test directory to be a package.
"""

__all__ = [ ]


CedarBackup2-2.26.5/testcase/clitests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2005,2007,2010,2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests command-line interface functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/cli.py.

Code Coverage
=============

   This module contains individual tests for many of the public functions and
   classes implemented in cli.py.

   Where possible, we test functions that print output by passing a custom
   file descriptor.  Sometimes, we only ensure that a function or method runs
   without failure, and we don't validate what its result is or what it prints
   out.
Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to diagnose
   and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an average
   build environment.  There is no need to use a CLITESTS_FULL environment
   variable to provide a "reduced feature set" test suite as for some of the
   other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

import unittest
from os.path import isdir, isfile, islink, isabs, exists
from getopt import GetoptError

from CedarBackup2.testutil import failUnlessAssignRaises, captureOutput
from CedarBackup2.config import OptionsConfig, PeersConfig, ExtensionsConfig
from CedarBackup2.config import LocalPeer, RemotePeer
from CedarBackup2.config import ExtendedAction, ActionDependencies, PreActionHook, PostActionHook
from CedarBackup2.cli import _usage, _version, _diagnostics
from CedarBackup2.cli import Options
from CedarBackup2.cli import _ActionSet
from CedarBackup2.action import executeCollect, executeStage, executeStore, executePurge, executeRebuild, executeValidate


#######################################################################
# Test Case Classes
#######################################################################

######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ########################
   # Test simple functions
   ########################

   def testSimpleFuncs_001(self):
      """
      Test that the _usage() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_usage)

   def testSimpleFuncs_002(self):
      """
      Test that the _version() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_version)

   def testSimpleFuncs_003(self):
      """
      Test that the _diagnostics() function runs without errors.
      We don't care what the output is, and we don't check.
      """
      captureOutput(_diagnostics)


####################
# TestOptions class
####################

class TestOptions(unittest.TestCase):

   """Tests for the Options class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = Options()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no arguments.
      """
      options = Options()
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_002(self):
      """
      Test constructor with validate=False, no other arguments.
      """
      options = Options(validate=False)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_003(self):
      """
      Test constructor with argumentList=[], validate=False.
      """
      options = Options(argumentList=[], validate=False)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_004(self):
      """
      Test constructor with argumentString="", validate=False.
      """
      options = Options(argumentString="", validate=False)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_005(self):
      """
      Test constructor with argumentList=["--help", ], validate=False.
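Both the argumentList and argumentString forms feed getopt-style parsing, with each recognized switch setting one flag on the resulting object. A simplified model of the flag handling these constructor tests exercise follows; the real Options class in CedarBackup2/cli.py supports many more switches, and the short-option letters for verbose and quiet here are assumptions made for the example.

```python
import getopt

# Simplified, invented model of the Options flag parsing exercised by the
# constructor tests: a getopt option table with each switch collected into
# a dictionary.  The -b/-q letters are assumptions; only -h/--help and
# -V/--version are evidenced by the tests themselves.
def parseFlags(argumentList):
   """Parse boolean command-line flags from a list of arguments."""
   flags = {"help": False, "version": False, "verbose": False, "quiet": False}
   opts, remaining = getopt.getopt(argumentList, "hVbq",
                                   ["help", "version", "verbose", "quiet"])
   for switch, _ in opts:
      if switch in ("-h", "--help"):
         flags["help"] = True
      elif switch in ("-V", "--version"):
         flags["version"] = True
      elif switch in ("-b", "--verbose"):
         flags["verbose"] = True
      elif switch in ("-q", "--quiet"):
         flags["quiet"] = True
   return flags
```

An unrecognized switch makes getopt.getopt() raise GetoptError, which matches the GetoptError import that appears at the top of this test module.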
      """
      options = Options(argumentList=["--help", ], validate=False)
      self.failUnlessEqual(True, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_006(self):
      """
      Test constructor with argumentString="--help", validate=False.
      """
      options = Options(argumentString="--help", validate=False)
      self.failUnlessEqual(True, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_007(self):
      """
      Test constructor with argumentList=["-h", ], validate=False.
""" options = Options(argumentList=["-h", ], validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_008(self): """ Test constructor with argumentString="-h", validate=False. """ options = Options(argumentString="-h", validate=False) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_009(self): """ Test constructor with argumentList=["--version", ], validate=False. 
""" options = Options(argumentList=["--version", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_010(self): """ Test constructor with argumentString="--version", validate=False. """ options = Options(argumentString="--version", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_011(self): """ Test constructor with argumentList=["-V", ], validate=False. 
""" options = Options(argumentList=["-V", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_012(self): """ Test constructor with argumentString="-V", validate=False. """ options = Options(argumentString="-V", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(True, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_013(self): """ Test constructor with argumentList=["--verbose", ], validate=False. 
""" options = Options(argumentList=["--verbose", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_014(self): """ Test constructor with argumentString="--verbose", validate=False. """ options = Options(argumentString="--verbose", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_015(self): """ Test constructor with argumentList=["-b", ], validate=False. 
""" options = Options(argumentList=["-b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_016(self): """ Test constructor with argumentString="-b", validate=False. """ options = Options(argumentString="-b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_017(self): """ Test constructor with argumentList=["--quiet", ], validate=False. 
""" options = Options(argumentList=["--quiet", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_018(self): """ Test constructor with argumentString="--quiet", validate=False. """ options = Options(argumentString="--quiet", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_019(self): """ Test constructor with argumentList=["-q", ], validate=False. 
""" options = Options(argumentList=["-q", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_020(self): """ Test constructor with argumentString="-q", validate=False. """ options = Options(argumentString="-q", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(True, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_021(self): """ Test constructor with argumentList=["--config", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--config", ], validate=False) def testConstructor_022(self): """ Test constructor with argumentString="--config", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--config", validate=False) def testConstructor_023(self): """ Test constructor with argumentList=["-c", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-c", ], validate=False) def testConstructor_024(self): """ Test constructor with argumentString="-c", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-c", validate=False) def testConstructor_025(self): """ Test constructor with argumentList=["--config", "something", ], validate=False. """ options = Options(argumentList=["--config", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_026(self): """ Test constructor with argumentString="--config something", validate=False. 
""" options = Options(argumentString="--config something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_027(self): """ Test constructor with argumentList=["-c", "something", ], validate=False. """ options = Options(argumentList=["-c", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_028(self): """ Test constructor with argumentString="-c something", validate=False. 
""" options = Options(argumentString="-c something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual("something", options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_029(self): """ Test constructor with argumentList=["--full", ], validate=False. """ options = Options(argumentList=["--full", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_030(self): """ Test constructor with argumentString="--full", validate=False. 
""" options = Options(argumentString="--full", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_031(self): """ Test constructor with argumentList=["-f", ], validate=False. """ options = Options(argumentList=["-f", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_032(self): """ Test constructor with argumentString="-f", validate=False. 
""" options = Options(argumentString="-f", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(True, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_033(self): """ Test constructor with argumentList=["--logfile", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--logfile", ], validate=False) def testConstructor_034(self): """ Test constructor with argumentString="--logfile", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="--logfile", validate=False) def testConstructor_035(self): """ Test constructor with argumentList=["-l", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-l", ], validate=False) def testConstructor_036(self): """ Test constructor with argumentString="-l", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-l", validate=False) def testConstructor_037(self): """ Test constructor with argumentList=["--logfile", "something", ], validate=False. 
""" options = Options(argumentList=["--logfile", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_038(self): """ Test constructor with argumentString="--logfile something", validate=False. """ options = Options(argumentString="--logfile something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_039(self): """ Test constructor with argumentList=["-l", "something", ], validate=False. 
""" options = Options(argumentList=["-l", "something", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_040(self): """ Test constructor with argumentString="-l something", validate=False. """ options = Options(argumentString="-l something", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual("something", options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_041(self): """ Test constructor with argumentList=["--owner", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--owner", ], validate=False) def testConstructor_042(self): """ Test constructor with argumentString="--owner", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--owner", validate=False) def testConstructor_043(self): """ Test constructor with argumentList=["-o", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-o", ], validate=False) def testConstructor_044(self): """ Test constructor with argumentString="-o", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-o", validate=False) def testConstructor_045(self): """ Test constructor with argumentList=["--owner", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=False) def testConstructor_046(self): """ Test constructor with argumentString="--owner something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="--owner something", validate=False) def testConstructor_047(self): """ Test constructor with argumentList=["-o", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["-o", "something", ], validate=False) def testConstructor_048(self): """ Test constructor with argumentString="-o something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="-o something", validate=False) def testConstructor_049(self): """ Test constructor with argumentList=["--owner", "a:b", ], validate=False. 
""" options = Options(argumentList=["--owner", "a:b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_050(self): """ Test constructor with argumentString="--owner a:b", validate=False. """ options = Options(argumentString="--owner a:b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_051(self): """ Test constructor with argumentList=["-o", "a:b", ], validate=False. 
""" options = Options(argumentList=["-o", "a:b", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_052(self): """ Test constructor with argumentString="-o a:b", validate=False. """ options = Options(argumentString="-o a:b", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(("a", "b"), options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_053(self): """ Test constructor with argumentList=["--mode", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["--mode", ], validate=False) def testConstructor_054(self): """ Test constructor with argumentString="--mode", validate=False. 
""" self.failUnlessRaises(GetoptError, Options, argumentString="--mode", validate=False) def testConstructor_055(self): """ Test constructor with argumentList=["-m", ], validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentList=["-m", ], validate=False) def testConstructor_056(self): """ Test constructor with argumentString="-m", validate=False. """ self.failUnlessRaises(GetoptError, Options, argumentString="-m", validate=False) def testConstructor_057(self): """ Test constructor with argumentList=["--mode", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=False) def testConstructor_058(self): """ Test constructor with argumentString="--mode something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="--mode something", validate=False) def testConstructor_059(self): """ Test constructor with argumentList=["-m", "something", ], validate=False. """ self.failUnlessRaises(ValueError, Options, argumentList=["-m", "something", ], validate=False) def testConstructor_060(self): """ Test constructor with argumentString="-m something", validate=False. """ self.failUnlessRaises(ValueError, Options, argumentString="-m something", validate=False) def testConstructor_061(self): """ Test constructor with argumentList=["--mode", "631", ], validate=False. 
""" options = Options(argumentList=["--mode", "631", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_062(self): """ Test constructor with argumentString="--mode 631", validate=False. """ options = Options(argumentString="--mode 631", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_063(self): """ Test constructor with argumentList=["-m", "631", ], validate=False. 
""" options = Options(argumentList=["-m", "631", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_064(self): """ Test constructor with argumentString="-m 631", validate=False. """ options = Options(argumentString="-m 631", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0631, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_065(self): """ Test constructor with argumentList=["--output", ], validate=False. 
""" options = Options(argumentList=["--output", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_066(self): """ Test constructor with argumentString="--output", validate=False. """ options = Options(argumentString="--output", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_067(self): """ Test constructor with argumentList=["-O", ], validate=False. 
""" options = Options(argumentList=["-O", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_068(self): """ Test constructor with argumentString="-O", validate=False. """ options = Options(argumentString="-O", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_069(self): """ Test constructor with argumentList=["--debug", ], validate=False. 
""" options = Options(argumentList=["--debug", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_070(self): """ Test constructor with argumentString="--debug", validate=False. """ options = Options(argumentString="--debug", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_071(self): """ Test constructor with argumentList=["-d", ], validate=False. 
""" options = Options(argumentList=["-d", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_072(self): """ Test constructor with argumentString="-d", validate=False. """ options = Options(argumentString="-d", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_073(self): """ Test constructor with argumentList=["--stack", ], validate=False. 
""" options = Options(argumentList=["--stack", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_074(self): """ Test constructor with argumentString="--stack", validate=False. """ options = Options(argumentString="--stack", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_075(self): """ Test constructor with argumentList=["-s", ], validate=False. 
""" options = Options(argumentList=["-s", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual([], options.actions) def testConstructor_076(self): """ Test constructor with argumentString="-s", validate=False. """ options = Options(argumentString="-s", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(True, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_077(self): """ Test constructor with argumentList=["all", ], validate=False. 
""" options = Options(argumentList=["all", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["all", ], options.actions) def testConstructor_078(self): """ Test constructor with argumentString="all", validate=False. """ options = Options(argumentString="all", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["all", ], options.actions) def testConstructor_079(self): """ Test constructor with argumentList=["collect", ], validate=False. 
""" options = Options(argumentList=["collect", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", ], options.actions) def testConstructor_080(self): """ Test constructor with argumentString="collect", validate=False. """ options = Options(argumentString="collect", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", ], options.actions) def testConstructor_081(self): """ Test constructor with argumentList=["stage", ], validate=False. 
""" options = Options(argumentList=["stage", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["stage", ], options.actions) def testConstructor_082(self): """ Test constructor with argumentString="stage", validate=False. """ options = Options(argumentString="stage", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["stage", ], options.actions) def testConstructor_083(self): """ Test constructor with argumentList=["store", ], validate=False. 
""" options = Options(argumentList=["store", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["store", ], options.actions) def testConstructor_084(self): """ Test constructor with argumentString="store", validate=False. """ options = Options(argumentString="store", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["store", ], options.actions) def testConstructor_085(self): """ Test constructor with argumentList=["purge", ], validate=False. 
""" options = Options(argumentList=["purge", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["purge", ], options.actions) def testConstructor_086(self): """ Test constructor with argumentString="purge", validate=False. """ options = Options(argumentString="purge", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["purge", ], options.actions) def testConstructor_087(self): """ Test constructor with argumentList=["rebuild", ], validate=False. 
""" options = Options(argumentList=["rebuild", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["rebuild", ], options.actions) def testConstructor_088(self): """ Test constructor with argumentString="rebuild", validate=False. """ options = Options(argumentString="rebuild", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["rebuild", ], options.actions) def testConstructor_089(self): """ Test constructor with argumentList=["validate", ], validate=False. 
""" options = Options(argumentList=["validate", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["validate", ], options.actions) def testConstructor_090(self): """ Test constructor with argumentString="validate", validate=False. """ options = Options(argumentString="validate", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["validate", ], options.actions) def testConstructor_091(self): """ Test constructor with argumentList=["collect", "all", ], validate=False. 
""" options = Options(argumentList=["collect", "all", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "all", ], options.actions) def testConstructor_092(self): """ Test constructor with argumentString="collect all", validate=False. """ options = Options(argumentString="collect all", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "all", ], options.actions) def testConstructor_093(self): """ Test constructor with argumentList=["collect", "rebuild", ], validate=False. 
""" options = Options(argumentList=["collect", "rebuild", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "rebuild", ], options.actions) def testConstructor_094(self): """ Test constructor with argumentString="collect rebuild", validate=False. """ options = Options(argumentString="collect rebuild", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "rebuild", ], options.actions) def testConstructor_095(self): """ Test constructor with argumentList=["collect", "validate", ], validate=False. 
""" options = Options(argumentList=["collect", "validate", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "validate", ], options.actions) def testConstructor_096(self): """ Test constructor with argumentString="collect validate", validate=False. """ options = Options(argumentString="collect validate", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "validate", ], options.actions) def testConstructor_097(self): """ Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=False. 
""" options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "stage", ], options.actions) def testConstructor_098(self): """ Test constructor with argumentString="-d --verbose -O --mode 600 collect stage", validate=False. """ options = Options(argumentString="-d --verbose -O --mode 600 collect stage", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(True, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(0600, options.mode) self.failUnlessEqual(True, options.output) self.failUnlessEqual(True, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual(["collect", "stage", ], options.actions) def testConstructor_099(self): """ Test constructor with argumentList=[], validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentList=[], validate=True) def testConstructor_100(self): """ Test constructor with argumentString="", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="", validate=True) def testConstructor_101(self): """ Test constructor with argumentList=["--help", ], validate=True. """ options = Options(argumentList=["--help", ], validate=True) self.failUnlessEqual(True, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_102(self): """ Test constructor with argumentString="--help", validate=True. 
      """
      options = Options(argumentString="--help", validate=True)
      self.failUnlessEqual(True, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_103(self):
      """
      Test constructor with argumentList=["-h", ], validate=True.
      """
      options = Options(argumentList=["-h", ], validate=True)
      self.failUnlessEqual(True, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_104(self):
      """
      Test constructor with argumentString="-h", validate=True.
      """
      options = Options(argumentString="-h", validate=True)
      self.failUnlessEqual(True, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_105(self):
      """
      Test constructor with argumentList=["--version", ], validate=True.
      """
      options = Options(argumentList=["--version", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(True, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_106(self):
      """
      Test constructor with argumentString="--version", validate=True.
      """
      options = Options(argumentString="--version", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(True, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_107(self):
      """
      Test constructor with argumentList=["-V", ], validate=True.
      """
      options = Options(argumentList=["-V", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(True, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_108(self):
      """
      Test constructor with argumentString="-V", validate=True.
      """
      options = Options(argumentString="-V", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(True, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual([], options.actions)

   def testConstructor_109(self):
      """
      Test constructor with argumentList=["--verbose", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--verbose", ], validate=True)

   def testConstructor_110(self):
      """
      Test constructor with argumentString="--verbose", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--verbose", validate=True)

   def testConstructor_111(self):
      """
      Test constructor with argumentList=["-b", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-b", ], validate=True)

   def testConstructor_112(self):
      """
      Test constructor with argumentString="-b", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-b", validate=True)

   def testConstructor_113(self):
      """
      Test constructor with argumentList=["--quiet", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--quiet", ], validate=True)

   def testConstructor_114(self):
      """
      Test constructor with argumentString="--quiet", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--quiet", validate=True)

   def testConstructor_115(self):
      """
      Test constructor with argumentList=["-q", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-q", ], validate=True)

   def testConstructor_116(self):
      """
      Test constructor with argumentString="-q", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-q", validate=True)

   def testConstructor_117(self):
      """
      Test constructor with argumentList=["--config", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["--config", ], validate=True)

   def testConstructor_118(self):
      """
      Test constructor with argumentString="--config", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="--config", validate=True)

   def testConstructor_119(self):
      """
      Test constructor with argumentList=["-c", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["-c", ], validate=True)

   def testConstructor_120(self):
      """
      Test constructor with argumentString="-c", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="-c", validate=True)

   def testConstructor_121(self):
      """
      Test constructor with argumentList=["--config", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--config", "something", ], validate=True)

   def testConstructor_122(self):
      """
      Test constructor with argumentString="--config something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--config something", validate=True)

   def testConstructor_123(self):
      """
      Test constructor with argumentList=["-c", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-c", "something", ], validate=True)

   def testConstructor_124(self):
      """
      Test constructor with argumentString="-c something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-c something", validate=True)

   def testConstructor_125(self):
      """
      Test constructor with argumentList=["--full", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--full", ], validate=True)

   def testConstructor_126(self):
      """
      Test constructor with argumentString="--full", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--full", validate=True)

   def testConstructor_127(self):
      """
      Test constructor with argumentList=["-f", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-f", ], validate=True)

   def testConstructor_128(self):
      """
      Test constructor with argumentString="-f", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-f", validate=True)

   def testConstructor_129(self):
      """
      Test constructor with argumentList=["--logfile", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["--logfile", ], validate=True)

   def testConstructor_130(self):
      """
      Test constructor with argumentString="--logfile", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="--logfile", validate=True)

   def testConstructor_131(self):
      """
      Test constructor with argumentList=["-l", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["-l", ], validate=True)

   def testConstructor_132(self):
      """
      Test constructor with argumentString="-l", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="-l", validate=True)

   def testConstructor_133(self):
      """
      Test constructor with argumentList=["--logfile", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--logfile", "something", ], validate=True)

   def testConstructor_134(self):
      """
      Test constructor with argumentString="--logfile something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--logfile something", validate=True)

   def testConstructor_135(self):
      """
      Test constructor with argumentList=["-l", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-l", "something", ], validate=True)

   def testConstructor_136(self):
      """
      Test constructor with argumentString="-l something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-l something", validate=True)

   def testConstructor_137(self):
      """
      Test constructor with argumentList=["--owner", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["--owner", ], validate=True)

   def testConstructor_138(self):
      """
      Test constructor with argumentString="--owner", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="--owner", validate=True)

   def testConstructor_139(self):
      """
      Test constructor with argumentList=["-o", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["-o", ], validate=True)

   def testConstructor_140(self):
      """
      Test constructor with argumentString="-o", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="-o", validate=True)

   def testConstructor_141(self):
      """
      Test constructor with argumentList=["--owner", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "something", ], validate=True)

   def testConstructor_142(self):
      """
      Test constructor with argumentString="--owner something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--owner something", validate=True)

   def testConstructor_143(self):
      """
      Test constructor with argumentList=["-o", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-o", "something", ], validate=True)

   def testConstructor_144(self):
      """
      Test constructor with argumentString="-o something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-o something", validate=True)

   def testConstructor_145(self):
      """
      Test constructor with argumentList=["--owner", "a:b", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--owner", "a:b", ], validate=True)

   def testConstructor_146(self):
      """
      Test constructor with argumentString="--owner a:b", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--owner a:b", validate=True)

   def testConstructor_147(self):
      """
      Test constructor with argumentList=["-o", "a:b", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-o", "a:b", ], validate=True)

   def testConstructor_148(self):
      """
      Test constructor with argumentString="-o a:b", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-o a:b", validate=True)

   def testConstructor_149(self):
      """
      Test constructor with argumentList=["--mode", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["--mode", ], validate=True)

   def testConstructor_150(self):
      """
      Test constructor with argumentString="--mode", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="--mode", validate=True)

   def testConstructor_151(self):
      """
      Test constructor with argumentList=["-m", ], validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentList=["-m", ], validate=True)

   def testConstructor_152(self):
      """
      Test constructor with argumentString="-m", validate=True.
      """
      self.failUnlessRaises(GetoptError, Options, argumentString="-m", validate=True)

   def testConstructor_153(self):
      """
      Test constructor with argumentList=["--mode", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "something", ], validate=True)

   def testConstructor_154(self):
      """
      Test constructor with argumentString="--mode something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--mode something", validate=True)

   def testConstructor_155(self):
      """
      Test constructor with argumentList=["-m", "something", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-m", "something", ], validate=True)

   def testConstructor_156(self):
      """
      Test constructor with argumentString="-m something", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-m something", validate=True)

   def testConstructor_157(self):
      """
      Test constructor with argumentList=["--mode", "631", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--mode", "631", ], validate=True)

   def testConstructor_158(self):
      """
      Test constructor with argumentString="--mode 631", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--mode 631", validate=True)

   def testConstructor_159(self):
      """
      Test constructor with argumentList=["-m", "631", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-m", "631", ], validate=True)

   def testConstructor_160(self):
      """
      Test constructor with argumentString="-m 631", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-m 631", validate=True)

   def testConstructor_161(self):
      """
      Test constructor with argumentList=["--output", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--output", ], validate=True)

   def testConstructor_162(self):
      """
      Test constructor with argumentString="--output", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--output", validate=True)

   def testConstructor_163(self):
      """
      Test constructor with argumentList=["-O", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-O", ], validate=True)

   def testConstructor_164(self):
      """
      Test constructor with argumentString="-O", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-O", validate=True)

   def testConstructor_165(self):
      """
      Test constructor with argumentList=["--debug", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--debug", ], validate=True)

   def testConstructor_166(self):
      """
      Test constructor with argumentString="--debug", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--debug", validate=True)

   def testConstructor_167(self):
      """
      Test constructor with argumentList=["-d", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-d", ], validate=True)

   def testConstructor_168(self):
      """
      Test constructor with argumentString="-d", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-d", validate=True)

   def testConstructor_169(self):
      """
      Test constructor with argumentList=["--stack", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["--stack", ], validate=True)

   def testConstructor_170(self):
      """
      Test constructor with argumentString="--stack", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="--stack", validate=True)

   def testConstructor_171(self):
      """
      Test constructor with argumentList=["-s", ], validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentList=["-s", ], validate=True)

   def testConstructor_172(self):
      """
      Test constructor with argumentString="-s", validate=True.
      """
      self.failUnlessRaises(ValueError, Options, argumentString="-s", validate=True)

   def testConstructor_173(self):
      """
      Test constructor with argumentList=["all", ], validate=True.
      """
      options = Options(argumentList=["all", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["all", ], options.actions)

   def testConstructor_174(self):
      """
      Test constructor with argumentString="all", validate=True.
      """
      options = Options(argumentString="all", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["all", ], options.actions)

   def testConstructor_175(self):
      """
      Test constructor with argumentList=["collect", ], validate=True.
      """
      options = Options(argumentList=["collect", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["collect", ], options.actions)

   def testConstructor_176(self):
      """
      Test constructor with argumentString="collect", validate=True.
      """
      options = Options(argumentString="collect", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["collect", ], options.actions)

   def testConstructor_177(self):
      """
      Test constructor with argumentList=["stage", ], validate=True.
      """
      options = Options(argumentList=["stage", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["stage", ], options.actions)

   def testConstructor_178(self):
      """
      Test constructor with argumentString="stage", validate=True.
      """
      options = Options(argumentString="stage", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["stage", ], options.actions)

   def testConstructor_179(self):
      """
      Test constructor with argumentList=["store", ], validate=True.
      """
      options = Options(argumentList=["store", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["store", ], options.actions)

   def testConstructor_180(self):
      """
      Test constructor with argumentString="store", validate=True.
      """
      options = Options(argumentString="store", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["store", ], options.actions)

   def testConstructor_181(self):
      """
      Test constructor with argumentList=["purge", ], validate=True.
      """
      options = Options(argumentList=["purge", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["purge", ], options.actions)

   def testConstructor_182(self):
      """
      Test constructor with argumentString="purge", validate=True.
      """
      options = Options(argumentString="purge", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["purge", ], options.actions)

   def testConstructor_183(self):
      """
      Test constructor with argumentList=["rebuild", ], validate=True.
      """
      options = Options(argumentList=["rebuild", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["rebuild", ], options.actions)

   def testConstructor_184(self):
      """
      Test constructor with argumentString="rebuild", validate=True.
      """
      options = Options(argumentString="rebuild", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["rebuild", ], options.actions)

   def testConstructor_185(self):
      """
      Test constructor with argumentList=["validate", ], validate=True.
      """
      options = Options(argumentList=["validate", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["validate", ], options.actions)

   def testConstructor_186(self):
      """
      Test constructor with argumentString="validate", validate=True.
      """
      options = Options(argumentString="validate", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(False, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(None, options.mode)
      self.failUnlessEqual(False, options.output)
      self.failUnlessEqual(False, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["validate", ], options.actions)

   def testConstructor_187(self):
      """
      Test constructor with argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=True.
      """
      options = Options(argumentList=["-d", "--verbose", "-O", "--mode", "600", "collect", "stage", ], validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(True, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(0600, options.mode)
      self.failUnlessEqual(True, options.output)
      self.failUnlessEqual(True, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["collect", "stage", ], options.actions)

   def testConstructor_188(self):
      """
      Test constructor with argumentString="-d --verbose -O --mode 600 collect stage", validate=True.
      """
      options = Options(argumentString="-d --verbose -O --mode 600 collect stage", validate=True)
      self.failUnlessEqual(False, options.help)
      self.failUnlessEqual(False, options.version)
      self.failUnlessEqual(True, options.verbose)
      self.failUnlessEqual(False, options.quiet)
      self.failUnlessEqual(None, options.config)
      self.failUnlessEqual(False, options.full)
      self.failUnlessEqual(False, options.managed)
      self.failUnlessEqual(False, options.managedOnly)
      self.failUnlessEqual(None, options.logfile)
      self.failUnlessEqual(None, options.owner)
      self.failUnlessEqual(0600, options.mode)
      self.failUnlessEqual(True, options.output)
      self.failUnlessEqual(True, options.debug)
      self.failUnlessEqual(False, options.stacktrace)
      self.failUnlessEqual(False, options.diagnostics)
      self.failUnlessEqual(["collect", "stage", ], options.actions)

   def testConstructor_189(self):
      """
      Test constructor with argumentList=["--managed", ], validate=False.
""" options = Options(argumentList=["--managed", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_190(self): """ Test constructor with argumentString="--managed", validate=False. """ options = Options(argumentString="--managed", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_191(self): """ Test constructor with argumentList=["-M", ], validate=False. 
""" options = Options(argumentList=["-M", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_192(self): """ Test constructor with argumentString="-M", validate=False. """ options = Options(argumentString="-M", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(True, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_193(self): """ Test constructor with argumentList=["--managed-only", ], validate=False. 
""" options = Options(argumentList=["--managed-only", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_194(self): """ Test constructor with argumentString="--managed-only", validate=False. """ options = Options(argumentString="--managed-only", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_195(self): """ Test constructor with argumentList=["-N", ], validate=False. 
""" options = Options(argumentList=["-N", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_196(self): """ Test constructor with argumentString="-N", validate=False. """ options = Options(argumentString="-N", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(True, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(False, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_197(self): """ Test constructor with argumentList=["--managed", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--managed", ], validate=True) def testConstructor_198(self): """ Test constructor with argumentString="--managed", validate=True. 
""" self.failUnlessRaises(ValueError, Options, argumentString="--managed", validate=True) def testConstructor_199(self): """ Test constructor with argumentList=["-M", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-M", ], validate=True) def testConstructor_200(self): """ Test constructor with argumentString="-M", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-M", validate=True) def testConstructor_201(self): """ Test constructor with argumentList=["--managed-only", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["--managed-only", ], validate=True) def testConstructor_202(self): """ Test constructor with argumentString="--managed-only", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="--managed-only", validate=True) def testConstructor_203(self): """ Test constructor with argumentList=["-N", ], validate=True. """ self.failUnlessRaises(ValueError, Options, argumentList=["-N", ], validate=True) def testConstructor_204(self): """ Test constructor with argumentString="-N", validate=True. """ self.failUnlessRaises(ValueError, Options, argumentString="-N", validate=True) def testConstructor_205(self): """ Test constructor with argumentList=["--diagnostics", ], validate=False. 
""" options = Options(argumentList=["--diagnostics", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_206(self): """ Test constructor with argumentString="--diagnostics", validate=False. """ options = Options(argumentString="--diagnostics", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_207(self): """ Test constructor with argumentList=["-D", ], validate=False. 
""" options = Options(argumentList=["-D", ], validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_208(self): """ Test constructor with argumentString="-D", validate=False. """ options = Options(argumentString="-D", validate=False) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_209(self): """ Test constructor with argumentList=["--diagnostics", ], validate=True. 
""" options = Options(argumentList=["--diagnostics", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_210(self): """ Test constructor with argumentString="--diagnostics", validate=True. """ options = Options(argumentString="--diagnostics", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_211(self): """ Test constructor with argumentList=["-D", ], validate=True. 
""" options = Options(argumentList=["-D", ], validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) def testConstructor_212(self): """ Test constructor with argumentString="-D", validate=True. """ options = Options(argumentString="-D", validate=True) self.failUnlessEqual(False, options.help) self.failUnlessEqual(False, options.version) self.failUnlessEqual(False, options.verbose) self.failUnlessEqual(False, options.quiet) self.failUnlessEqual(None, options.config) self.failUnlessEqual(False, options.full) self.failUnlessEqual(False, options.managed) self.failUnlessEqual(False, options.managedOnly) self.failUnlessEqual(None, options.logfile) self.failUnlessEqual(None, options.owner) self.failUnlessEqual(None, options.mode) self.failUnlessEqual(False, options.output) self.failUnlessEqual(False, options.debug) self.failUnlessEqual(False, options.stacktrace) self.failUnlessEqual(True, options.diagnostics) self.failUnlessEqual([], options.actions) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes at defaults. 
""" options1 = Options() options2 = Options() self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes filled in and same. """ options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failUnlessEqual(options1, options2) self.failUnless(options1 == options2) self.failUnless(not options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(options1 >= options2) self.failUnless(not options1 != options2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes filled in, help different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = False options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes filled in, version different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = False options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_005(self): """ Test comparison of two identical objects, all attributes filled in, verbose different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = False options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_006(self): """ Test comparison of two identical objects, all attributes filled in, quiet different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = False options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_007(self): """ Test comparison of two identical objects, all attributes filled in, config different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "whatever" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_008(self): """ Test comparison of two identical objects, all attributes filled in, full different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = False options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_009(self): """ Test comparison of two identical objects, all attributes filled in, logfile different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "stuff" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_010(self): """ Test comparison of two identical objects, all attributes filled in, owner different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("c", "d") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_011(self): """ Test comparison of two identical objects, all attributes filled in, mode different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = 0600 options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_012(self): """ Test comparison of two identical objects, all attributes filled in, output different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = False options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_013(self): """ Test comparison of two identical objects, all attributes filled in, debug different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = False options1.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(not options1 < options2) self.failUnless(not options1 <= options2) self.failUnless(options1 > options2) self.failUnless(options1 >= options2) self.failUnless(options1 != options2) def testComparison_014(self): """ Test comparison of two identical objects, all attributes filled in, stacktrace different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = True options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_015(self): """ Test comparison of two identical objects, all attributes filled in, managed different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = False options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_016(self): """ Test comparison of two identical objects, all attributes filled in, managedOnly different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = False options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = "631" options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = False options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) def testComparison_017(self): """ Test comparison of two identical objects, all attributes filled in, diagnostics different. 
""" options1 = Options() options2 = Options() options1.help = True options1.version = True options1.verbose = True options1.quiet = True options1.config = "config" options1.full = True options1.managed = True options1.managedOnly = True options1.logfile = "logfile" options1.owner = ("a", "b") options1.mode = 0631 options1.output = True options1.debug = True options1.stacktrace = False options1.diagnostics = False options1.actions = ["collect", ] options2.help = True options2.version = True options2.verbose = True options2.quiet = True options2.config = "config" options2.full = True options2.managed = True options2.managedOnly = True options2.logfile = "logfile" options2.owner = ("a", "b") options2.mode = 0631 options2.output = True options2.debug = True options2.stacktrace = False options2.diagnostics = True options2.actions = ["collect", ] self.failIfEqual(options1, options2) self.failUnless(not options1 == options2) self.failUnless(options1 < options2) self.failUnless(options1 <= options2) self.failUnless(not options1 > options2) self.failUnless(not options1 >= options2) self.failUnless(options1 != options2) ########################### # Test buildArgumentList() ########################### def testBuildArgumentList_001(self): """Test with no values set, validate=False.""" options = Options() argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual([], argumentList) def testBuildArgumentList_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--help", ], argumentList) def testBuildArgumentList_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--version", ], argumentList) def testBuildArgumentList_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True 
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--verbose", ], argumentList)

   def testBuildArgumentList_005(self):
      """Test with quiet set, validate=False."""
      options = Options()
      options.quiet = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--quiet", ], argumentList)

   def testBuildArgumentList_006(self):
      """Test with config set, validate=False."""
      options = Options()
      options.config = "stuff"
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--config", "stuff", ], argumentList)

   def testBuildArgumentList_007(self):
      """Test with full set, validate=False."""
      options = Options()
      options.full = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--full", ], argumentList)

   def testBuildArgumentList_008(self):
      """Test with logfile set, validate=False."""
      options = Options()
      options.logfile = "bogus"
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--logfile", "bogus", ], argumentList)

   def testBuildArgumentList_009(self):
      """Test with owner set, validate=False."""
      options = Options()
      options.owner = ("ken", "group")
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--owner", "ken:group", ], argumentList)

   def testBuildArgumentList_010(self):
      """Test with mode set, validate=False."""
      options = Options()
      options.mode = 0644
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--mode", "644", ], argumentList)

   def testBuildArgumentList_011(self):
      """Test with output set, validate=False."""
      options = Options()
      options.output = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--output", ], argumentList)

   def testBuildArgumentList_012(self):
      """Test with debug set, validate=False."""
      options = Options()
      options.debug = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--debug", ], argumentList)

   def testBuildArgumentList_013(self):
      """Test with stacktrace set, validate=False."""
      options = Options()
      options.stacktrace = True
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--stack", ], argumentList)

   def testBuildArgumentList_014(self):
      """Test with actions containing one item, validate=False."""
      options = Options()
      options.actions = [ "collect", ]
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["collect", ], argumentList)

   def testBuildArgumentList_015(self):
      """Test with actions containing multiple items, validate=False."""
      options = Options()
      options.actions = [ "collect", "stage", "store", "purge", ]
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["collect", "stage", "store", "purge", ], argumentList)

   def testBuildArgumentList_016(self):
      """Test with all values set, actions containing one item, validate=False."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.managed = True
      options.managedOnly = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", ]
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config",
                            "--full", "--managed", "--managed-only", "--logfile", "logfile",
                            "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack",
                            "--diagnostics", "collect", ], argumentList)

   def testBuildArgumentList_017(self):
      """Test with all values set, actions containing multiple items, validate=False."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.managed = True
      options.managedOnly = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", "stage", ]
      argumentList = options.buildArgumentList(validate=False)
      self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config",
                            "--full", "--managed", "--managed-only", "--logfile", "logfile",
                            "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack",
                            "--diagnostics", "collect", "stage", ], argumentList)

   def testBuildArgumentList_018(self):
      """Test with no values set, validate=True."""
      options = Options()
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_019(self):
      """Test with help set, validate=True."""
      options = Options()
      options.help = True
      argumentList = options.buildArgumentList(validate=True)
      self.failUnlessEqual(["--help", ], argumentList)

   def testBuildArgumentList_020(self):
      """Test with version set, validate=True."""
      options = Options()
      options.version = True
      argumentList = options.buildArgumentList(validate=True)
      self.failUnlessEqual(["--version", ], argumentList)

   def testBuildArgumentList_021(self):
      """Test with verbose set, validate=True."""
      options = Options()
      options.verbose = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_022(self):
      """Test with quiet set, validate=True."""
      options = Options()
      options.quiet = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_023(self):
      """Test with config set, validate=True."""
      options = Options()
      options.config = "stuff"
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_024(self):
      """Test with full set, validate=True."""
      options = Options()
      options.full = True
      self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True)

   def testBuildArgumentList_025(self):
"""Test with logfile set, validate=True.""" options = Options() options.logfile = "bogus" self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_026(self): """Test with owner set, validate=True.""" options = Options() options.owner = ("ken", "group") self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_027(self): """Test with mode set, validate=True.""" options = Options() options.mode = 0644 self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_028(self): """Test with output set, validate=True.""" options = Options() options.output = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_029(self): """Test with debug set, validate=True.""" options = Options() options.debug = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_030(self): """Test with stacktrace set, validate=True.""" options = Options() options.stacktrace = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_031(self): """Test with actions containing one item, validate=True.""" options = Options() options.actions = [ "collect", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["collect", ], argumentList) def testBuildArgumentList_032(self): """Test with actions containing multiple items, validate=True.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["collect", "stage", "store", "purge", ], argumentList) def testBuildArgumentList_033(self): """Test with all values set (except managed ones), actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = 
"config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", ], argumentList) def testBuildArgumentList_034(self): """Test with all values set (except managed ones), actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--help", "--version", "--verbose", "--quiet", "--config", "config", "--full", "--logfile", "logfile", "--owner", "a:b", "--mode", "631", "--output", "--debug", "--stack", "--diagnostics", "collect", "stage", ], argumentList) def testBuildArgumentList_035(self): """Test with managed set, validate=False.""" options = Options() options.managed = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--managed", ], argumentList) def testBuildArgumentList_036(self): """Test with managed set, validate=True.""" options = Options() options.managed = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_037(self): """Test with managedOnly set, validate=False.""" options = Options() options.managedOnly = True argumentList = options.buildArgumentList(validate=False) 
self.failUnlessEqual(["--managed-only", ], argumentList) def testBuildArgumentList_038(self): """Test with managedOnly set, validate=True.""" options = Options() options.managedOnly = True self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_039(self): """Test with all values set, actions containing one item, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", ] self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_040(self): """Test with all values set, actions containing multiple items, validate=True.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True options.managed = True options.managedOnly = True options.logfile = "logfile" options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] self.failUnlessRaises(ValueError, options.buildArgumentList, validate=True) def testBuildArgumentList_041(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentList = options.buildArgumentList(validate=False) self.failUnlessEqual(["--diagnostics", ], argumentList) def testBuildArgumentList_042(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentList = options.buildArgumentList(validate=True) self.failUnlessEqual(["--diagnostics", ], argumentList) ############################# # Test 
buildArgumentString() ############################# def testBuildArgumentString_001(self): """Test with no values set, validate=False.""" options = Options() argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("", argumentString) def testBuildArgumentString_002(self): """Test with help set, validate=False.""" options = Options() options.help = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--help ", argumentString) def testBuildArgumentString_003(self): """Test with version set, validate=False.""" options = Options() options.version = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--version ", argumentString) def testBuildArgumentString_004(self): """Test with verbose set, validate=False.""" options = Options() options.verbose = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--verbose ", argumentString) def testBuildArgumentString_005(self): """Test with quiet set, validate=False.""" options = Options() options.quiet = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--quiet ", argumentString) def testBuildArgumentString_006(self): """Test with config set, validate=False.""" options = Options() options.config = "stuff" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--config "stuff" ', argumentString) def testBuildArgumentString_007(self): """Test with full set, validate=False.""" options = Options() options.full = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--full ", argumentString) def testBuildArgumentString_008(self): """Test with logfile set, validate=False.""" options = Options() options.logfile = "bogus" argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--logfile "bogus" ', argumentString) def testBuildArgumentString_009(self): """Test with owner set, 
validate=False.""" options = Options() options.owner = ("ken", "group") argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--owner "ken:group" ', argumentString) def testBuildArgumentString_010(self): """Test with mode set, validate=False.""" options = Options() options.mode = 0644 argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('--mode 644 ', argumentString) def testBuildArgumentString_011(self): """Test with output set, validate=False.""" options = Options() options.output = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--output ", argumentString) def testBuildArgumentString_012(self): """Test with debug set, validate=False.""" options = Options() options.debug = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--debug ", argumentString) def testBuildArgumentString_013(self): """Test with stacktrace set, validate=False.""" options = Options() options.stacktrace = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--stack ", argumentString) def testBuildArgumentString_014(self): """Test with actions containing one item, validate=False.""" options = Options() options.actions = [ "collect", ] argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('"collect" ', argumentString) def testBuildArgumentString_015(self): """Test with actions containing multiple items, validate=False.""" options = Options() options.actions = [ "collect", "stage", "store", "purge", ] argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual('"collect" "stage" "store" "purge" ', argumentString) def testBuildArgumentString_016(self): """Test with all values set, actions containing one item, validate=False.""" options = Options() options.help = True options.version = True options.verbose = True options.quiet = True options.config = "config" options.full = True 
      options.managed = True
      options.managedOnly = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", ]
      argumentString = options.buildArgumentString(validate=False)
      self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --managed --managed-only --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" ', argumentString)

   def testBuildArgumentString_017(self):
      """Test with all values set, actions containing multiple items, validate=False."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", "stage", ]
      argumentString = options.buildArgumentString(validate=False)
      self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" "stage" ', argumentString)

   def testBuildArgumentString_018(self):
      """Test with no values set, validate=True."""
      options = Options()
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_019(self):
      """Test with help set, validate=True."""
      options = Options()
      options.help = True
      argumentString = options.buildArgumentString(validate=True)
      self.failUnlessEqual("--help ", argumentString)

   def testBuildArgumentString_020(self):
      """Test with version set, validate=True."""
      options = Options()
      options.version = True
      argumentString = options.buildArgumentString(validate=True)
      self.failUnlessEqual("--version ", argumentString)

   def testBuildArgumentString_021(self):
      """Test with verbose set, validate=True."""
      options = Options()
      options.verbose = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_022(self):
      """Test with quiet set, validate=True."""
      options = Options()
      options.quiet = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_023(self):
      """Test with config set, validate=True."""
      options = Options()
      options.config = "stuff"
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_024(self):
      """Test with full set, validate=True."""
      options = Options()
      options.full = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_025(self):
      """Test with logfile set, validate=True."""
      options = Options()
      options.logfile = "bogus"
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_026(self):
      """Test with owner set, validate=True."""
      options = Options()
      options.owner = ("ken", "group")
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_027(self):
      """Test with mode set, validate=True."""
      options = Options()
      options.mode = 0644
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_028(self):
      """Test with output set, validate=True."""
      options = Options()
      options.output = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_029(self):
      """Test with debug set, validate=True."""
      options = Options()
      options.debug = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_030(self):
      """Test with stacktrace set, validate=True."""
      options = Options()
      options.stacktrace = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_031(self):
      """Test with actions containing one item, validate=True."""
      options = Options()
      options.actions = [ "collect", ]
      argumentString = options.buildArgumentString(validate=True)
      self.failUnlessEqual('"collect" ', argumentString)

   def testBuildArgumentString_032(self):
      """Test with actions containing multiple items, validate=True."""
      options = Options()
      options.actions = [ "collect", "stage", "store", "purge", ]
      argumentString = options.buildArgumentString(validate=True)
      self.failUnlessEqual('"collect" "stage" "store" "purge" ', argumentString)

   def testBuildArgumentString_033(self):
      """Test with all values set (except managed ones), actions containing one item, validate=True."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", ]
      argumentString = options.buildArgumentString(validate=True)
      self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" ', argumentString)

   def testBuildArgumentString_034(self):
      """Test with all values set (except managed ones), actions containing multiple items, validate=True."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", "stage", ]
      argumentString = options.buildArgumentString(validate=True)
      self.failUnlessEqual('--help --version --verbose --quiet --config "config" --full --logfile "logfile" --owner "a:b" --mode 631 --output --debug --stack --diagnostics "collect" "stage" ', argumentString)

   def testBuildArgumentString_035(self):
      """Test with managed set, validate=False."""
      options = Options()
      options.managed = True
      argumentString = options.buildArgumentString(validate=False)
      self.failUnlessEqual("--managed ", argumentString)

   def testBuildArgumentString_036(self):
      """Test with managed set, validate=True."""
      options = Options()
      options.managed = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_037(self):
      """Test with managedOnly set, validate=False."""
      options = Options()
      options.managedOnly = True
      argumentString = options.buildArgumentString(validate=False)
      self.failUnlessEqual("--managed-only ", argumentString)

   def testBuildArgumentString_038(self):
      """Test with managedOnly set, validate=True."""
      options = Options()
      options.managedOnly = True
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_039(self):
      """Test with all values set, actions containing one item, validate=True."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.managed = True
      options.managedOnly = True
      options.logfile = "logfile"
      options.owner = ("a", "b")
      options.mode = "631"
      options.output = True
      options.debug = True
      options.stacktrace = True
      options.diagnostics = True
      options.actions = ["collect", ]
      self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True)

   def testBuildArgumentString_040(self):
      """Test with all values set, actions containing multiple items, validate=True."""
      options = Options()
      options.help = True
      options.version = True
      options.verbose = True
      options.quiet = True
      options.config = "config"
      options.full = True
      options.managed = True
      options.managedOnly = True
      options.logfile = "logfile"
options.owner = ("a", "b") options.mode = "631" options.output = True options.debug = True options.stacktrace = True options.diagnostics = True options.actions = ["collect", "stage", ] self.failUnlessRaises(ValueError, options.buildArgumentString, validate=True) def testBuildArgumentString_041(self): """Test with diagnostics set, validate=False.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=False) self.failUnlessEqual("--diagnostics ", argumentString) def testBuildArgumentString_042(self): """Test with diagnostics set, validate=True.""" options = Options() options.diagnostics = True argumentString = options.buildArgumentString(validate=True) self.failUnlessEqual("--diagnostics ", argumentString) ###################### # TestActionSet class ###################### class TestActionSet(unittest.TestCase): """Tests for the _ActionSet class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################################### # Test constructor, "index" order mode ####################################### def testActionSet_001(self): """ Test with actions=None, extensions=None. """ actions = None extensions = ExtensionsConfig(None, None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_002(self): """ Test with actions=[], extensions=None. """ actions = [] extensions = ExtensionsConfig(None, None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_003(self): """ Test with actions=[], extensions=[]. """ actions = [] extensions = ExtensionsConfig([], None) options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testActionSet_004(self): """ Test with actions=[ collect ], extensions=[]. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_005(self): """ Test with actions=[ stage ], extensions=[]. """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testActionSet_006(self): """ Test with actions=[ store ], extensions=[]. """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testActionSet_007(self): """ Test with actions=[ purge ], extensions=[]. 
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[0].function)

   def testActionSet_008(self):
      """
      Test with actions=[ all ], extensions=[].
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testActionSet_009(self):
      """
      Test with actions=[ rebuild ], extensions=[].
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("rebuild", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function)

   def testActionSet_010(self):
      """
      Test with actions=[ validate ], extensions=[].
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("validate", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function)

   def testActionSet_011(self):
      """
      Test with actions=[ collect, collect ], extensions=[].
      """
      actions = [ "collect", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

   def testActionSet_012(self):
      """
      Test with actions=[ collect, stage ], extensions=[].
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testActionSet_013(self):
      """
      Test with actions=[ collect, store ], extensions=[].
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_014(self):
      """
      Test with actions=[ collect, purge ], extensions=[].
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_015(self):
      """
      Test with actions=[ collect, all ], extensions=[].
      """
      actions = [ "collect", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_016(self):
      """
      Test with actions=[ collect, rebuild ], extensions=[].
      """
      actions = [ "collect", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_017(self):
      """
      Test with actions=[ collect, validate ], extensions=[].
      """
      actions = [ "collect", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_018(self):
      """
      Test with actions=[ stage, collect ], extensions=[].
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testActionSet_019(self):
      """
      Test with actions=[ stage, stage ], extensions=[].
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testActionSet_020(self):
      """
      Test with actions=[ stage, store ], extensions=[].
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_021(self):
      """
      Test with actions=[ stage, purge ], extensions=[].
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_022(self):
      """
      Test with actions=[ stage, all ], extensions=[].
      """
      actions = [ "stage", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_023(self):
      """
      Test with actions=[ stage, rebuild ], extensions=[].
      """
      actions = [ "stage", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_024(self):
      """
      Test with actions=[ stage, validate ], extensions=[].
      """
      actions = [ "stage", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_025(self):
      """
      Test with actions=[ store, collect ], extensions=[].
      """
      actions = [ "store", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_026(self):
      """
      Test with actions=[ store, stage ], extensions=[].
      """
      actions = [ "store", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_027(self):
      """
      Test with actions=[ store, store ], extensions=[].
      """
      actions = [ "store", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_028(self):
      """
      Test with actions=[ store, purge ], extensions=[].
      """
      actions = [ "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_029(self):
      """
      Test with actions=[ store, all ], extensions=[].
      """
      actions = [ "store", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_030(self):
      """
      Test with actions=[ store, rebuild ], extensions=[].
      """
      actions = [ "store", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_031(self):
      """
      Test with actions=[ store, validate ], extensions=[].
      """
      actions = [ "store", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_032(self):
      """
      Test with actions=[ purge, collect ], extensions=[].
      """
      actions = [ "purge", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_033(self):
      """
      Test with actions=[ purge, stage ], extensions=[].
      """
      actions = [ "purge", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_034(self):
      """
      Test with actions=[ purge, store ], extensions=[].
      """
      actions = [ "purge", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_035(self):
      """
      Test with actions=[ purge, purge ], extensions=[].
      """
      actions = [ "purge", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_036(self):
      """
      Test with actions=[ purge, all ], extensions=[].
      """
      actions = [ "purge", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_037(self):
      """
      Test with actions=[ purge, rebuild ], extensions=[].
      """
      actions = [ "purge", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_038(self):
      """
      Test with actions=[ purge, validate ], extensions=[].
      """
      actions = [ "purge", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_039(self):
      """
      Test with actions=[ all, collect ], extensions=[].
      """
      actions = [ "all", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_040(self):
      """
      Test with actions=[ all, stage ], extensions=[].
      """
      actions = [ "all", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_041(self):
      """
      Test with actions=[ all, store ], extensions=[].
      """
      actions = [ "all", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_042(self):
      """
      Test with actions=[ all, purge ], extensions=[].
      """
      actions = [ "all", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_043(self):
      """
      Test with actions=[ all, all ], extensions=[].
      """
      actions = [ "all", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_044(self):
      """
      Test with actions=[ all, rebuild ], extensions=[].
      """
      actions = [ "all", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_045(self):
      """
      Test with actions=[ all, validate ], extensions=[].
      """
      actions = [ "all", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_046(self):
      """
      Test with actions=[ rebuild, collect ], extensions=[].
      """
      actions = [ "rebuild", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_047(self):
      """
      Test with actions=[ rebuild, stage ], extensions=[].
      """
      actions = [ "rebuild", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_048(self):
      """
      Test with actions=[ rebuild, store ], extensions=[].
      """
      actions = [ "rebuild", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_049(self):
      """
      Test with actions=[ rebuild, purge ], extensions=[].
      """
      actions = [ "rebuild", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_050(self):
      """
      Test with actions=[ rebuild, all ], extensions=[].
      """
      actions = [ "rebuild", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_051(self):
      """
      Test with actions=[ rebuild, rebuild ], extensions=[].
      """
      actions = [ "rebuild", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_052(self):
      """
      Test with actions=[ rebuild, validate ], extensions=[].
      """
      actions = [ "rebuild", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_053(self):
      """
      Test with actions=[ validate, collect ], extensions=[].
      """
      actions = [ "validate", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_054(self):
      """
      Test with actions=[ validate, stage ], extensions=[].
      """
      actions = [ "validate", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_055(self):
      """
      Test with actions=[ validate, store ], extensions=[].
      """
      actions = [ "validate", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_056(self):
      """
      Test with actions=[ validate, purge ], extensions=[].
      """
      actions = [ "validate", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_057(self):
      """
      Test with actions=[ validate, all ], extensions=[].
      """
      actions = [ "validate", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_058(self):
      """
      Test with actions=[ validate, rebuild ], extensions=[].
      """
      actions = [ "validate", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_059(self):
      """
      Test with actions=[ validate, validate ], extensions=[].
      """
      actions = [ "validate", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_060(self):
      """
      Test with actions=[ bogus ], extensions=[].
      """
      actions = [ "bogus", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_061(self):
      """
      Test with actions=[ bogus, collect ], extensions=[].
      """
      actions = [ "bogus", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_062(self):
      """
      Test with actions=[ bogus, stage ], extensions=[].
      """
      actions = [ "bogus", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_063(self):
      """
      Test with actions=[ bogus, store ], extensions=[].
      """
      actions = [ "bogus", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_064(self):
      """
      Test with actions=[ bogus, purge ], extensions=[].
      """
      actions = [ "bogus", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_065(self):
      """
      Test with actions=[ bogus, all ], extensions=[].
      """
      actions = [ "bogus", "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_066(self):
      """
      Test with actions=[ bogus, rebuild ], extensions=[].
      """
      actions = [ "bogus", "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_067(self):
      """
      Test with actions=[ bogus, validate ], extensions=[].
      """
      actions = [ "bogus", "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_068(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

   def testActionSet_069(self):
      """
      Test with actions=[ stage, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "stage", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testActionSet_070(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_071(self):
      """
      Test with actions=[ purge, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "purge", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_072(self):
      """
      Test with actions=[ all, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "all", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_073(self):
      """
      Test with actions=[ rebuild, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "rebuild", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_074(self):
      """
      Test with actions=[ validate, one ], extensions=[ (one, index 50) ].
      """
      actions = [ "validate", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_075(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(150, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_076(self):
      """
      Test with actions=[ stage, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "stage", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testActionSet_077(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_078(self):
      """
      Test with actions=[ purge, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "purge", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_079(self):
      """
      Test with actions=[ all, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "all", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_080(self):
      """
      Test with actions=[ rebuild, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "rebuild", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_081(self):
      """
      Test with actions=[ validate, one ], extensions=[ (one, index 150) ].
      """
      actions = [ "validate", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_082(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(250, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_083(self):
      """
      Test with actions=[ stage, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "stage", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(250, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_084(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(250, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testActionSet_085(self):
      """
      Test with actions=[ purge, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "purge", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(250, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_086(self):
      """
      Test with actions=[ all, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "all", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_087(self):
      """
      Test with actions=[ rebuild, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "rebuild", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_088(self):
      """
      Test with actions=[ validate, one ], extensions=[ (one, index 250) ].
      """
      actions = [ "validate", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 250), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_089(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(350, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_090(self):
      """
      Test with actions=[ stage, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "stage", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(350, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_091(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)
      self.failUnlessEqual(350, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_092(self):
      """
      Test with actions=[ purge, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "purge", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(350, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testActionSet_093(self):
      """
      Test with actions=[ all, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "all", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_094(self):
      """
      Test with actions=[ rebuild, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "rebuild", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_095(self):
      """
      Test with actions=[ validate, one ], extensions=[ (one, index 350) ].
      """
      actions = [ "validate", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 350), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_096(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(450, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_097(self):
      """
      Test with actions=[ stage, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "stage", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(450, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_098(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)
      self.failUnlessEqual(450, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_099(self):
      """
      Test with actions=[ purge, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "purge", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[0].function)
      self.failUnlessEqual(450, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_100(self):
      """
      Test with actions=[ all, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "all", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_101(self):
      """
      Test with actions=[ rebuild, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "rebuild", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_102(self):
      """
      Test with actions=[ validate, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "validate", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True)

   def testActionSet_103(self):
      """
      Test with actions=[ one, one ], extensions=[ (one, index 450) ].
      """
      actions = [ "one", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(450, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(450, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testActionSet_104(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[].
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testActionSet_105(self):
      """
      Test with actions=[ stage, purge, collect, store ], extensions=[].
      """
      actions = [ "stage", "purge", "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testActionSet_106(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)].
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 9)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)
      self.failUnlessEqual(150, actionSet.actionSet[2].index)
      self.failUnlessEqual("two", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(isfile, actionSet.actionSet[2].function)
      self.failUnlessEqual(200, actionSet.actionSet[3].index)
      self.failUnlessEqual("stage", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[3].function)
      self.failUnlessEqual(250, actionSet.actionSet[4].index)
      self.failUnlessEqual("three", actionSet.actionSet[4].name)
      self.failUnlessEqual(None, actionSet.actionSet[4].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[4].postHooks)
      self.failUnlessEqual(islink, actionSet.actionSet[4].function)
      self.failUnlessEqual(300, actionSet.actionSet[5].index)
      self.failUnlessEqual("store", actionSet.actionSet[5].name)
      self.failUnlessEqual(None, actionSet.actionSet[5].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[5].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[5].function)
      self.failUnlessEqual(350, actionSet.actionSet[6].index)
      self.failUnlessEqual("four", actionSet.actionSet[6].name)
      self.failUnlessEqual(None, actionSet.actionSet[6].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[6].postHooks)
      self.failUnlessEqual(isabs, actionSet.actionSet[6].function)
      self.failUnlessEqual(400, actionSet.actionSet[7].index)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(None, actionSet.actionSet[7].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[7].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual(450, actionSet.actionSet[8].index)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(None, actionSet.actionSet[8].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[8].postHooks)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testActionSet_107(self):
      """
      Test with actions=[ one, five, collect, store, three, stage, four, purge, two ],
      extensions=[ (index 50, 150, 250, 350, 450)].
      """
      actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 9)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)
      self.failUnlessEqual(150, actionSet.actionSet[2].index)
      self.failUnlessEqual("two", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(isfile, actionSet.actionSet[2].function)
      self.failUnlessEqual(200, actionSet.actionSet[3].index)
      self.failUnlessEqual("stage", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[3].function)
      self.failUnlessEqual(250, actionSet.actionSet[4].index)
      self.failUnlessEqual("three", actionSet.actionSet[4].name)
      self.failUnlessEqual(None, actionSet.actionSet[4].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[4].postHooks)
      self.failUnlessEqual(islink, actionSet.actionSet[4].function)
      self.failUnlessEqual(300, actionSet.actionSet[5].index)
      self.failUnlessEqual("store", actionSet.actionSet[5].name)
      self.failUnlessEqual(None, actionSet.actionSet[5].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[5].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[5].function)
      self.failUnlessEqual(350, actionSet.actionSet[6].index)
      self.failUnlessEqual("four", actionSet.actionSet[6].name)
      self.failUnlessEqual(None, actionSet.actionSet[6].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[6].postHooks)
      self.failUnlessEqual(isabs, actionSet.actionSet[6].function)
      self.failUnlessEqual(400, actionSet.actionSet[7].index)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(None, actionSet.actionSet[7].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[7].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual(450, actionSet.actionSet[8].index)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(None, actionSet.actionSet[8].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[8].postHooks)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testActionSet_108(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ].
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_109(self): """ Test with actions=[ collect ], extensions=[], hooks=[] """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_110(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'stage' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_111(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'stage' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PostActionHook("stage", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_112(self): """ Test with actions=[ collect ], extensions=[], pre-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_113(self): """ Test with actions=[ collect ], extensions=[], post-hook on 'collect' action. """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_114(self): """ Test with actions=[ collect ], extensions=[], pre- and post-hook on 'collect' action. 
""" actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something1"), PostActionHook("collect", "something2") ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("collect", "something1"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("collect", "something2"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testActionSet_115(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], hooks=[] """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_116(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre-hook on "store" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_117(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], post-hook on "store" action. """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_118(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre-hook on "one" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_119(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], post-hook on "one" action. """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_120(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], pre- and post-hook on "one" action. 
""" actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension2"), PreActionHook("one", "extension1"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("one", "extension1"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension2"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testActionSet_121(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], hooks=[] """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_122(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "purge" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() 
options.hooks = [ PreActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_123(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "purge" action """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testActionSet_124(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "collect" action """ actions = [ "collect", "one", ] 
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
        options = OptionsConfig()
        options.hooks = [ PreActionHook("collect", "something"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(50, actionSet.actionSet[0].index)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual(100, actionSet.actionSet[1].index)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[1].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testActionSet_125(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "collect" action
        """
        actions = [ "collect", "one", ]
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
        options = OptionsConfig()
        options.hooks = [ PostActionHook("collect", "something"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(50, actionSet.actionSet[0].index)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual(100, actionSet.actionSet[1].index)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
        self.failUnlessEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[1].postHooks)
        self.failUnlessEqual(executeCollect,
                             actionSet.actionSet[1].function)

    def testActionSet_126(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, index 50) ], pre-hook on "one" action
        """
        actions = [ "collect", "one", ]
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
        options = OptionsConfig()
        options.hooks = [ PreActionHook("one", "extension"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(50, actionSet.actionSet[0].index)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual(100, actionSet.actionSet[1].index)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testActionSet_127(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, index 50) ], post-hook on "one" action
        """
        actions = [ "collect", "one", ]
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
        options = OptionsConfig()
        options.hooks = [ PostActionHook("one", "extension"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(50, actionSet.actionSet[0].index)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
        self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual(100, actionSet.actionSet[1].index)
        self.failUnlessEqual("collect",
                             actionSet.actionSet[1].name)
        self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testActionSet_128(self):
        """
        Test with actions=[ collect, one ], extensions=[ (one, index 50) ], set of various pre- and post hooks.
        """
        actions = [ "collect", "one", ]
        extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
        options = OptionsConfig()
        options.hooks = [ PostActionHook("one", "extension"),
                          PreActionHook("collect", "something1"),
                          PreActionHook("collect", "something2"),
                          PostActionHook("stage", "whatever1"),
                          PostActionHook("stage", "whatever2"), ]
        actionSet = _ActionSet(actions, extensions, options, None, False, True)
        self.failUnless(len(actionSet.actionSet) == 2)
        self.failUnlessEqual(50, actionSet.actionSet[0].index)
        self.failUnlessEqual("one", actionSet.actionSet[0].name)
        self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
        self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks)
        self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
        self.failUnlessEqual(100, actionSet.actionSet[1].index)
        self.failUnlessEqual("collect", actionSet.actionSet[1].name)
        self.failUnlessEqual([ PreActionHook("collect", "something1"), PreActionHook("collect", "something2") ], actionSet.actionSet[1].preHooks)
        self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
        self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

    def testActionSet_129(self):
        """
        Test with actions=[ stage, one ], extensions=[ (one, index 50) ], set of various pre- and post hooks.
""" actions = [ "stage", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual([ PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2") ], actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) ############################################ # Test constructor, "dependency" order mode ############################################ def testDependencyMode_001(self): """ Test with actions=None, extensions=None. """ actions = None extensions = ExtensionsConfig(None, "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_002(self): """ Test with actions=[], extensions=None. """ actions = [] extensions = ExtensionsConfig(None, "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_003(self): """ Test with actions=[], extensions=[]. 
""" actions = [] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_004(self): """ Test with actions=[ collect ], extensions=[]. """ actions = [ "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_005(self): """ Test with actions=[ stage ], extensions=[]. """ actions = [ "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_006(self): """ Test with actions=[ store ], extensions=[]. 
""" actions = [ "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_007(self): """ Test with actions=[ purge ], extensions=[]. """ actions = [ "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) def testDependencyMode_008(self): """ Test with actions=[ all ], extensions=[]. 
""" actions = [ "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHooks) self.failUnlessEqual(None, actionSet.actionSet[3].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_009(self): """ Test with actions=[ rebuild ], extensions=[]. 
""" actions = [ "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("rebuild", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function) def testDependencyMode_010(self): """ Test with actions=[ validate ], extensions=[]. """ actions = [ "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("validate", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function) def testDependencyMode_011(self): """ Test with actions=[ collect, collect ], extensions=[]. 
""" actions = [ "collect", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_012(self): """ Test with actions=[ collect, stage ], extensions=[]. """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_013(self): """ Test with actions=[ collect, store ], extensions=[]. 
""" actions = [ "collect", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_014(self): """ Test with actions=[ collect, purge ], extensions=[]. """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_015(self): """ Test with actions=[ collect, all ], extensions=[]. 
""" actions = [ "collect", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_016(self): """ Test with actions=[ collect, rebuild ], extensions=[]. """ actions = [ "collect", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_017(self): """ Test with actions=[ collect, validate ], extensions=[]. """ actions = [ "collect", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_018(self): """ Test with actions=[ stage, collect ], extensions=[]. """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_019(self): """ Test with actions=[ stage, stage ], extensions=[]. 
""" actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_020(self): """ Test with actions=[ stage, store ], extensions=[]. """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_021(self): """ Test with actions=[ stage, purge ], extensions=[]. 
""" actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_022(self): """ Test with actions=[ stage, all ], extensions=[]. """ actions = [ "stage", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_023(self): """ Test with actions=[ stage, rebuild ], extensions=[]. """ actions = [ "stage", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_024(self): """ Test with actions=[ stage, validate ], extensions=[]. """ actions = [ "stage", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_025(self): """ Test with actions=[ store, collect ], extensions=[]. 
""" actions = [ "store", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_026(self): """ Test with actions=[ store, stage ], extensions=[]. """ actions = [ "store", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_027(self): """ Test with actions=[ store, store ], extensions=[]. 
""" actions = [ "store", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(300, actionSet.actionSet[1].index) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_028(self): """ Test with actions=[ store, purge ], extensions=[]. """ actions = [ "store", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_029(self): """ Test with actions=[ store, all ], extensions=[]. 
""" actions = [ "store", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_030(self): """ Test with actions=[ store, rebuild ], extensions=[]. """ actions = [ "store", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_031(self): """ Test with actions=[ store, validate ], extensions=[]. """ actions = [ "store", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_032(self): """ Test with actions=[ purge, collect ], extensions=[]. """ actions = [ "purge", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_033(self): """ Test with actions=[ purge, stage ], extensions=[]. 
""" actions = [ "purge", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_034(self): """ Test with actions=[ purge, store ], extensions=[]. """ actions = [ "purge", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_035(self): """ Test with actions=[ purge, purge ], extensions=[]. 
""" actions = [ "purge", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_036(self): """ Test with actions=[ purge, all ], extensions=[]. """ actions = [ "purge", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_037(self): """ Test with actions=[ purge, rebuild ], extensions=[]. """ actions = [ "purge", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_038(self): """ Test with actions=[ purge, validate ], extensions=[]. """ actions = [ "purge", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_039(self): """ Test with actions=[ all, collect ], extensions=[]. 
""" actions = [ "all", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_040(self): """ Test with actions=[ all, stage ], extensions=[]. """ actions = [ "all", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_041(self): """ Test with actions=[ all, store ], extensions=[]. """ actions = [ "all", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_042(self): """ Test with actions=[ all, purge ], extensions=[]. """ actions = [ "all", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_043(self): """ Test with actions=[ all, all ], extensions=[]. """ actions = [ "all", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_044(self): """ Test with actions=[ all, rebuild ], extensions=[]. """ actions = [ "all", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_045(self): """ Test with actions=[ all, validate ], extensions=[]. 
""" actions = [ "all", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_046(self): """ Test with actions=[ rebuild, collect ], extensions=[]. """ actions = [ "rebuild", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_047(self): """ Test with actions=[ rebuild, stage ], extensions=[]. """ actions = [ "rebuild", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_048(self): """ Test with actions=[ rebuild, store ], extensions=[]. """ actions = [ "rebuild", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_049(self): """ Test with actions=[ rebuild, purge ], extensions=[]. """ actions = [ "rebuild", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_050(self): """ Test with actions=[ rebuild, all ], extensions=[]. """ actions = [ "rebuild", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_051(self): """ Test with actions=[ rebuild, rebuild ], extensions=[]. 
""" actions = [ "rebuild", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_052(self): """ Test with actions=[ rebuild, validate ], extensions=[]. """ actions = [ "rebuild", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_053(self): """ Test with actions=[ validate, collect ], extensions=[]. """ actions = [ "validate", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_054(self): """ Test with actions=[ validate, stage ], extensions=[]. """ actions = [ "validate", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_055(self): """ Test with actions=[ validate, store ], extensions=[]. """ actions = [ "validate", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_056(self): """ Test with actions=[ validate, purge ], extensions=[]. """ actions = [ "validate", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_057(self): """ Test with actions=[ validate, all ], extensions=[]. 
""" actions = [ "validate", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_058(self): """ Test with actions=[ validate, rebuild ], extensions=[]. """ actions = [ "validate", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_059(self): """ Test with actions=[ validate, validate ], extensions=[]. """ actions = [ "validate", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_060(self): """ Test with actions=[ bogus ], extensions=[]. """ actions = [ "bogus", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_061(self): """ Test with actions=[ bogus, collect ], extensions=[]. """ actions = [ "bogus", "collect", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_062(self): """ Test with actions=[ bogus, stage ], extensions=[]. """ actions = [ "bogus", "stage", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_063(self): """ Test with actions=[ bogus, store ], extensions=[]. 
""" actions = [ "bogus", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_064(self): """ Test with actions=[ bogus, purge ], extensions=[]. """ actions = [ "bogus", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_065(self): """ Test with actions=[ bogus, all ], extensions=[]. """ actions = [ "bogus", "all", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_066(self): """ Test with actions=[ bogus, rebuild ], extensions=[]. """ actions = [ "bogus", "rebuild", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_067(self): """ Test with actions=[ bogus, validate ], extensions=[]. """ actions = [ "bogus", "validate", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_068(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ]. 
""" actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_069(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ]. """ actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_070(self): """ Test with actions=[ store, one ], extensions=[ (one, before store) ]. 
""" actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_071(self): """ Test with actions=[ purge, one ], extensions=[ (one, before purge) ]. """ actions = [ "purge", "one", ] dependencies = ActionDependencies(["purge", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_072(self): """ Test with actions=[ all, one ], extensions=[ (one, before collect) ]. 
""" actions = [ "all", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_073(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, before collect) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_074(self): """ Test with actions=[ validate, one ], extensions=[ (one, before collect) ]. """ actions = [ "validate", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_075(self): """ Test with actions=[ collect, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_076(self): """ Test with actions=[ stage, one ], extensions=[ (one, after collect) ]. """ actions = [ "stage", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_077(self): """ Test with actions=[ store, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "store", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_078(self): """ Test with actions=[ purge, one ], extensions=[ (one, after collect) ]. """ actions = [ "purge", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_079(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_080(self): """ Test with actions=[ store, one ], extensions=[ (one, before stage ) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_081(self): """ Test with actions=[ purge, one ], extensions=[ (one, before stage) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["stage", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_082(self): """ Test with actions=[ all, one ], extensions=[ (one, after collect) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_083(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after collect) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_084(self): """ Test with actions=[ validate, one ], extensions=[ (one, after collect) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["collect", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_085(self): """ Test with actions=[ collect, one ], extensions=[ (one, after stage) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_086(self): """ Test with actions=[ stage, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_087(self): """ Test with actions=[ store, one ], extensions=[ (one, after stage) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_088(self): """ Test with actions=[ purge, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_089(self): """ Test with actions=[ collect, one ], extensions=[ (one, before store) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_090(self): """ Test with actions=[ stage, one ], extensions=[ (one, before store) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_091(self): """ Test with actions=[ store, one ], extensions=[ (one, before store) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_092(self): """ Test with actions=[ purge, one ], extensions=[ (one, before store) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_093(self): """ Test with actions=[ all, one ], extensions=[ (one, after stage) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_094(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after stage) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_095(self): """ Test with actions=[ validate, one ], extensions=[ (one, after stage) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["stage", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_096(self): """ Test with actions=[ collect, one ], extensions=[ (one, after store) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_097(self): """ Test with actions=[ stage, one ], extensions=[ (one, after store) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_098(self): """ Test with actions=[ store, one ], extensions=[ (one, after store) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("store", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[1].function) def testDependencyMode_099(self): """ Test with actions=[ purge, one ], extensions=[ (one, after store) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[1].function) def testDependencyMode_100(self): """ Test with actions=[ collect, one ], extensions=[ (one, before purge) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_101(self): """ Test with actions=[ stage, one ], extensions=[ (one, before purge) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_102(self): """ Test with actions=[ store, one ], extensions=[ (one, before purge) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_103(self): """ Test with actions=[ purge, one ], extensions=[ (one, before purge) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_104(self): """ Test with actions=[ all, one ], extensions=[ (one, after store) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_105(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after store) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies(["store", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_106(self): """ Test with actions=[ validate, one ], extensions=[ (one, after store) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(["store", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_107(self): """ Test with actions=[ collect, one ], extensions=[ (one, after purge) ]. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testDependencyMode_108(self): """ Test with actions=[ stage, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testDependencyMode_109(self): """ Test with actions=[ store, one ], extensions=[ (one, after purge) ]. """ actions = [ "store", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testDependencyMode_110(self): """ Test with actions=[ purge, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "purge", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_111(self): """ Test with actions=[ all, one ], extensions=[ (one, after purge) ]. """ actions = [ "all", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_112(self): """ Test with actions=[ rebuild, one ], extensions=[ (one, after purge) ]. """ actions = [ "rebuild", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_113(self): """ Test with actions=[ validate, one ], extensions=[ (one, after purge) ]. 
""" actions = [ "validate", "one", ] dependencies = ActionDependencies(None, ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_114(self): """ Test with actions=[ one, one ], extensions=[ (one, after purge) ]. """ actions = [ "one", "one", ] dependencies = ActionDependencies([], ["purge", ]) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[1].function) def testDependencyMode_115(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[]. 
""" actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHooks) self.failUnlessEqual(None, actionSet.actionSet[3].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_116(self): """ Test with actions=[ stage, purge, collect, store ], extensions=[]. 
""" actions = [ "stage", "purge", "collect", "store", ] extensions = ExtensionsConfig([], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHooks) self.failUnlessEqual(None, actionSet.actionSet[3].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testDependencyMode_117(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ one before collect, two before stage, etc. ]. 
""" actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], None) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies([], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) self.failUnlessEqual("two", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(isfile, actionSet.actionSet[2].function) self.failUnlessEqual("stage", actionSet.actionSet[3].name) self.failUnlessEqual(None, 
actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[3].function)
      self.failUnlessEqual("three", actionSet.actionSet[4].name)
      self.failUnlessEqual(None, actionSet.actionSet[4].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[4].postHooks)
      self.failUnlessEqual(islink, actionSet.actionSet[4].function)
      self.failUnlessEqual("store", actionSet.actionSet[5].name)
      self.failUnlessEqual(None, actionSet.actionSet[5].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[5].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[5].function)
      self.failUnlessEqual("four", actionSet.actionSet[6].name)
      self.failUnlessEqual(None, actionSet.actionSet[6].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[6].postHooks)
      self.failUnlessEqual(isabs, actionSet.actionSet[6].function)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(None, actionSet.actionSet[7].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[7].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(None, actionSet.actionSet[8].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[8].postHooks)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testDependencyMode_118(self):
      """
      Test with actions=[ one, five, collect, store, three, stage, four, purge, two ],
      extensions=[ one before collect, two before stage, etc. ].
      """
      actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ]
      dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], [])
      dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ])
      dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ])
      dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ])
      dependencies5 = ActionDependencies(None, ["collect", "stage", "store", "purge", ])
      eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1)
      eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2)
      eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3)
      eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4)
      eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5)
      extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 9)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)
      self.failUnlessEqual("two", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(isfile, actionSet.actionSet[2].function)
      self.failUnlessEqual("stage", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[3].function)
      self.failUnlessEqual("three", actionSet.actionSet[4].name)
      self.failUnlessEqual(None, actionSet.actionSet[4].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[4].postHooks)
      self.failUnlessEqual(islink, actionSet.actionSet[4].function)
      self.failUnlessEqual("store", actionSet.actionSet[5].name)
      self.failUnlessEqual(None, actionSet.actionSet[5].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[5].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[5].function)
      self.failUnlessEqual("four", actionSet.actionSet[6].name)
      self.failUnlessEqual(None, actionSet.actionSet[6].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[6].postHooks)
      self.failUnlessEqual(isabs, actionSet.actionSet[6].function)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(None, actionSet.actionSet[7].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[7].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(None, actionSet.actionSet[8].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[8].postHooks)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testDependencyMode_119(self):
      """
      Test with actions=[ one ], extensions=[ (one, before collect) ].
      """
      actions = [ "one", ]
      dependencies = ActionDependencies(["collect", ], [])
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
      options = OptionsConfig()
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)

   def testDependencyMode_120(self):
      """
      Test with actions=[ collect ], extensions=[], hooks=[]
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      options.hooks = []
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_121(self):
      """
      Test with actions=[ collect ], extensions=[], pre-hook on 'stage' action.
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      options.hooks = [ PreActionHook("stage", "something") ]
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_122(self):
      """
      Test with actions=[ collect ], extensions=[], post-hook on 'stage' action.
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      options.hooks = [ PostActionHook("stage", "something") ]
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_123(self):
      """
      Test with actions=[ collect ], extensions=[], pre-hook on 'collect' action.
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      options.hooks = [ PreActionHook("collect", "something") ]
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_124(self):
      """
      Test with actions=[ collect ], extensions=[], post-hook on 'collect' action.
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      options.hooks = [ PostActionHook("collect", "something") ]
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_125(self):
      """
      Test with actions=[ collect ], extensions=[], pre- and post-hook on 'collect' action.
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], "dependency")
      options = OptionsConfig()
      options.hooks = [ PreActionHook("collect", "something1"), PostActionHook("collect", "something2") ]
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual([ PreActionHook("collect", "something1"), ], actionSet.actionSet[0].preHooks)
      self.failUnlessEqual([ PostActionHook("collect", "something2"), ], actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testDependencyMode_126(self):
      """
      Test with actions=[ one ], extensions=[ (one, before collect) ], hooks=[]
      """
      actions = [ "one", ]
      dependencies = ActionDependencies(["collect", ], [])
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency")
      options = OptionsConfig()
      options.hooks = []
      actionSet = _ActionSet(actions, extensions, options, None, False, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)

   def testDependencyMode_127(self):
      """
      Test with actions=[ one ], extensions=[ (one, before collect) ], pre-hook on "store" action.
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_128(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], post-hook on "store" action. """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("store", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_129(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre-hook on "one" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_130(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], post-hook on "one" action. """ actions = [ "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_131(self): """ Test with actions=[ one ], extensions=[ (one, before collect) ], pre- and post-hook on "one" action. 
""" actions = [ "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension2"), PreActionHook("one", "extension1"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("one", "extension1"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension2"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) def testDependencyMode_132(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], hooks=[] """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_133(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], pre-hook on "purge" action """ actions = [ "collect", "one", ] dependencies = 
ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_134(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], post-hook on "purge" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("purge", "rm -f"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_135(self): """ Test with actions=[ 
collect, one ], extensions=[ (one, before collect) ], pre-hook on "collect" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual([ PreActionHook("collect", "something"), ], actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_136(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], post-hook on "collect" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("collect", "something"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) 
self.failUnlessEqual([ PostActionHook("collect", "something"), ], actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_137(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], pre-hook on "one" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PreActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual([ PreActionHook("one", "extension"), ], actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_138(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], post-hook on "one" action """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], []) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ 
PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_139a(self): """ Test with actions=[ collect, one ], extensions=[ (one, before collect) ], set of various pre- and post hooks. """ actions = [ "collect", "one", ] dependencies = ActionDependencies(["collect", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), PostActionHook("stage", "whatever"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failUnlessEqual([ PreActionHook("collect", "something1"), PreActionHook("collect", "something2"), ], actionSet.actionSet[1].preHooks) self.failUnlessEqual(None, actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function) def testDependencyMode_139b(self): """ Test with actions=[ stage, one ], extensions=[ (one, before stage) ], set of various pre- and post hooks. 
""" actions = [ "stage", "one", ] dependencies = ActionDependencies(["stage", ], None) extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", dependencies=dependencies), ], "dependency") options = OptionsConfig() options = OptionsConfig() options.hooks = [ PostActionHook("one", "extension"), PreActionHook("collect", "something1"), PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ] actionSet = _ActionSet(actions, extensions, options, None, False, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual([ PostActionHook("one", "extension"), ], actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(None, actionSet.actionSet[1].preHooks) self.failUnlessEqual([ PostActionHook("stage", "whatever1"), PostActionHook("stage", "whatever2"), ], actionSet.actionSet[1].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) def testDependencyMode_140(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], extensions= [recursive loop]. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "purge", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies(["one", ], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) def testDependencyMode_141(self): """ Test with actions=[ one, five, collect, store, three, stage, four, purge, two ], and one extension for which a dependency does not exist. 
""" actions = [ "one", "five", "collect", "store", "three", "stage", "four", "purge", "two", ] dependencies1 = ActionDependencies(["collect", "stage", "store", "purge", ], []) dependencies2 = ActionDependencies(["stage", "store", "purge", ], ["collect", ]) dependencies3 = ActionDependencies(["store", "bogus", ], ["collect", "stage", ]) dependencies4 = ActionDependencies(["purge", ], ["collect", "stage", "store", ]) dependencies5 = ActionDependencies([], ["collect", "stage", "store", "purge", ]) eaction1 = ExtendedAction("one", "os.path", "isdir", dependencies=dependencies1) eaction2 = ExtendedAction("two", "os.path", "isfile", dependencies=dependencies2) eaction3 = ExtendedAction("three", "os.path", "islink", dependencies=dependencies3) eaction4 = ExtendedAction("four", "os.path", "isabs", dependencies=dependencies4) eaction5 = ExtendedAction("five", "os.path", "exists", dependencies=dependencies5) extensions = ExtensionsConfig([ eaction1, eaction2, eaction3, eaction4, eaction5, ], "dependency") options = OptionsConfig() self.failUnlessRaises(ValueError, _ActionSet, actions, extensions, options, None, False, True) ######################################### # Test constructor, with managed peers ######################################### def testManagedPeer_001(self): """ Test with actions=[ collect ], extensions=[], peers=None, managed=True, local=True """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) def testManagedPeer_002(self): """ Test with actions=[ stage ], extensions=[], peers=None, managed=True, local=True """ actions = 
[ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(200, actionSet.actionSet[0].index) self.failUnlessEqual("stage", actionSet.actionSet[0].name) self.failUnlessEqual(executeStage, actionSet.actionSet[0].function) def testManagedPeer_003(self): """ Test with actions=[ store ], extensions=[], peers=None, managed=True, local=True """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(300, actionSet.actionSet[0].index) self.failUnlessEqual("store", actionSet.actionSet[0].name) self.failUnlessEqual(executeStore, actionSet.actionSet[0].function) def testManagedPeer_004(self): """ Test with actions=[ purge ], extensions=[], peers=None, managed=True, local=True """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failUnlessEqual(executePurge, actionSet.actionSet[0].function) def testManagedPeer_005(self): """ Test with actions=[ all ], extensions=[], peers=None, managed=True, local=True """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, 
options, None, True, True) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(200, actionSet.actionSet[1].index) self.failUnlessEqual("stage", actionSet.actionSet[1].name) self.failUnlessEqual(executeStage, actionSet.actionSet[1].function) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) self.failUnlessEqual(400, actionSet.actionSet[3].index) self.failUnlessEqual("purge", actionSet.actionSet[3].name) self.failUnlessEqual(executePurge, actionSet.actionSet[3].function) def testManagedPeer_006(self): """ Test with actions=[ rebuild ], extensions=[], peers=None, managed=True, local=True """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("rebuild", actionSet.actionSet[0].name) self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function) def testManagedPeer_007(self): """ Test with actions=[ validate ], extensions=[], peers=None, managed=True, local=True """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] actionSet = _ActionSet(actions, extensions, options, None, True, True) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(0, actionSet.actionSet[0].index) self.failUnlessEqual("validate", actionSet.actionSet[0].name) self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function) 
   def testManagedPeer_008(self):
      """
      Test with actions=[ collect, stage ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_009(self):
      """
      Test with actions=[ collect, store ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_010(self):
      """
      Test with actions=[ collect, purge ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_011(self):
      """
      Test with actions=[ stage, collect ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_012(self):
      """
      Test with actions=[ stage, stage ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_013(self):
      """
      Test with actions=[ stage, store ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_014(self):
      """
      Test with actions=[ stage, purge ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_015(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], peers=None, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

   def testManagedPeer_016(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], peers=None, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_017(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], peers=None, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(150, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testManagedPeer_018(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], peers=None, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_019(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], peers=None, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testManagedPeer_020(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], peers=None, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 9)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)
      self.failUnlessEqual(150, actionSet.actionSet[2].index)
      self.failUnlessEqual("two", actionSet.actionSet[2].name)
      self.failUnlessEqual(isfile, actionSet.actionSet[2].function)
      self.failUnlessEqual(200, actionSet.actionSet[3].index)
      self.failUnlessEqual("stage", actionSet.actionSet[3].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[3].function)
      self.failUnlessEqual(250, actionSet.actionSet[4].index)
      self.failUnlessEqual("three", actionSet.actionSet[4].name)
      self.failUnlessEqual(islink, actionSet.actionSet[4].function)
      self.failUnlessEqual(300, actionSet.actionSet[5].index)
      self.failUnlessEqual("store", actionSet.actionSet[5].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[5].function)
      self.failUnlessEqual(350, actionSet.actionSet[6].index)
      self.failUnlessEqual("four", actionSet.actionSet[6].name)
      self.failUnlessEqual(isabs, actionSet.actionSet[6].function)
      self.failUnlessEqual(400, actionSet.actionSet[7].index)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual(450, actionSet.actionSet[8].index)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testManagedPeer_021(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], peers=None, managed=True, local=True
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      actionSet = _ActionSet(actions, extensions, options, None, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)

   def testManagedPeer_022(self):
      """
      Test with actions=[ collect ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)

   def testManagedPeer_023(self):
      """
      Test with actions=[ stage ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)

   def testManagedPeer_024(self):
      """
      Test with actions=[ store ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)

   def testManagedPeer_025(self):
      """
      Test with actions=[ purge ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[0].function)

   def testManagedPeer_026(self):
      """
      Test with actions=[ all ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testManagedPeer_027(self):
      """
      Test with actions=[ rebuild ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("rebuild", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function)

   def testManagedPeer_028(self):
      """
      Test with actions=[ validate ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("validate", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function)

   def testManagedPeer_029(self):
      """
      Test with actions=[ collect, stage ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_030(self):
      """
      Test with actions=[ collect, store ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_031(self):
      """
      Test with actions=[ collect, purge ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_032(self):
      """
      Test with actions=[ stage, collect ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_033(self):
      """
      Test with actions=[ stage, stage ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_034(self):
      """
      Test with actions=[ stage, store ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_035(self):
      """
      Test with actions=[ stage, purge ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)

   def testManagedPeer_036(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)

   def testManagedPeer_037(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_038(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(150, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[1].function)

   def testManagedPeer_039(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=True
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_040(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], no peers, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[3].function)

   def testManagedPeer_041(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], no peers, managed=True, local=True
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 9)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[1].function)
      self.failUnlessEqual(150, actionSet.actionSet[2].index)
      self.failUnlessEqual("two", actionSet.actionSet[2].name)
      self.failUnlessEqual(isfile, actionSet.actionSet[2].function)
      self.failUnlessEqual(200, actionSet.actionSet[3].index)
      self.failUnlessEqual("stage", actionSet.actionSet[3].name)
      self.failUnlessEqual(executeStage, actionSet.actionSet[3].function)
      self.failUnlessEqual(250, actionSet.actionSet[4].index)
      self.failUnlessEqual("three", actionSet.actionSet[4].name)
      self.failUnlessEqual(islink, actionSet.actionSet[4].function)
      self.failUnlessEqual(300, actionSet.actionSet[5].index)
      self.failUnlessEqual("store", actionSet.actionSet[5].name)
      self.failUnlessEqual(executeStore, actionSet.actionSet[5].function)
      self.failUnlessEqual(350, actionSet.actionSet[6].index)
      self.failUnlessEqual("four", actionSet.actionSet[6].name)
      self.failUnlessEqual(isabs, actionSet.actionSet[6].function)
      self.failUnlessEqual(400, actionSet.actionSet[7].index)
      self.failUnlessEqual("purge", actionSet.actionSet[7].name)
      self.failUnlessEqual(executePurge, actionSet.actionSet[7].function)
      self.failUnlessEqual(450, actionSet.actionSet[8].index)
      self.failUnlessEqual("five", actionSet.actionSet[8].name)
      self.failUnlessEqual(exists, actionSet.actionSet[8].function)

   def testManagedPeer_042(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ], no peers, managed=True, local=True
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)

   def testManagedPeer_043(self):
      """
      Test with actions=[ collect ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_044(self):
      """
      Test with actions=[ stage ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_045(self):
      """
      Test with actions=[ store ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_046(self):
      """
      Test with actions=[ purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_047(self):
      """
      Test with actions=[ all ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_048(self):
      """
      Test with actions=[ rebuild ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_049(self):
      """
      Test with actions=[ validate ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_050(self):
      """
      Test with actions=[ collect, stage ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_051(self):
      """
      Test with actions=[ collect, store ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_052(self):
      """
      Test with actions=[ collect, purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_053(self):
      """
      Test with actions=[ stage, collect ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_054(self):
      """
      Test with actions=[ stage, stage ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_055(self):
      """
      Test with actions=[ stage, store ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_056(self):
      """
      Test with actions=[ stage, purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_057(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_058(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ], no peers, managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_059(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_060(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ], no peers, managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_061(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[], no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_062(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)], no peers, managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_063(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], no peers, managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_064(self): """ Test with actions=[ collect ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_065(self): """ Test with actions=[ stage ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def 
testManagedPeer_066(self): """ Test with actions=[ store ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_067(self): """ Test with actions=[ purge ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_068(self): """ Test with actions=[ all ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_069(self): """ Test with actions=[ rebuild ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "rebuild", ] 
extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_070(self): """ Test with actions=[ validate ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_071(self): """ Test with actions=[ collect, stage ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_072(self): """ Test with actions=[ collect, store ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ 
RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_073(self): """ Test with actions=[ collect, purge ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_074(self): """ Test with actions=[ stage, collect ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_075(self): """ Test with actions=[ stage, stage ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def 
testManagedPeer_076(self): """ Test with actions=[ stage, store ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_077(self): """ Test with actions=[ stage, purge ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_078(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], one peer (not managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_079(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], one peer (not managed), managed=True, local=False """ actions = [ 
"store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_080(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], one peer (not managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_081(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], one peer (not managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_082(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], one peer (not managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", 
] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_083(self): """ Test with actions=[ collect, stage, store, purge, one, two, three, four, five ], extensions=[ (index 50, 150, 250, 350, 450)], one peer (not managed), managed=True, local=False """ actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_084(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], one peer (not managed), managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) 
self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_085(self): """ Test with actions=[ collect ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_086(self): """ Test with actions=[ stage ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_087(self): """ Test with actions=[ store ], extensions=[], one peer (managed), 
managed=True, local=False """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_088(self): """ Test with actions=[ purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_089(self): """ Test with actions=[ all ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", 
"purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_090(self): """ Test with actions=[ rebuild ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ 
RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_091(self): """ Test with actions=[ validate ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_092(self): """ Test with actions=[ collect, stage ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def 
testManagedPeer_093(self): """ Test with actions=[ collect, store ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_094(self): """ Test with actions=[ collect, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) 
self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) def testManagedPeer_095(self): """ Test with actions=[ stage, collect ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) 
self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_096(self): """ Test with actions=[ stage, stage ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_097(self): """ Test with actions=[ stage, store ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_098(self): """ Test with actions=[ stage, purge ], extensions=[], one peer (managed), managed=True, local=False """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, 
options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_099(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual(100, 
actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_100(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], one peer (managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_101(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], one peer (managed), 
      managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(150, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_102(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ],
      one peer (managed), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_103(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[],
      one peer (managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_104(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)],
      one peer (managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[2].index)
      self.failUnlessEqual("purge", actionSet.actionSet[2].name)
      self.failIf(actionSet.actionSet[2].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[2].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand)

   def testManagedPeer_105(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_106(self):
      """
      Test with actions=[ collect ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_107(self):
      """
      Test with actions=[ stage ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_108(self):
      """
      Test with actions=[ store ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_109(self):
      """
      Test with actions=[ purge ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_110(self):
      """
      Test with actions=[ all ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_111(self):
      """
      Test with actions=[ rebuild ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_112(self):
      """
      Test with actions=[ validate ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_113(self):
      """
      Test with actions=[ collect, stage ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_114(self):
      """
      Test with actions=[ collect, store ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_115(self):
      """
      Test with actions=[ collect, purge ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_116(self):
      """
      Test with actions=[ stage, collect ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_117(self):
      """
      Test with actions=[ stage, stage ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_118(self):
      """
      Test with actions=[ stage, store ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 0)

   def testManagedPeer_119(self):
      """
      Test with actions=[ stage, purge ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_120(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_121(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 50) ],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_122(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 150) ],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(150, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_123(self):
      """
      Test with actions=[ store, one ], extensions=[ (one, index 150) ],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "store", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)

   def testManagedPeer_124(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)

   def testManagedPeer_125(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450)],
      two peers (one managed, one not), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[2].index) self.failUnlessEqual("purge", actionSet.actionSet[2].name) self.failIf(actionSet.actionSet[2].remotePeers is None) self.failUnless(len(actionSet.actionSet[2].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand) def testManagedPeer_126(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (one managed, one not), managed=True, local=False """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=False), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", 
actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) def testManagedPeer_127(self): """ Test with actions=[ collect ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_128(self): """ Test with actions=[ stage ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ 
"stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_129(self): """ Test with actions=[ store ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_130(self): """ Test with actions=[ purge ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) 
self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_131(self): """ Test with actions=[ all ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "all", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failIf(actionSet.actionSet is None) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", 
actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_132(self): """ Test with actions=[ rebuild ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "rebuild", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = 
PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_133(self): """ Test with actions=[ validate ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "validate", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_134(self): """ Test with actions=[ collect, stage ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "collect", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", 
actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_135(self): """ Test with actions=[ collect, store ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "collect", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) 
self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_136(self): """ Test with actions=[ collect, purge ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "collect", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, 
actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(400, actionSet.actionSet[1].index) self.failUnlessEqual("purge", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_137(self): """ Test with actions=[ stage, collect ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "stage", "collect", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", 
actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_138(self): """ Test with actions=[ stage, stage ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "stage", "stage", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_139(self): """ Test with actions=[ stage, store ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "stage", "store", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ 
RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 0) def testManagedPeer_140(self): """ Test with actions=[ stage, purge ], extensions=[], two peers (both managed), managed=True, local=False """ actions = [ "stage", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(400, actionSet.actionSet[0].index) self.failUnlessEqual("purge", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_141(self): """ 
Test with actions=[ collect, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", 
actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_142(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 1) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", 
actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) def testManagedPeer_143(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=False """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failIf(actionSet.actionSet[0].remotePeers is None) self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[0].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_144(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=False """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, False) 
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(150, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_145(self):
      """
      Test with actions=[ collect, stage, store, purge ], extensions=[],
      two peers (both managed), managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_146(self):
      """
      Test with actions=[ collect, stage, store, purge, one, two, three, four, five ],
      extensions=[ (index 50, 150, 250, 350, 450) ], two peers (both managed),
      managed=True, local=False
      """
      actions = [ "collect", "stage", "store", "purge", "one", "two", "three", "four", "five", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50),
                                      ExtendedAction("two", "os.path", "isfile", 150),
                                      ExtendedAction("three", "os.path", "islink", 250),
                                      ExtendedAction("four", "os.path", "isabs", 350),
                                      ExtendedAction("five", "os.path", "exists", 450), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[2].index)
      self.failUnlessEqual("purge", actionSet.actionSet[2].name)
      self.failIf(actionSet.actionSet[2].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[2].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[2].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[2].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[2].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[2].remotePeers[1].cbackCommand)

   def testManagedPeer_147(self):
      """
      Test with actions=[ one ], extensions=[ (one, index 50) ],
      two peers (both managed), managed=True, local=False
      """
      actions = [ "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, False)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failIf(actionSet.actionSet[0].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[0].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[0].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[0].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[0].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[0].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[0].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[0].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[0].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[0].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[0].remotePeers[1].cbackCommand)

   def testManagedPeer_148(self):
      """
      Test with actions=[ collect ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", None, "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_149(self):
      """
      Test with actions=[ stage ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)

   def testManagedPeer_150(self):
      """
      Test with actions=[ store ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(300, actionSet.actionSet[0].index)
      self.failUnlessEqual("store", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[0].function)

   def testManagedPeer_151(self):
      """
      Test with actions=[ purge ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(400, actionSet.actionSet[0].index)
      self.failUnlessEqual("purge", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)

   def testManagedPeer_152(self):
      """
      Test with actions=[ all ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "all", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failIf(actionSet.actionSet is None)
      self.failUnless(len(actionSet.actionSet) == 6)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(200, actionSet.actionSet[2].index)
      self.failUnlessEqual("stage", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[2].function)
      self.failUnlessEqual(300, actionSet.actionSet[3].index)
      self.failUnlessEqual("store", actionSet.actionSet[3].name)
      self.failUnlessEqual(None, actionSet.actionSet[3].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[3].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[3].function)
      self.failUnlessEqual(400, actionSet.actionSet[4].index)
      self.failUnlessEqual("purge", actionSet.actionSet[4].name)
      self.failUnlessEqual(None, actionSet.actionSet[4].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[4].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[4].function)
      self.failUnlessEqual(400, actionSet.actionSet[5].index)
      self.failUnlessEqual("purge", actionSet.actionSet[5].name)
      self.failIf(actionSet.actionSet[5].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[5].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[5].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[5].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[5].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[5].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[5].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[5].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[5].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[5].remotePeers[1].cbackCommand)

   def testManagedPeer_153(self):
      """
      Test with actions=[ rebuild ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "rebuild", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("rebuild", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeRebuild, actionSet.actionSet[0].function)

   def testManagedPeer_154(self):
      """
      Test with actions=[ validate ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "validate", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 1)
      self.failUnlessEqual(0, actionSet.actionSet[0].index)
      self.failUnlessEqual("validate", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeValidate, actionSet.actionSet[0].function)

   def testManagedPeer_155(self):
      """
      Test with actions=[ collect, stage ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "collect", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(200, actionSet.actionSet[2].index)
      self.failUnlessEqual("stage", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[2].function)

   def testManagedPeer_156(self):
      """
      Test with actions=[ collect, store ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "collect", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(300, actionSet.actionSet[2].index)
      self.failUnlessEqual("store", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[2].function)

   def testManagedPeer_157(self):
      """
      Test with actions=[ collect, purge ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "collect", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(400, actionSet.actionSet[2].index)
      self.failUnlessEqual("purge", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[2].function)
      self.failUnlessEqual(400, actionSet.actionSet[3].index)
      self.failUnlessEqual("purge", actionSet.actionSet[3].name)
      self.failIf(actionSet.actionSet[3].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand)

   def testManagedPeer_158(self):
      """
      Test with actions=[ stage, collect ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "stage", "collect", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(100, actionSet.actionSet[0].index)
      self.failUnlessEqual("collect", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function)
      self.failUnlessEqual(100, actionSet.actionSet[1].index)
      self.failUnlessEqual("collect", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(200, actionSet.actionSet[2].index)
      self.failUnlessEqual("stage", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[2].function)

   def testManagedPeer_159(self):
      """
      Test with actions=[ stage, stage ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "stage", "stage", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(200, actionSet.actionSet[1].index)
      self.failUnlessEqual("stage", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[1].function)

   def testManagedPeer_160(self):
      """
      Test with actions=[ stage, store ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "stage", "store", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 2)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(300, actionSet.actionSet[1].index)
      self.failUnlessEqual("store", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executeStore, actionSet.actionSet[1].function)

   def testManagedPeer_161(self):
      """
      Test with actions=[ stage, purge ], extensions=[], two peers (both managed),
      managed=True, local=True
      """
      actions = [ "stage", "purge", ]
      extensions = ExtensionsConfig([], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 3)
      self.failUnlessEqual(200, actionSet.actionSet[0].index)
      self.failUnlessEqual("stage", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(executeStage, actionSet.actionSet[0].function)
      self.failUnlessEqual(400, actionSet.actionSet[1].index)
      self.failUnlessEqual("purge", actionSet.actionSet[1].name)
      self.failUnlessEqual(None, actionSet.actionSet[1].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[1].postHooks)
      self.failUnlessEqual(executePurge, actionSet.actionSet[1].function)
      self.failUnlessEqual(400, actionSet.actionSet[2].index)
      self.failUnlessEqual("purge", actionSet.actionSet[2].name)
      self.failIf(actionSet.actionSet[2].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[2].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[2].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[2].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[2].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[2].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[2].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[2].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[2].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[2].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[2].remotePeers[1].cbackCommand)

   def testManagedPeer_162(self):
      """
      Test with actions=[ collect, one ], extensions=[ (one, index 50) ],
      two peers (both managed), managed=True, local=True
      """
      actions = [ "collect", "one", ]
      extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None)
      options = OptionsConfig()
      options.managedActions = [ "collect", "purge", "one", ]
      peers = PeersConfig()
      peers.localPeers = [ LocalPeer("local", "/collect"), ]
      peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True),
                            RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ]
      actionSet = _ActionSet(actions, extensions, options, peers, True, True)
      self.failUnless(len(actionSet.actionSet) == 4)
      self.failUnlessEqual(50, actionSet.actionSet[0].index)
      self.failUnlessEqual("one", actionSet.actionSet[0].name)
      self.failUnlessEqual(None, actionSet.actionSet[0].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[0].postHooks)
      self.failUnlessEqual(isdir, actionSet.actionSet[0].function)
      self.failUnlessEqual(50, actionSet.actionSet[1].index)
      self.failUnlessEqual("one", actionSet.actionSet[1].name)
      self.failIf(actionSet.actionSet[1].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser)
      self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand)
      self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand)
      self.failUnlessEqual(100, actionSet.actionSet[2].index)
      self.failUnlessEqual("collect", actionSet.actionSet[2].name)
      self.failUnlessEqual(None, actionSet.actionSet[2].preHooks)
      self.failUnlessEqual(None, actionSet.actionSet[2].postHooks)
      self.failUnlessEqual(executeCollect, actionSet.actionSet[2].function)
      self.failUnlessEqual(100, actionSet.actionSet[3].index)
      self.failUnlessEqual("collect", actionSet.actionSet[3].name)
      self.failIf(actionSet.actionSet[3].remotePeers is None)
      self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2)
      self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name)
      self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser)
      self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser)
      self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand)
      self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand)
      self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name)
      self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser)
      self.failUnlessEqual(None,
actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand) def testManagedPeer_163(self): """ Test with actions=[ store, one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) def testManagedPeer_164(self): """ Test with actions=[ collect, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=True """ actions = [ "collect", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 4) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", 
actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[2].index) self.failUnlessEqual("one", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[2].function) self.failUnlessEqual(150, actionSet.actionSet[3].index) self.failUnlessEqual("one", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[3].remotePeers[1].cbackCommand) def 
testManagedPeer_165(self): """ Test with actions=[ store, one ], extensions=[ (one, index 150) ], two peers (both managed), managed=True, local=True """ actions = [ "store", "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 150), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 3) self.failUnlessEqual(150, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(150, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", 
actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[2].index) self.failUnlessEqual("store", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[2].function) def testManagedPeer_166(self): """ Test with actions=[ collect, stage, store, purge ], extensions=[], two peers (both managed), managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", ] extensions = ExtensionsConfig([], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 6) self.failUnlessEqual(100, actionSet.actionSet[0].index) self.failUnlessEqual("collect", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[0].function) self.failUnlessEqual(100, actionSet.actionSet[1].index) self.failUnlessEqual("collect", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) 
self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(200, actionSet.actionSet[2].index) self.failUnlessEqual("stage", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[2].function) self.failUnlessEqual(300, actionSet.actionSet[3].index) self.failUnlessEqual("store", actionSet.actionSet[3].name) self.failUnlessEqual(None, actionSet.actionSet[3].preHooks) self.failUnlessEqual(None, actionSet.actionSet[3].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[3].function) self.failUnlessEqual(400, actionSet.actionSet[4].index) self.failUnlessEqual("purge", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHooks) self.failUnlessEqual(None, actionSet.actionSet[4].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[4].function) self.failUnlessEqual(400, actionSet.actionSet[5].index) self.failUnlessEqual("purge", actionSet.actionSet[5].name) self.failIf(actionSet.actionSet[5].remotePeers is None) self.failUnless(len(actionSet.actionSet[5].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[5].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[5].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[5].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[5].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", 
actionSet.actionSet[5].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[5].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[5].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[5].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[5].remotePeers[1].cbackCommand) def testManagedPeer_167(self): """ Test with actions=[ collect, stage, store, purge, one, two ], extensions=[ one through five (indexes 50, 150, 250, 350, 450) ], two peers (both managed), managed=True, local=True """ actions = [ "collect", "stage", "store", "purge", "one", "two", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 9) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) 
self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[2].index) self.failUnlessEqual("collect", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[2].function) self.failUnlessEqual(100, actionSet.actionSet[3].index) self.failUnlessEqual("collect", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[3].remotePeers[1].name) self.failUnlessEqual("ruser2", actionSet.actionSet[3].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[3].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[3].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", 
actionSet.actionSet[3].remotePeers[1].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[4].index) self.failUnlessEqual("two", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHooks) self.failUnlessEqual(None, actionSet.actionSet[4].postHooks) self.failUnlessEqual(isfile, actionSet.actionSet[4].function) self.failUnlessEqual(200, actionSet.actionSet[5].index) self.failUnlessEqual("stage", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHooks) self.failUnlessEqual(None, actionSet.actionSet[5].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[5].function) self.failUnlessEqual(300, actionSet.actionSet[6].index) self.failUnlessEqual("store", actionSet.actionSet[6].name) self.failUnlessEqual(None, actionSet.actionSet[6].preHooks) self.failUnlessEqual(None, actionSet.actionSet[6].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[6].function) self.failUnlessEqual(400, actionSet.actionSet[7].index) self.failUnlessEqual("purge", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHooks) self.failUnlessEqual(None, actionSet.actionSet[7].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[7].function) self.failUnlessEqual(400, actionSet.actionSet[8].index) self.failUnlessEqual("purge", actionSet.actionSet[8].name) self.failIf(actionSet.actionSet[8].remotePeers is None) self.failUnless(len(actionSet.actionSet[8].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[8].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[8].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[8].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[8].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[8].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[8].remotePeers[1].name) self.failUnlessEqual("ruser2", 
actionSet.actionSet[8].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[8].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[8].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[8].remotePeers[1].cbackCommand) def testManagedPeer_168(self): """ Test with actions=[ one ], extensions=[ (one, index 50) ], two peers (both managed), managed=True, local=True """ actions = [ "one", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, "ruser", "rcp", "rsh", "cback", managed=True), RemotePeer("remote2", None, "ruser2", "rcp2", "rsh2", "cback2", managed=True), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 2) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 2) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) self.failUnlessEqual("ruser", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rsh", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual("remote2", actionSet.actionSet[1].remotePeers[1].name) 
self.failUnlessEqual("ruser2", actionSet.actionSet[1].remotePeers[1].remoteUser) self.failUnlessEqual(None, actionSet.actionSet[1].remotePeers[1].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[1].remotePeers[1].rshCommand) self.failUnlessEqual("cback2", actionSet.actionSet[1].remotePeers[1].cbackCommand) def testManagedPeer_169(self): """ Test to make sure that various options all seem to be pulled from the right places with mixed data. """ actions = [ "collect", "stage", "store", "purge", "one", "two", ] extensions = ExtensionsConfig([ ExtendedAction("one", "os.path", "isdir", 50), ExtendedAction("two", "os.path", "isfile", 150), ExtendedAction("three", "os.path", "islink", 250), ExtendedAction("four", "os.path", "isabs", 350), ExtendedAction("five", "os.path", "exists", 450), ], None) options = OptionsConfig() options.managedActions = [ "collect", "purge", "one", ] options.backupUser = "userZ" options.rshCommand = "rshZ" options.cbackCommand = "cbackZ" peers = PeersConfig() peers.localPeers = [ LocalPeer("local", "/collect"), ] peers.remotePeers = [ RemotePeer("remote", None, None, None, None, "cback", managed=True), RemotePeer("remote2", None, "ruser2", None, "rsh2", None, managed=True, managedActions=[ "stage", ]), ] actionSet = _ActionSet(actions, extensions, options, peers, True, True) self.failUnless(len(actionSet.actionSet) == 10) self.failUnlessEqual(50, actionSet.actionSet[0].index) self.failUnlessEqual("one", actionSet.actionSet[0].name) self.failUnlessEqual(None, actionSet.actionSet[0].preHooks) self.failUnlessEqual(None, actionSet.actionSet[0].postHooks) self.failUnlessEqual(isdir, actionSet.actionSet[0].function) self.failUnlessEqual(50, actionSet.actionSet[1].index) self.failUnlessEqual("one", actionSet.actionSet[1].name) self.failIf(actionSet.actionSet[1].remotePeers is None) self.failUnless(len(actionSet.actionSet[1].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[1].remotePeers[0].name) 
self.failUnlessEqual("userZ", actionSet.actionSet[1].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[1].remotePeers[0].localUser) self.failUnlessEqual("rshZ", actionSet.actionSet[1].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[1].remotePeers[0].cbackCommand) self.failUnlessEqual(100, actionSet.actionSet[2].index) self.failUnlessEqual("collect", actionSet.actionSet[2].name) self.failUnlessEqual(None, actionSet.actionSet[2].preHooks) self.failUnlessEqual(None, actionSet.actionSet[2].postHooks) self.failUnlessEqual(executeCollect, actionSet.actionSet[2].function) self.failUnlessEqual(100, actionSet.actionSet[3].index) self.failUnlessEqual("collect", actionSet.actionSet[3].name) self.failIf(actionSet.actionSet[3].remotePeers is None) self.failUnless(len(actionSet.actionSet[3].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[3].remotePeers[0].name) self.failUnlessEqual("userZ", actionSet.actionSet[3].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[3].remotePeers[0].localUser) self.failUnlessEqual("rshZ", actionSet.actionSet[3].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[3].remotePeers[0].cbackCommand) self.failUnlessEqual(150, actionSet.actionSet[4].index) self.failUnlessEqual("two", actionSet.actionSet[4].name) self.failUnlessEqual(None, actionSet.actionSet[4].preHooks) self.failUnlessEqual(None, actionSet.actionSet[4].postHooks) self.failUnlessEqual(isfile, actionSet.actionSet[4].function) self.failUnlessEqual(200, actionSet.actionSet[5].index) self.failUnlessEqual("stage", actionSet.actionSet[5].name) self.failUnlessEqual(None, actionSet.actionSet[5].preHooks) self.failUnlessEqual(None, actionSet.actionSet[5].postHooks) self.failUnlessEqual(executeStage, actionSet.actionSet[5].function) self.failUnlessEqual(200, actionSet.actionSet[6].index) self.failUnlessEqual("stage", actionSet.actionSet[6].name) 
self.failIf(actionSet.actionSet[6].remotePeers is None) self.failUnless(len(actionSet.actionSet[6].remotePeers) == 1) self.failUnlessEqual("remote2", actionSet.actionSet[6].remotePeers[0].name) self.failUnlessEqual("ruser2", actionSet.actionSet[6].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[6].remotePeers[0].localUser) self.failUnlessEqual("rsh2", actionSet.actionSet[6].remotePeers[0].rshCommand) self.failUnlessEqual("cbackZ", actionSet.actionSet[6].remotePeers[0].cbackCommand) self.failUnlessEqual(300, actionSet.actionSet[7].index) self.failUnlessEqual("store", actionSet.actionSet[7].name) self.failUnlessEqual(None, actionSet.actionSet[7].preHooks) self.failUnlessEqual(None, actionSet.actionSet[7].postHooks) self.failUnlessEqual(executeStore, actionSet.actionSet[7].function) self.failUnlessEqual(400, actionSet.actionSet[8].index) self.failUnlessEqual("purge", actionSet.actionSet[8].name) self.failUnlessEqual(None, actionSet.actionSet[8].preHooks) self.failUnlessEqual(None, actionSet.actionSet[8].postHooks) self.failUnlessEqual(executePurge, actionSet.actionSet[8].function) self.failUnlessEqual(400, actionSet.actionSet[9].index) self.failUnlessEqual("purge", actionSet.actionSet[9].name) self.failIf(actionSet.actionSet[9].remotePeers is None) self.failUnless(len(actionSet.actionSet[9].remotePeers) == 1) self.failUnlessEqual("remote", actionSet.actionSet[9].remotePeers[0].name) self.failUnlessEqual("userZ", actionSet.actionSet[9].remotePeers[0].remoteUser) self.failUnlessEqual("userZ", actionSet.actionSet[9].remotePeers[0].localUser) self.failUnlessEqual("rshZ", actionSet.actionSet[9].remotePeers[0].rshCommand) self.failUnlessEqual("cback", actionSet.actionSet[9].remotePeers[0].cbackCommand) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test 
cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestOptions, 'test'), unittest.makeSuite(TestActionSet, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/spantests.py #!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests span tool functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/tools/span.py. 
Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in tools/span.py. Where possible, we test functions that print output by passing a custom file descriptor. Sometimes, we only ensure that a function or method runs without failure, and we don't validate what its result is or what it prints out. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a SPANTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from CedarBackup2.testutil import captureOutput from CedarBackup2.tools.span import _usage, _version from CedarBackup2.tools.span import Options ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the public functions.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ######################## # Test simple functions ######################## def testSimpleFuncs_001(self): """ Test that the _usage() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_usage) def testSimpleFuncs_002(self): """ Test that the _version() function runs without errors. We don't care what the output is, and we don't check. """ captureOutput(_version) ######################## # TestSpanOptions class ######################## class TestSpanOptions(unittest.TestCase): """Tests for the Options class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = Options() obj.__repr__() obj.__str__() ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), unittest.makeSuite(TestSpanOptions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/utiltests.py0000664000175000017500000042650012642026257022020 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests utility functionality. 
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # pylint: disable=C0322,C0324 ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in util.py. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a UTILTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. 
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import sys import unittest import tempfile import time import logging import os from os.path import isdir from CedarBackup2.testutil import findResources, removedir, extractTar, buildPath, captureOutput from CedarBackup2.testutil import platformHasEcho, platformWindows, platformCygwin, platformSupportsLinks from CedarBackup2.util import UnorderedList, AbsolutePathList, ObjectTypeList from CedarBackup2.util import RestrictedContentList, RegexMatchList, RegexList from CedarBackup2.util import DirectedGraph, PathResolverSingleton, Diagnostics, parseCommaSeparatedString from CedarBackup2.util import sortDict, resolveCommand, executeCommand, getFunctionReference, encodePath from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_SECTORS, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES from CedarBackup2.util import displayBytes, deriveDayOfWeek, isStartOfWeek, dereferenceLink from CedarBackup2.util import buildNormalizedPath, splitCommandLine, nullDevice ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data" ] RESOURCES = [ "lotsoflines.py", "tree10.tar.gz", ] ####################################################################### # Test Case Classes ####################################################################### ########################## # TestUnorderedList class ########################## class TestUnorderedList(unittest.TestCase): """Tests for the UnorderedList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ################################## # Test unordered list comparisons ################################## def 
testComparison_001(self): """ Test two empty lists. """ list1 = UnorderedList() list2 = UnorderedList() self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_002(self): """ Test empty vs. non-empty list. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failIfEqual(list1, list2) self.failIfEqual(list2, list1) def testComparison_003(self): """ Test two non-empty lists, completely different contents. """ list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append('a') list2.append('b') list2.append('c') list2.append('d') self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual(['a','b','c','d', ], list2) self.failUnlessEqual(['b','c','d','a', ], list2) self.failUnlessEqual(['c','d','a','b', ], list2) self.failUnlessEqual(['d','a','b','c', ], list2) self.failUnlessEqual(list2, ['d','c','b','a', ]) self.failUnlessEqual(list2, ['c','b','a','d', ]) self.failUnlessEqual(list2, ['b','a','d','c', ]) self.failUnlessEqual(list2, ['a','d','c','b', ]) self.failIfEqual(list1, list2) self.failIfEqual(list2, list1) def testComparison_004(self): """ Test two non-empty lists, different but overlapping contents. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(3) list2.append(4) list2.append('a') list2.append('b') self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual([3,4,'a','b', ], list2) self.failUnlessEqual([4,'a','b',3, ], list2) self.failUnlessEqual(['a','b',3,4, ], list2) self.failUnlessEqual(['b',3,4,'a', ], list2) self.failUnlessEqual(list2, ['b','a',4,3, ]) self.failUnlessEqual(list2, ['a',4,3,'b', ]) self.failUnlessEqual(list2, [4,3,'b','a', ]) self.failUnlessEqual(list2, [3,'b','a',4, ]) self.failIfEqual(list1, list2) self.failIfEqual(list2, list1) def testComparison_005(self): """ Test two non-empty lists, exactly the same contents, same order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(1) list2.append(2) list2.append(3) list2.append(4) self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual([1,2,3,4, ], list2) self.failUnlessEqual([2,3,4,1, ], list2) self.failUnlessEqual([3,4,1,2, ], list2) self.failUnlessEqual([4,1,2,3, ], list2) self.failUnlessEqual(list2, [4,3,2,1, ]) self.failUnlessEqual(list2, [3,2,1,4, ]) self.failUnlessEqual(list2, [2,1,4,3, ]) self.failUnlessEqual(list2, [1,4,3,2, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_006(self): """ Test two non-empty lists, exactly the same contents, different order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(3) list1.append(4) list2.append(3) list2.append(1) list2.append(2) list2.append(4) self.failUnlessEqual([1,2,3,4, ], list1) self.failUnlessEqual([2,3,4,1, ], list1) self.failUnlessEqual([3,4,1,2, ], list1) self.failUnlessEqual([4,1,2,3, ], list1) self.failUnlessEqual(list1, [4,3,2,1, ]) self.failUnlessEqual(list1, [3,2,1,4, ]) self.failUnlessEqual(list1, [2,1,4,3, ]) self.failUnlessEqual(list1, [1,4,3,2, ]) self.failUnlessEqual([1,2,3,4, ], list2) self.failUnlessEqual([2,3,4,1, ], list2) self.failUnlessEqual([3,4,1,2, ], list2) self.failUnlessEqual([4,1,2,3, ], list2) self.failUnlessEqual(list2, [4,3,2,1, ]) self.failUnlessEqual(list2, [3,2,1,4, ]) self.failUnlessEqual(list2, [2,1,4,3, ]) self.failUnlessEqual(list2, [1,4,3,2, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_007(self): """ Test two non-empty lists, exactly the same contents, some duplicates, same order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(2) list1.append(3) list1.append(4) list1.append(4) list2.append(1) list2.append(2) list2.append(2) list2.append(3) list2.append(4) list2.append(4) self.failUnlessEqual([1,2,2,3,4,4, ], list1) self.failUnlessEqual([2,2,3,4,1,4, ], list1) self.failUnlessEqual([2,3,4,1,4,2, ], list1) self.failUnlessEqual([2,4,1,4,2,3, ], list1) self.failUnlessEqual(list1, [1,2,2,3,4,4, ]) self.failUnlessEqual(list1, [2,2,3,4,1,4, ]) self.failUnlessEqual(list1, [2,3,4,1,4,2, ]) self.failUnlessEqual(list1, [2,4,1,4,2,3, ]) self.failUnlessEqual([1,2,2,3,4,4, ], list2) self.failUnlessEqual([2,2,3,4,1,4, ], list2) self.failUnlessEqual([2,3,4,1,4,2, ], list2) self.failUnlessEqual([2,4,1,4,2,3, ], list2) self.failUnlessEqual(list2, [1,2,2,3,4,4, ]) self.failUnlessEqual(list2, [2,2,3,4,1,4, ]) self.failUnlessEqual(list2, [2,3,4,1,4,2, ]) self.failUnlessEqual(list2, [2,4,1,4,2,3, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) def testComparison_008(self): """ Test two non-empty lists, exactly the same contents, some duplicates, different order. 
""" list1 = UnorderedList() list2 = UnorderedList() list1.append(1) list1.append(2) list1.append(2) list1.append(3) list1.append(4) list1.append(4) list2.append(3) list2.append(1) list2.append(2) list2.append(2) list2.append(4) list2.append(4) self.failUnlessEqual([1,2,2,3,4,4, ], list1) self.failUnlessEqual([2,2,3,4,1,4, ], list1) self.failUnlessEqual([2,3,4,1,4,2, ], list1) self.failUnlessEqual([2,4,1,4,2,3, ], list1) self.failUnlessEqual(list1, [1,2,2,3,4,4, ]) self.failUnlessEqual(list1, [2,2,3,4,1,4, ]) self.failUnlessEqual(list1, [2,3,4,1,4,2, ]) self.failUnlessEqual(list1, [2,4,1,4,2,3, ]) self.failUnlessEqual([1,2,2,3,4,4, ], list2) self.failUnlessEqual([2,2,3,4,1,4, ], list2) self.failUnlessEqual([2,3,4,1,4,2, ], list2) self.failUnlessEqual([2,4,1,4,2,3, ], list2) self.failUnlessEqual(list2, [1,2,2,3,4,4, ]) self.failUnlessEqual(list2, [2,2,3,4,1,4, ]) self.failUnlessEqual(list2, [2,3,4,1,4,2, ]) self.failUnlessEqual(list2, [2,4,1,4,2,3, ]) self.failUnlessEqual(list1, list2) self.failUnlessEqual(list2, list1) ############################# # TestAbsolutePathList class ############################# class TestAbsolutePathList(unittest.TestCase): """Tests for the AbsolutePathList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid absolute path. """ list1 = AbsolutePathList() list1.append("/path/to/something/absolute") self.failUnlessEqual(list1, [ "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") list1.append("/path/to/something/else") self.failUnlessEqual(list1, [ "/path/to/something/absolute", "/path/to/something/else", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") self.failUnlessEqual(list1[1], "/path/to/something/else") def testListOperations_002(self): """ Test append() for an invalid, non-absolute path. 
""" list1 = AbsolutePathList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "path/to/something/relative") self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid absolute path. """ list1 = AbsolutePathList() list1.insert(0, "/path/to/something/absolute") self.failUnlessEqual(list1, [ "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") list1.insert(0, "/path/to/something/else") self.failUnlessEqual(list1, [ "/path/to/something/else", "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/else") self.failUnlessEqual(list1[1], "/path/to/something/absolute") def testListOperations_004(self): """ Test insert() for an invalid, non-absolute path. """ list1 = AbsolutePathList() self.failUnlessRaises(ValueError, list1.insert, 0, "path/to/something/relative") def testListOperations_005(self): """ Test extend() for a valid absolute path. """ list1 = AbsolutePathList() list1.extend(["/path/to/something/absolute", ]) self.failUnlessEqual(list1, [ "/path/to/something/absolute", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") list1.extend(["/path/to/something/else", ]) self.failUnlessEqual(list1, [ "/path/to/something/absolute", "/path/to/something/else", ]) self.failUnlessEqual(list1[0], "/path/to/something/absolute") self.failUnlessEqual(list1[1], "/path/to/something/else") def testListOperations_006(self): """ Test extend() for an invalid, non-absolute path. 
""" list1 = AbsolutePathList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "path/to/something/relative", ]) self.failUnlessEqual(list1, []) ########################### # TestObjectTypeList class ########################### class TestObjectTypeList(unittest.TestCase): """Tests for the ObjectTypeList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid object type. """ list1 = ObjectTypeList(str, "str") list1.append("string") self.failUnlessEqual(list1, [ "string", ]) self.failUnlessEqual(list1[0], "string") list1.append("string2") self.failUnlessEqual(list1, [ "string", "string2", ]) self.failUnlessEqual(list1[0], "string") self.failUnlessEqual(list1[1], "string2") def testListOperations_002(self): """ Test append() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, 1) self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid object type. """ list1 = ObjectTypeList(str, "str") list1.insert(0, "string") self.failUnlessEqual(list1, [ "string", ]) self.failUnlessEqual(list1[0], "string") list1.insert(0, "string2") self.failUnlessEqual(list1, [ "string2", "string", ]) self.failUnlessEqual(list1[0], "string2") self.failUnlessEqual(list1[1], "string") def testListOperations_004(self): """ Test insert() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, AbsolutePathList()) self.failUnlessEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid object type. 
""" list1 = ObjectTypeList(str, "str") list1.extend(["string", ]) self.failUnlessEqual(list1, [ "string", ]) self.failUnlessEqual(list1[0], "string") list1.extend(["string2", ]) self.failUnlessEqual(list1, [ "string", "string2", ]) self.failUnlessEqual(list1[0], "string") self.failUnlessEqual(list1[1], "string2") def testListOperations_006(self): """ Test extend() for an invalid object type. """ list1 = ObjectTypeList(str, "str") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ 12.0, ]) self.failUnlessEqual(list1, []) ################################## # TestRestrictedContentList class ################################## class TestRestrictedContentList(unittest.TestCase): """Tests for the RestrictedContentList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.append("a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.append("b") self.failUnlessEqual(list1, [ "a", "b", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") list1.append("c") self.failUnlessEqual(list1, [ "a", "b", "c", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") self.failUnlessEqual(list1[2], "c") def testListOperations_002(self): """ Test append() for an invalid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "d") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, 1) self.failUnlessEqual(list1, []) self.failUnlessRaises(AttributeError, list1.append, UnorderedList()) self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid value. 
""" list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.insert(0, "a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.insert(0, "b") self.failUnlessEqual(list1, [ "b", "a", ]) self.failUnlessEqual(list1[0], "b") self.failUnlessEqual(list1[1], "a") list1.insert(0, "c") self.failUnlessEqual(list1, [ "c", "b", "a", ]) self.failUnlessEqual(list1[0], "c") self.failUnlessEqual(list1[1], "b") self.failUnlessEqual(list1[2], "a") def testListOperations_004(self): """ Test insert() for an invalid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "d") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, 1) self.failUnlessEqual(list1, []) self.failUnlessRaises(AttributeError, list1.insert, 0, UnorderedList()) self.failUnlessEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid value. """ list1 = RestrictedContentList([ "a", "b", "c", ], "values") list1.extend(["a", ]) self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.extend(["b", ]) self.failUnlessEqual(list1, [ "a", "b", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") list1.extend(["c", ]) self.failUnlessEqual(list1, [ "a", "b", "c", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "b") self.failUnlessEqual(list1[2], "c") def testListOperations_006(self): """ Test extend() for an invalid value. 
""" list1 = RestrictedContentList([ "a", "b", "c", ], "values") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, ["d", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [1, ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(AttributeError, list1.extend, [ UnorderedList(), ]) self.failUnlessEqual(list1, []) ########################### # TestRegexMatchList class ########################### class TestRegexMatchList(unittest.TestCase): """Tests for the RegexMatchList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.append("a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.append("1") self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.append("abcd12345") self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") list1.append("") self.failUnlessEqual(list1, [ "a", "1", "abcd12345", "", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") self.failUnlessEqual(list1[3], "") def testListOperations_002(self): """ Test append() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.append, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, None) self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.insert(0, "a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.insert(0, "1") self.failUnlessEqual(list1, [ "1", "a", ]) self.failUnlessEqual(list1[0], "1") self.failUnlessEqual(list1[1], "a") list1.insert(0, "abcd12345") self.failUnlessEqual(list1, [ "abcd12345", "1", "a", ]) self.failUnlessEqual(list1[0], "abcd12345") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "a") list1.insert(0, "") self.failUnlessEqual(list1, [ "abcd12345", "1", "a", "", ]) self.failUnlessEqual(list1[0], "") self.failUnlessEqual(list1[1], "abcd12345") self.failUnlessEqual(list1[2], "1") self.failUnlessEqual(list1[3], "a") def testListOperations_004(self): """ Test insert() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.insert, 0, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, None) self.failUnlessEqual(list1, []) def testListOperations_005(self): """ Test extend() for a valid value, emptyAllowed=True. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) list1.extend(["a", ]) self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.extend(["1", ]) self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.extend(["abcd12345", ]) self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") list1.extend(["", ]) self.failUnlessEqual(list1, [ "a", "1", "abcd12345", "", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") self.failUnlessEqual(list1[3], "") def testListOperations_006(self): """ Test extend() for an invalid value, emptyAllowed=True. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=True) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "A", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "ABC", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.extend, [ 12, ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "KEN_12", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ None, ]) self.failUnlessEqual(list1, []) def testListOperations_007(self): """ Test append() for a valid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.append("a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.append("1") self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.append("abcd12345") self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") def testListOperations_008(self): """ Test append() for an invalid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.append, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, None) self.failUnlessEqual(list1, []) def testListOperations_009(self): """ Test insert() for a valid value, emptyAllowed=False. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.insert(0, "a") self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.insert(0, "1") self.failUnlessEqual(list1, [ "1", "a", ]) self.failUnlessEqual(list1[0], "1") self.failUnlessEqual(list1[1], "a") list1.insert(0, "abcd12345") self.failUnlessEqual(list1, [ "abcd12345", "1", "a", ]) self.failUnlessEqual(list1[0], "abcd12345") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "a") def testListOperations_010(self): """ Test insert() for an invalid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "A") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "ABC") self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.insert, 0, 12) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "KEN_12") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, "") self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.insert, 0, None) self.failUnlessEqual(list1, []) def testListOperations_011(self): """ Test extend() for a valid value, emptyAllowed=False. """ list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) list1.extend(["a", ]) self.failUnlessEqual(list1, [ "a", ]) self.failUnlessEqual(list1[0], "a") list1.extend(["1", ]) self.failUnlessEqual(list1, [ "a", "1", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") list1.extend(["abcd12345", ]) self.failUnlessEqual(list1, [ "a", "1", "abcd12345", ]) self.failUnlessEqual(list1[0], "a") self.failUnlessEqual(list1[1], "1") self.failUnlessEqual(list1[2], "abcd12345") def testListOperations_012(self): """ Test extend() for an invalid value, emptyAllowed=False. 
""" list1 = RegexMatchList(r"^[a-z0-9]*$", emptyAllowed=False) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "A", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "ABC", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(TypeError, list1.extend, [ 12, ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "KEN_12", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "", ]) self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ None, ]) self.failUnlessEqual(list1, []) ###################### # TestRegexList class ###################### class TestRegexList(unittest.TestCase): """Tests for the RegexList class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ####################### # Test list operations ####################### def testListOperations_001(self): """ Test append() for a valid regular expression. """ list1 = RegexList() list1.append(r".*\.jpg") self.failUnlessEqual(list1, [ r".*\.jpg", ]) self.failUnlessEqual(list1[0], r".*\.jpg") list1.append("[a-zA-Z0-9]*") self.failUnlessEqual(list1, [ r".*\.jpg", "[a-zA-Z0-9]*", ]) self.failUnlessEqual(list1[0], r".*\.jpg") self.failUnlessEqual(list1[1], "[a-zA-Z0-9]*") def testListOperations_002(self): """ Test append() for an invalid regular expression. """ list1 = RegexList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.append, "*.jpg") self.failUnlessEqual(list1, []) def testListOperations_003(self): """ Test insert() for a valid regular expression. 
""" list1 = RegexList() list1.insert(0, r".*\.jpg") self.failUnlessEqual(list1, [ r".*\.jpg", ]) self.failUnlessEqual(list1[0], r".*\.jpg") list1.insert(0, "[a-zA-Z0-9]*") self.failUnlessEqual(list1, [ "[a-zA-Z0-9]*", r".*\.jpg", ]) self.failUnlessEqual(list1[0], "[a-zA-Z0-9]*") self.failUnlessEqual(list1[1], r".*\.jpg") def testListOperations_004(self): """ Test insert() for an invalid regular expression. """ list1 = RegexList() self.failUnlessRaises(ValueError, list1.insert, 0, "*.jpg") def testListOperations_005(self): """ Test extend() for a valid regular expression. """ list1 = RegexList() list1.extend([r".*\.jpg", ]) self.failUnlessEqual(list1, [ r".*\.jpg", ]) self.failUnlessEqual(list1[0], r".*\.jpg") list1.extend(["[a-zA-Z0-9]*", ]) self.failUnlessEqual(list1, [ r".*\.jpg", "[a-zA-Z0-9]*", ]) self.failUnlessEqual(list1[0], r".*\.jpg") self.failUnlessEqual(list1[1], "[a-zA-Z0-9]*") def testListOperations_006(self): """ Test extend() for an invalid regular expression. """ list1 = RegexList() self.failUnlessEqual(list1, []) self.failUnlessRaises(ValueError, list1.extend, [ "*.jpg", ]) self.failUnlessEqual(list1, []) ########################## # TestDirectedGraph class ########################## class TestDirectedGraph(unittest.TestCase): """Tests for the DirectedGraph class.""" ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = DirectedGraph("test") obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with a valid name filled in. """ graph = DirectedGraph("Ken") self.failUnlessEqual("Ken", graph.name) def testConstructor_002(self): """ Test constructor with a C{None} name filled in. 
""" self.failUnlessRaises(ValueError, DirectedGraph, None) ########################## # Test depth first search ########################## def testTopologicalSort_001(self): """ Empty graph. """ graph = DirectedGraph("test") path = graph.topologicalSort() self.failUnlessEqual([], path) def testTopologicalSort_002(self): """ Graph with 1 vertex, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") path = graph.topologicalSort() self.failUnlessEqual([ "1", ], path) def testTopologicalSort_003(self): """ Graph with 2 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", ], path) def testTopologicalSort_004(self): """ Graph with 3 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_005(self): """ Graph with 4 vertices, no edges. """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createVertex("4") path = graph.topologicalSort() self.failUnlessEqual([ "4", "2", "1", "3", ], path) def testTopologicalSort_006(self): """ Graph with 4 vertices, no edges. 
""" graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createVertex("4") graph.createVertex("5") path = graph.topologicalSort() self.failUnlessEqual([ "5", "4", "2", "1", "3", ], path) def testTopologicalSort_007(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_008(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_009(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_010(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_011(self): """ Graph with 3 vertices, in a chain (1->2->3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_012(self): """ 
Graph with 3 vertices, in a chain (1->2->3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("1", "2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_013(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_014(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_015(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_016(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_017(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", 
"2", "1", ], path) def testTopologicalSort_018(self): """ Graph with 3 vertices, in a chain (3->2->1), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_019(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_020(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_021(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_022(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_023(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def 
testTopologicalSort_024(self): """ Graph with 3 vertices, chain and orphan (1->2,3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_025(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "3") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_026(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("1", "3") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_027(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("1", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_028(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("1", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_029(self): """ Graph with 3 vertices, chain and orphan (1->3,2), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "3") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_030(self): """ Graph with 3 vertices, chain and 
orphan (1->3,2), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("1", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_031(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_032(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_033(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_034(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_035(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_036(self): """ Graph with 3 vertices, chain and orphan (2->3,1), create order (3,2,1) """ graph = DirectedGraph("test") 
graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("2", "3") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", ], path) def testTopologicalSort_037(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_038(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_039(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_040(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_041(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_042(self): """ Graph with 3 vertices, chain and orphan (2->1,3), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") 
graph.createEdge("2", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "3", ], path) def testTopologicalSort_043(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_044(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_045(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_046(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_047(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "1") path = graph.topologicalSort() self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_048(self): """ Graph with 3 vertices, chain and orphan (3->1,2), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "1") path = graph.topologicalSort() 
self.failUnlessEqual([ "2", "3", "1", ], path) def testTopologicalSort_049(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (1,2,3) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_050(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (1,3,2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("3") graph.createVertex("2") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "2", "1", ], path) def testTopologicalSort_051(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (2,3,1) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("3") graph.createVertex("1") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_052(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (2,1,3) """ graph = DirectedGraph("test") graph.createVertex("2") graph.createVertex("1") graph.createVertex("3") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "3", "1", "2", ], path) def testTopologicalSort_053(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (3,1,2) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("1") graph.createVertex("2") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2", ], path) def testTopologicalSort_054(self): """ Graph with 3 vertices, chain and orphan (3->2,1), create order (3,2,1) """ graph = DirectedGraph("test") graph.createVertex("3") graph.createVertex("2") graph.createVertex("1") graph.createEdge("3", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "3", "2" ], path) def 
testTopologicalSort_055(self): """ Graph with 1 vertex, with an edge to itself (1->1). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createEdge("1", "1") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_056(self): """ Graph with 2 vertices, each with an edge to itself (1->1, 2->2). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createEdge("1", "1") graph.createEdge("2", "2") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_057(self): """ Graph with 3 vertices, each with an edge to itself (1->1, 2->2, 3->3). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "1") graph.createEdge("2", "2") graph.createEdge("3", "3") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_058(self): """ Graph with 3 vertices, in a loop (1->2->3->1). """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createEdge("1", "2") graph.createEdge("2", "3") graph.createEdge("3", "1") self.failUnlessRaises(ValueError, graph.topologicalSort) def testTopologicalSort_059(self): """ Graph with 5 vertices, (2, 1->3, 1->4, 1->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "5", "4", "3", ], path) def testTopologicalSort_060(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") path = graph.topologicalSort() 
self.failUnlessEqual([ "2", "1", "5", "4", "3", ], path) def testTopologicalSort_061(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "5", "3", "4", ], path) def testTopologicalSort_062(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") path = graph.topologicalSort() self.failUnlessEqual([ "2", "1", "5", "3", "4", ], path) def testTopologicalSort_063(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 1->2) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("1", "2") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "5", "3", "4", ], path) def testTopologicalSort_064(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 1->2, 3->5) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") 
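The ordering and cycle behavior asserted by the topologicalSort tests can be sketched with a standard depth-first topological sort. This is an illustrative reimplementation, not the DirectedGraph code; exact tie-breaking between unconstrained vertices differs, but the contract is the same: every edge (a, b) places a before b, and a cycle (including a self-loop) raises ValueError.

```python
def topological_sort(vertices, edges):
    """Return vertices ordered so every edge (a, b) puts a before b.

    Illustrative depth-first sketch of the topologicalSort contract;
    raises ValueError if the graph contains a cycle.
    """
    adjacent = {v: [] for v in vertices}
    for a, b in edges:
        adjacent[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on the stack / finished
    state = {v: WHITE for v in vertices}
    order = []

    def visit(v):
        if state[v] == GRAY:          # back edge: we looped onto the stack
            raise ValueError("graph contains a cycle")
        if state[v] == WHITE:
            state[v] = GRAY
            for w in adjacent[v]:
                visit(w)
            state[v] = BLACK
            order.append(v)           # post-order: successors already emitted

    for v in vertices:
        visit(v)
    order.reverse()                   # reverse post-order is a topological order
    return order
```

Note that GRAY (vertex still on the recursion stack) is what detects cycles, which is why the self-loop tests (1->1) fail the same way as the longer loops.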
graph.createEdge("1", "2") graph.createEdge("3", "5") path = graph.topologicalSort() self.failUnlessEqual([ "1", "2", "3", "5", "4", ], path) def testTopologicalSort_065(self): """ Graph with 5 vertices, (1->3, 1->4, 1->5, 2->5, 3->4, 5->4, 5->1) """ graph = DirectedGraph("test") graph.createVertex("1") graph.createVertex("2") graph.createVertex("3") graph.createVertex("4") graph.createVertex("5") graph.createEdge("1", "3") graph.createEdge("1", "4") graph.createEdge("1", "5") graph.createEdge("2", "5") graph.createEdge("3", "4") graph.createEdge("5", "4") graph.createEdge("5", "1") self.failUnlessRaises(ValueError, graph.topologicalSort) ################################## # TestPathResolverSingleton class ################################## class TestPathResolverSingleton(unittest.TestCase): """Tests for the PathResolverSingleton class.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ########################## # Test singleton behavior ########################## def testBehavior_001(self): """ Check behavior of constructor around filling and clearing instance variable. """ PathResolverSingleton._instance = None instance = PathResolverSingleton() self.failIfEqual(None, PathResolverSingleton._instance) self.failUnless(instance is PathResolverSingleton._instance) self.failUnlessRaises(RuntimeError, PathResolverSingleton) PathResolverSingleton._instance = None instance = PathResolverSingleton() self.failIfEqual(None, PathResolverSingleton._instance) self.failUnless(instance is PathResolverSingleton._instance) def testBehavior_002(self): """ Check behavior of getInstance() around filling and clearing instance variable. 
""" PathResolverSingleton._instance = None instance1 = PathResolverSingleton.getInstance() instance2 = PathResolverSingleton.getInstance() instance3 = PathResolverSingleton.getInstance() self.failIfEqual(None, PathResolverSingleton._instance) self.failUnless(instance1 is PathResolverSingleton._instance) self.failUnless(instance1 is instance2) self.failUnless(instance1 is instance3) PathResolverSingleton._instance = None PathResolverSingleton() instance4 = PathResolverSingleton.getInstance() instance5 = PathResolverSingleton.getInstance() instance6 = PathResolverSingleton.getInstance() self.failUnless(instance1 is not instance4) self.failUnless(instance4 is PathResolverSingleton._instance) self.failUnless(instance4 is instance5) self.failUnless(instance4 is instance6) PathResolverSingleton._instance = None instance7 = PathResolverSingleton.getInstance() instance8 = PathResolverSingleton.getInstance() instance9 = PathResolverSingleton.getInstance() self.failUnless(instance1 is not instance7) self.failUnless(instance4 is not instance7) self.failUnless(instance7 is PathResolverSingleton._instance) self.failUnless(instance7 is instance8) self.failUnless(instance7 is instance9) ############################ # Test lookup functionality ############################ def testLookup_001(self): """ Test that lookup() always returns default when singleton is empty. """ PathResolverSingleton._instance = None instance = PathResolverSingleton.getInstance() result = instance.lookup("whatever") self.failUnlessEqual(result, None) result = instance.lookup("whatever", None) self.failUnlessEqual(result, None) result = instance.lookup("other") self.failUnlessEqual(result, None) result = instance.lookup("other", "default") self.failUnlessEqual(result, "default") def testLookup_002(self): """ Test that lookup() returns proper values when singleton is not empty. 
""" mappings = { "one" : "/path/to/one", "two" : "/path/to/two" } PathResolverSingleton._instance = None singleton = PathResolverSingleton() singleton.fill(mappings) instance = PathResolverSingleton.getInstance() result = instance.lookup("whatever") self.failUnlessEqual(result, None) result = instance.lookup("whatever", None) self.failUnlessEqual(result, None) result = instance.lookup("other") self.failUnlessEqual(result, None) result = instance.lookup("other", "default") self.failUnlessEqual(result, "default") result = instance.lookup("one") self.failUnlessEqual(result, "/path/to/one") result = instance.lookup("one", None) self.failUnlessEqual(result, "/path/to/one") result = instance.lookup("two", None) self.failUnlessEqual(result, "/path/to/two") result = instance.lookup("two", "default") self.failUnlessEqual(result, "/path/to/two") ######################## # TestDiagnostics class ######################## class TestDiagnostics(unittest.TestCase): """Tests for the Diagnostics class.""" def testMethods_001(self): """ Test the version attribute. """ diagnostics = Diagnostics() self.failIf(diagnostics.version is None) self.failIfEqual("", diagnostics.version) def testMethods_002(self): """ Test the interpreter attribute. """ diagnostics = Diagnostics() self.failIf(diagnostics.interpreter is None) self.failIfEqual("", diagnostics.interpreter) def testMethods_003(self): """ Test the platform attribute. """ diagnostics = Diagnostics() self.failIf(diagnostics.platform is None) self.failIfEqual("", diagnostics.platform) def testMethods_004(self): """ Test the encoding attribute. """ diagnostics = Diagnostics() self.failIf(diagnostics.encoding is None) self.failIfEqual("", diagnostics.encoding) def testMethods_005(self): """ Test the locale attribute. """ # pylint: disable=W0104 diagnostics = Diagnostics() diagnostics.locale # might not be set, so just make sure method doesn't fail def testMethods_006(self): """ Test the getValues() method. 
""" diagnostics = Diagnostics() values = diagnostics.getValues() self.failUnlessEqual(diagnostics.version, values['version']) self.failUnlessEqual(diagnostics.interpreter, values['interpreter']) self.failUnlessEqual(diagnostics.platform, values['platform']) self.failUnlessEqual(diagnostics.encoding, values['encoding']) self.failUnlessEqual(diagnostics.locale, values['locale']) self.failUnlessEqual(diagnostics.timestamp, values['timestamp']) def testMethods_007(self): """ Test the _buildDiagnosticLines() method. """ values = Diagnostics().getValues() lines = Diagnostics()._buildDiagnosticLines() self.failUnlessEqual(len(values), len(lines)) def testMethods_008(self): """ Test the printDiagnostics() method. """ captureOutput(Diagnostics().printDiagnostics) def testMethods_009(self): """ Test the logDiagnostics() method. """ logger = logging.getLogger("CedarBackup2.test") Diagnostics().logDiagnostics(logger.info) def testMethods_010(self): """ Test the timestamp attribute. """ diagnostics = Diagnostics() self.failIf(diagnostics.timestamp is None) self.failIfEqual("", diagnostics.timestamp) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): removedir(self.tmpdir) ################## # Utility methods ################## def getTempfile(self): """Gets a path to a temporary file on disk.""" (fd, name) = tempfile.mkstemp(dir=self.tmpdir) try: os.close(fd) except: pass return name def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return 
buildPath(components) ################## # Test sortDict() ################## def testSortDict_001(self): """ Test for empty dictionary. """ d = {} result = sortDict(d) self.failUnlessEqual([], result) def testSortDict_002(self): """ Test for dictionary with one item. """ d = {'a':1} result = sortDict(d) self.failUnlessEqual(['a', ], result) def testSortDict_003(self): """ Test for dictionary with two items, same value. """ d = {'a':1, 'b':1, } result = sortDict(d) self.failUnlessEqual(['a', 'b', ], result) def testSortDict_004(self): """ Test for dictionary with two items, different values. """ d = {'a':1, 'b':2, } result = sortDict(d) self.failUnlessEqual(['a', 'b', ], result) def testSortDict_005(self): """ Test for dictionary with many items, same and different values. """ d = {'rebuild': 0, 'purge': 400, 'collect': 100, 'validate': 0, 'store': 300, 'stage': 200} result = sortDict(d) self.failUnlessEqual(['rebuild', 'validate', 'collect', 'stage', 'store', 'purge', ], result) ############################## # Test getFunctionReference() ############################## def testGetFunctionReference_001(self): """ Check that the search works within "standard" Python namespace. """ module = "os.path" function = "isdir" reference = getFunctionReference(module, function) self.failUnless(isdir is reference) def testGetFunctionReference_002(self): """ Check that the search works for things within CedarBackup2. """ module = "CedarBackup2.util" function = "executeCommand" reference = getFunctionReference(module, function) self.failUnless(executeCommand is reference) ######################## # Test resolveCommand() ######################## def testResolveCommand_001(self): """ Test that the command is echoed back unchanged when singleton is empty. 
""" PathResolverSingleton._instance = None command = [ "BAD", ] expected = command[:] result = resolveCommand(command) self.failUnlessEqual(expected, result) command = [ "GOOD", ] expected = command[:] result = resolveCommand(command) self.failUnlessEqual(expected, result) command = [ "WHATEVER", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ] expected = command[:] result = resolveCommand(command) self.failUnlessEqual(expected, result) def testResolveCommand_002(self): """ Test that the command is echoed back unchanged when mapping is not found. """ PathResolverSingleton._instance = None mappings = { "one" : "/path/to/one", "two" : "/path/to/two" } singleton = PathResolverSingleton() singleton.fill(mappings) command = [ "BAD", ] expected = command[:] result = resolveCommand(command) self.failUnlessEqual(expected, result) command = [ "GOOD", ] expected = command[:] result = resolveCommand(command) self.failUnlessEqual(expected, result) command = [ "WHATEVER", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ] expected = command[:] result = resolveCommand(command) self.failUnlessEqual(expected, result) def testResolveCommand_003(self): """ Test that the command is echoed back changed appropriately when mapping is found. 
""" PathResolverSingleton._instance = None mappings = { "one" : "/path/to/one", "two" : "/path/to/two" } singleton = PathResolverSingleton() singleton.fill(mappings) command = [ "one", ] expected = [ "/path/to/one", ] result = resolveCommand(command) self.failUnlessEqual(expected, result) command = [ "two", ] expected = [ "/path/to/two", ] result = resolveCommand(command) self.failUnlessEqual(expected, result) command = [ "two", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ] expected = ["/path/to/two", "--verbose", "--debug", 'tvh:asa892831', "blech", "<", ] result = resolveCommand(command) self.failUnlessEqual(expected, result) ######################## # Test executeCommand() ######################## def testExecuteCommand_001(self): """ Execute a command that should succeed, no arguments, returnOutput=False Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_002(self): """ Execute a command that should succeed, one argument, returnOutput=False Command-line: python -V """ command=[sys.executable, ] args=["-V", ] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_003(self): """ Execute a command that should succeed, two arguments, returnOutput=False Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" """ command=[sys.executable, ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", ] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_004(self): """ Execute a command that should succeed, three arguments, returnOutput=False Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first """ command=[sys.executable, ] args=["-c", 
"import sys; print sys.argv[1:]; sys.exit(0)", "first", ] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_005(self): """ Execute a command that should succeed, four arguments, returnOutput=False Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second """ command=[sys.executable, ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_006(self): """ Execute a command that should fail, returnOutput=False Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" """ command=[sys.executable, ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", ] (result, output) = executeCommand(command, args, returnOutput=False) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_007(self): """ Execute a command that should fail, more arguments, returnOutput=False Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second """ command=[sys.executable, ] args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=False) self.failIfEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_008(self): """ Execute a command that should succeed, no arguments, returnOutput=True Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_009(self): """ Execute a command that should succeed, one argument, returnOutput=True Command-line: python -V """ command=[sys.executable, ] args=["-V", ] 
(result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnless(output[0].startswith("Python")) def testExecuteCommand_010(self): """ Execute a command that should succeed, two arguments, returnOutput=True Command-line: python -c "import sys; print ''; sys.exit(0)" """ command=[sys.executable, ] args=["-c", "import sys; print ''; sys.exit(0)", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_011(self): """ Execute a command that should succeed, three arguments, returnOutput=True Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first """ command=[sys.executable, ] args=["-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) def testExecuteCommand_012(self): """ Execute a command that should succeed, four arguments, returnOutput=True Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second """ command=[sys.executable, ] args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failUnlessEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_013(self): """ Execute a command that should fail, returnOutput=True Command-line: python -c "import sys; print ''; sys.exit(1)" """ command=[sys.executable, ] args=["-c", "import sys; print ''; sys.exit(1)", ] (result, output) = 
executeCommand(command, args, returnOutput=True) self.failIfEqual(0, result) self.failUnlessEqual(1, len(output)) self.failUnlessEqual(os.linesep, output[0]) def testExecuteCommand_014(self): """ Execute a command that should fail, more arguments, returnOutput=True Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second """ command=[sys.executable, ] args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ] (result, output) = executeCommand(command, args, returnOutput=True) self.failIfEqual(0, result) self.failUnlessEqual(2, len(output)) self.failUnlessEqual("first%s" % os.linesep, output[0]) self.failUnlessEqual("second%s" % os.linesep, output[1]) def testExecuteCommand_015(self): """ Execute a command that should succeed, no arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: echo """ if platformHasEcho(): command=["echo", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_016(self): """ Execute a command that should succeed, one argument, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. Command-line: python -V """ command=[sys.executable, "-V", ] args=[] (result, output) = executeCommand(command, args, returnOutput=False) self.failUnlessEqual(0, result) self.failUnlessEqual(None, output) def testExecuteCommand_017(self): """ Execute a command that should succeed, two arguments, returnOutput=False Do this all bundled into the command list, just to check that this works as expected. 
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_018(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_019(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_020(self):
      """
      Execute a command that should fail, returnOutput=False

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_021(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_022(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=True)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(1, len(output))
         self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_023(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -V
      """
      command=[sys.executable, "-V"]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnless(output[0].startswith("Python"))

   def testExecuteCommand_024(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print ''; sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print ''; sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_025(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_026(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_027(self):
      """
      Execute a command that should fail, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print ''; sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print ''; sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_028(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_030(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False,
      ignoring stderr.
      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(None, output)

   def testExecuteCommand_031(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False,
      ignoring stderr.
      Command-line: python -V
      """
      command=[sys.executable, ]
      args=["-V", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_032(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False,
      ignoring stderr.
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_033(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False,
      ignoring stderr.
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_034(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False,
      ignoring stderr.
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_035(self):
      """
      Execute a command that should fail, returnOutput=False, ignoring stderr.
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_036(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False,
      ignoring stderr.
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_037(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True,
      ignoring stderr.
      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(1, len(output))
         self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_038(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True,
      ignoring stderr.
      Command-line: python -V
      """
      command=[sys.executable, ]
      args=["-V", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(0, len(output))

   def testExecuteCommand_039(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True,
      ignoring stderr.
      Command-line: python -c "import sys; print ''; sys.exit(0)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print ''; sys.exit(0)", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_040(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True,
      ignoring stderr.
      Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_041(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True,
      ignoring stderr.
      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_042(self):
      """
      Execute a command that should fail, returnOutput=True, ignoring stderr.
      Command-line: python -c "import sys; print ''; sys.exit(1)"
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print ''; sys.exit(1)", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_043(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True,
      ignoring stderr.
      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second
      """
      command=[sys.executable, ]
      args=["-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_044(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(None, output)

   def testExecuteCommand_045(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -V
      """
      command=[sys.executable, "-V", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_046(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.
      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_047(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_048(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_049(self):
      """
      Execute a command that should fail, returnOutput=False, ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_050(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print sys.argv[1:]; sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print sys.argv[1:]; sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=False, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(None, output)

   def testExecuteCommand_051(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=True,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
         self.failUnlessEqual(0, result)
         self.failUnlessEqual(1, len(output))
         self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_052(self):
      """
      Execute a command that should succeed, one argument, returnOutput=True,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -V
      """
      command=[sys.executable, "-V"]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(0, len(output))

   def testExecuteCommand_053(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=True,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print ''; sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print ''; sys.exit(0)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_054(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=True,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_055(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=True,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failUnlessEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_056(self):
      """
      Execute a command that should fail, returnOutput=True, ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print ''; sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print ''; sys.exit(1)", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_057(self):
      """
      Execute a command that should fail, more arguments, returnOutput=True,
      ignoring stderr.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ]
      args=[]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      self.failIfEqual(0, result)
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_058(self):
      """
      Execute a command that should succeed, no arguments, returnOutput=False,
      using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: echo
      """
      if platformHasEcho():
         command=["echo", ]
         args=[]
         filename = self.getTempfile()
         outputFile = open(filename, "w")
         try:
            result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
         finally:
            outputFile.close()
         self.failUnlessEqual(0, result)
         self.failUnless(os.path.exists(filename))
         output = open(filename).readlines()
         self.failUnlessEqual(1, len(output))
         self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_059(self):
      """
      Execute a command that should succeed, one argument, returnOutput=False,
      using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.
      Command-line: python -V
      """
      command=[sys.executable, "-V"]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnless(output[0].startswith("Python"))

   def testExecuteCommand_060(self):
      """
      Execute a command that should succeed, two arguments, returnOutput=False,
      using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print ''; sys.exit(0)"
      """
      command=[sys.executable, "-c", "import sys; print ''; sys.exit(0)", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_061(self):
      """
      Execute a command that should succeed, three arguments, returnOutput=False,
      using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % (sys.argv[1]); sys.exit(0)" first
      """
      command=[sys.executable, "-c", "import sys; print '%s' % (sys.argv[1]); sys.exit(0)", "first", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])

   def testExecuteCommand_062(self):
      """
      Execute a command that should succeed, four arguments, returnOutput=False,
      using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)" first second
      """
      command=[sys.executable, "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(0)", "first", "second", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failUnlessEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_063(self):
      """
      Execute a command that should fail, returnOutput=False, using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print ''; sys.exit(1)"
      """
      command=[sys.executable, "-c", "import sys; print ''; sys.exit(1)", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failIfEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(1, len(output))
      self.failUnlessEqual(os.linesep, output[0])

   def testExecuteCommand_064(self):
      """
      Execute a command that should fail, more arguments, returnOutput=False,
      using outputFile.

      Do this all bundled into the command list, just to check that this works
      as expected.

      Command-line: python -c "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)" first second
      """
      command=[sys.executable, "-c", "import sys; print '%s' % sys.argv[1]; print '%s' % sys.argv[2]; sys.exit(1)", "first", "second", ]
      args=[]
      filename = self.getTempfile()
      outputFile = open(filename, "w")
      try:
         result = executeCommand(command, args, returnOutput=False, outputFile=outputFile)[0]
      finally:
         outputFile.close()
      self.failIfEqual(0, result)
      self.failUnless(os.path.exists(filename))
      output = open(filename).readlines()
      self.failUnlessEqual(2, len(output))
      self.failUnlessEqual("first%s" % os.linesep, output[0])
      self.failUnlessEqual("second%s" % os.linesep, output[1])

   def testExecuteCommand_065(self):
      """
      Execute a command with a huge amount of output all on stdout.  The output
      should contain only data on stdout, and ignoreStderr should be True.

      This test helps confirm that the function doesn't hang when there is
      either a lot of data or a lot of data to ignore.
""" lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stdout", ] args = [] filename = self.getTempfile() outputFile = open(filename, "w") try: result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0] finally: outputFile.close() self.failUnlessEqual(0, result) length = 0 contents = open(filename) for i in contents: length += 1 self.failUnlessEqual(100000, length) def testExecuteCommand_066(self): """ Execute a command with a huge amount of output all on stdout. The output should contain only data on stdout, and ignoreStderr should be False. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stdout", ] args = [] filename = self.getTempfile() outputFile = open(filename, "w") try: result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0] finally: outputFile.close() self.failUnlessEqual(0, result) length = 0 contents = open(filename) for i in contents: length += 1 self.failUnlessEqual(100000, length) def testExecuteCommand_067(self): """ Execute a command with a huge amount of output all on stdout. The output should contain only data on stderr, and ignoreStderr should be True. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. 
""" lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stderr", ] args = [] filename = self.getTempfile() outputFile = open(filename, "w") try: result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0] finally: outputFile.close() self.failUnlessEqual(0, result) length = 0 contents = open(filename) for i in contents: length += 1 self.failUnlessEqual(0, length) def testExecuteCommand_068(self): """ Execute a command with a huge amount of output all on stdout. The output should contain only data on stdout, and ignoreStderr should be False. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "stderr", ] args = [] filename = self.getTempfile() outputFile = open(filename, "w") try: result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0] finally: outputFile.close() self.failUnlessEqual(0, result) length = 0 contents = open(filename) for i in contents: length += 1 self.failUnlessEqual(100000, length) def testExecuteCommand_069(self): """ Execute a command with a huge amount of output all on stdout. The output should contain data on stdout and stderr, and ignoreStderr should be True. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. 
""" lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "both", ] args = [] filename = self.getTempfile() outputFile = open(filename, "w") try: result = executeCommand(command, args, ignoreStderr=True, returnOutput=False, outputFile=outputFile)[0] finally: outputFile.close() self.failUnlessEqual(0, result) length = 0 contents = open(filename) for i in contents: length += 1 self.failUnlessEqual(100000, length) def testExecuteCommand_070(self): """ Execute a command with a huge amount of output all on stdout. The output should contain data on stdout and stderr, and ignoreStderr should be False. This test helps confirm that the function doesn't hang when there is either a lot of data or a lot of data to ignore. """ lotsoflines = self.resources['lotsoflines.py'] command=[sys.executable, lotsoflines, "both", ] args = [] filename = self.getTempfile() outputFile = open(filename, "w") try: result = executeCommand(command, args, ignoreStderr=False, returnOutput=False, outputFile=outputFile)[0] finally: outputFile.close() self.failUnlessEqual(0, result) length = 0 contents = open(filename) for i in contents: length += 1 self.failUnlessEqual(100000*2, length) #################### # Test encodePath() #################### def testEncodePath_002(self): """ Test with a simple string, empty. """ path = "" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_003(self): """ Test with an simple string, an ascii word. """ path = "whatever" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_004(self): """ Test with simple string, a complete path. """ path = "/usr/share/doc/xmltv/README.Debian" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_005(self): """ Test with simple string, a non-ascii path. 
""" path = "\xe2\x99\xaa\xe2\x99\xac" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_006(self): """ Test with a simple string, empty. """ path = u"" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_007(self): """ Test with an simple string, an ascii word. """ path = u"whatever" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_008(self): """ Test with simple string, a complete path. """ path = u"/usr/share/doc/xmltv/README.Debian" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) self.failUnlessEqual(path, safePath) def testEncodePath_009(self): """ Test with simple string, a non-ascii path. The result is different for a UTF-8 encoding than other non-ANSI encodings. However, opening the original path and then the encoded path seems to result in the exact same file on disk, so the test is valid. """ encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() if not platformCygwin() and encoding != 'mbcs' and encoding.find("ANSI") != 0: # test can't work on some filesystems path = u"\xe2\x99\xaa\xe2\x99\xac" safePath = encodePath(path) self.failUnless(isinstance(safePath, str)) if encoding.upper() == "UTF-8": # apparently, some platforms have "utf-8", some have "UTF-8" self.failUnlessEqual('\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac', safePath) else: self.failUnlessEqual("\xe2\x99\xaa\xe2\x99\xac", safePath) ##################### # Test convertSize() ###################### def testConvertSize_001(self): """ Test valid conversion from bytes to bytes. """ fromUnit = UNIT_BYTES toUnit = UNIT_BYTES size = 10.0 result = convertSize(size, fromUnit, toUnit) self.failUnlessEqual(result, size) def testConvertSize_002(self): """ Test valid conversion from sectors to bytes and back. 
""" fromUnit = UNIT_SECTORS toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.failUnlessEqual(10*2048, result1) result2 = convertSize(result1, toUnit, fromUnit) self.failUnlessEqual(result2, size) def testConvertSize_003(self): """ Test valid conversion from kbytes to bytes and back. """ fromUnit = UNIT_KBYTES toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.failUnlessEqual(10*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.failUnlessEqual(result2, size) def testConvertSize_004(self): """ Test valid conversion from mbytes to bytes and back. """ fromUnit = UNIT_MBYTES toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.failUnlessEqual(10*1024*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.failUnlessEqual(result2, size) def testConvertSize_005(self): """ Test valid conversion from gbytes to bytes and back. """ fromUnit = UNIT_GBYTES toUnit = UNIT_BYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.failUnlessEqual(10*1024*1024*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.failUnlessEqual(result2, size) def testConvertSize_006(self): """ Test valid conversion from mbytes to kbytes and back. """ fromUnit = UNIT_MBYTES toUnit = UNIT_KBYTES size = 10 result1 = convertSize(size, fromUnit, toUnit) self.failUnlessEqual(size*1024, result1) result2 = convertSize(result1, toUnit, fromUnit) self.failUnlessEqual(result2, size) def testConvertSize_007(self): """ Test with an invalid from unit (None). """ fromUnit = None toUnit = UNIT_BYTES size = 10 self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_008(self): """ Test with an invalid from unit. 
""" fromUnit = 333 toUnit = UNIT_BYTES size = 10 self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_009(self): """ Test with an invalid to unit (None) """ fromUnit = UNIT_BYTES toUnit = None size = 10 self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_010(self): """ Test with an invalid to unit. """ fromUnit = UNIT_BYTES toUnit = "ken" size = 10 self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_011(self): """ Test with an invalid quantity (None) """ fromUnit = UNIT_BYTES toUnit = UNIT_BYTES size = None self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit) def testConvertSize_012(self): """ Test with an invalid quantity (not a floating point). """ fromUnit = UNIT_BYTES toUnit = UNIT_BYTES size = "blech" self.failUnlessRaises(ValueError, convertSize, size, fromUnit, toUnit) #################### # Test nullDevice() ##################### def testNullDevice_001(self): """ Test that the function behaves sensibly on Windows and non-Windows platforms. 
""" device = nullDevice() if platformWindows(): self.failUnlessEqual("NUL", device.upper()) else: self.failUnlessEqual("/dev/null", device) ###################### # Test displayBytes() ###################### def testDisplayBytes_001(self): """ Test display for a positive value < 1 KB """ bytes = 12 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("12 bytes", result) result = displayBytes(bytes, 3) self.failUnlessEqual("12 bytes", result) def testDisplayBytes_002(self): """ Test display for a negative value < 1 KB """ bytes = -12 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("-12 bytes", result) result = displayBytes(bytes, 3) self.failUnlessEqual("-12 bytes", result) def testDisplayBytes_003(self): """ Test display for a positive value = 1kB """ bytes = 1024 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("1.00 kB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("1.000 kB", result) def testDisplayBytes_004(self): """ Test display for a positive value >= 1kB """ bytes = 5678 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("5.54 kB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("5.545 kB", result) def testDisplayBytes_005(self): """ Test display for a negative value >= 1kB """ bytes = -5678 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("-5.54 kB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("-5.545 kB", result) def testDisplayBytes_006(self): """ Test display for a positive value = 1MB """ bytes = 1024.0 * 1024.0 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("1.00 MB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("1.000 MB", result) def testDisplayBytes_007(self): """ Test display for a positive value >= 1MB """ bytes = 72372224 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("69.02 MB", result) result = 
displayBytes(bytes, 3) self.failUnlessEqual("69.020 MB", result) def testDisplayBytes_008(self): """ Test display for a negative value >= 1MB """ bytes = -72372224.0 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("-69.02 MB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("-69.020 MB", result) def testDisplayBytes_009(self): """ Test display for a positive value = 1GB """ bytes = 1024.0 * 1024.0 * 1024.0 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("1.00 GB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("1.000 GB", result) def testDisplayBytes_010(self): """ Test display for a positive value >= 1GB """ bytes = 4.4 * 1024.0 * 1024.0 * 1024.0 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("4.40 GB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("4.400 GB", result) def testDisplayBytes_011(self): """ Test display for a negative value >= 1GB """ bytes = -1234567891011 # pylint: disable=W0622 result = displayBytes(bytes) self.failUnlessEqual("-1149.78 GB", result) result = displayBytes(bytes, 3) self.failUnlessEqual("-1149.781 GB", result) def testDisplayBytes_012(self): """ Test display with an invalid quantity (None). """ bytes = None # pylint: disable=W0622 self.failUnlessRaises(ValueError, displayBytes, bytes) def testDisplayBytes_013(self): """ Test display with an invalid quantity (not a floating point). """ bytes = "ken" # pylint: disable=W0622 self.failUnlessRaises(ValueError, displayBytes, bytes) ######################### # Test deriveDayOfWeek() ######################### def testDeriveDayOfWeek_001(self): """ Test for valid day names. 
""" self.failUnlessEqual(0, deriveDayOfWeek("monday")) self.failUnlessEqual(1, deriveDayOfWeek("tuesday")) self.failUnlessEqual(2, deriveDayOfWeek("wednesday")) self.failUnlessEqual(3, deriveDayOfWeek("thursday")) self.failUnlessEqual(4, deriveDayOfWeek("friday")) self.failUnlessEqual(5, deriveDayOfWeek("saturday")) self.failUnlessEqual(6, deriveDayOfWeek("sunday")) def testDeriveDayOfWeek_002(self): """ Test for invalid day names. """ self.failUnlessEqual(-1, deriveDayOfWeek("bogus")) ####################### # Test isStartOfWeek() ####################### def testIsStartOfWeek001(self): """ Test positive case. """ day = time.localtime().tm_wday if day == 0: result = isStartOfWeek("monday") elif day == 1: result = isStartOfWeek("tuesday") elif day == 2: result = isStartOfWeek("wednesday") elif day == 3: result = isStartOfWeek("thursday") elif day == 4: result = isStartOfWeek("friday") elif day == 5: result = isStartOfWeek("saturday") elif day == 6: result = isStartOfWeek("sunday") self.failUnlessEqual(True, result) def testIsStartOfWeek002(self): """ Test negative case. """ day = time.localtime().tm_wday if day == 0: result = isStartOfWeek("friday") elif day == 1: result = isStartOfWeek("saturday") elif day == 2: result = isStartOfWeek("sunday") elif day == 3: result = isStartOfWeek("monday") elif day == 4: result = isStartOfWeek("tuesday") elif day == 5: result = isStartOfWeek("wednesday") elif day == 6: result = isStartOfWeek("thursday") self.failUnlessEqual(False, result) ############################# # Test buildNormalizedPath() ############################# def testBuildNormalizedPath001(self): """ Test for a None path. """ self.failUnlessRaises(ValueError, buildNormalizedPath, None) def testBuildNormalizedPath002(self): """ Test for an empty path. """ path = "" expected = "" actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath003(self): """ Test for "." """ path = "." 
expected = "_" actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath004(self): """ Test for ".." """ path = ".." expected = "_." actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath005(self): """ Test for "..........." """ path = ".........." expected = "_........." actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath006(self): """ Test for "/" """ path = "/" expected = "-" actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath007(self): """ Test for "\\" """ path = "\\" expected = "-" actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath008(self): """ Test for "/." """ path = "/." expected = "_" actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath009(self): """ Test for "/.." """ path = "/.." expected = "_." actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath010(self): """ Test for "/..." """ path = "/..." expected = "_.." actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath011(self): r""" Test for "\." """ path = r"\." expected = "_" actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath012(self): r""" Test for "\.." """ path = r"\.." expected = "_." actual = buildNormalizedPath(path) self.failUnlessEqual(expected, actual) def testBuildNormalizedPath013(self): r""" Test for "\..." """ path = r"\..." expected = "_.." 
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath014(self):
      """
      Test for "/var/log/apache/httpd.log.1"
      """
      path = "/var/log/apache/httpd.log.1"
      expected = "var-log-apache-httpd.log.1"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath015(self):
      """
      Test for "var/log/apache/httpd.log.1"
      """
      path = "var/log/apache/httpd.log.1"
      expected = "var-log-apache-httpd.log.1"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath016(self):
      """
      Test for "\\var/log/apache\\httpd.log.1"
      """
      path = "\\var/log/apache\\httpd.log.1"
      expected = "var-log-apache-httpd.log.1"
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)

   def testBuildNormalizedPath017(self):
      """
      Test for "/Big Nasty Base Path With Spaces/something/else/space s/file. log .2 ."
      """
      path = "/Big Nasty Base Path With Spaces/something/else/space s/file. log .2 ."
      expected = "Big_Nasty_Base_Path_With_Spaces-something-else-space_s-file.__log___.2_."
      actual = buildNormalizedPath(path)
      self.failUnlessEqual(expected, actual)


   ##########################
   # Test splitCommandLine()
   ##########################

   def testSplitCommandLine_001(self):
      """
      Test for a None command line.
      """
      commandLine = None
      self.failUnlessRaises(ValueError, splitCommandLine, commandLine)

   def testSplitCommandLine_002(self):
      """
      Test for an empty command line.
      """
      commandLine = ""
      result = splitCommandLine(commandLine)
      self.failUnlessEqual([], result)

   def testSplitCommandLine_003(self):
      """
      Test for a command line with no quoted arguments.
      """
      commandLine = "cback --verbose stage store purge"
      result = splitCommandLine(commandLine)
      self.failUnlessEqual(["cback", "--verbose", "stage", "store", "purge", ], result)

   def testSplitCommandLine_004(self):
      """
      Test for a command line with double-quoted arguments.
""" commandLine = 'cback "this is a really long double-quoted argument"' result = splitCommandLine(commandLine) self.failUnlessEqual(["cback", "this is a really long double-quoted argument", ], result) def testSplitCommandLine_005(self): """ Test for a command line with single-quoted arguments. """ commandLine = "cback 'this is a really long single-quoted argument'" result = splitCommandLine(commandLine) self.failUnlessEqual(["cback", "'this", "is", "a", "really", "long", "single-quoted", "argument'", ], result) ######################### # Test dereferenceLink() ######################### def testDereferenceLink_001(self): """ Test for a path that is a link, absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "link002"]) if platformSupportsLinks(): expected = "file002" else: expected = path actual = dereferenceLink(path, absolute=False) self.failUnlessEqual(expected, actual) def testDereferenceLink_002(self): """ Test for a path that is a link, absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "link002"]) if platformSupportsLinks(): expected = self.buildPath(["tree10", "file002"]) else: expected = path actual = dereferenceLink(path) self.failUnlessEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.failUnlessEqual(expected, actual) def testDereferenceLink_003(self): """ Test for a path that is a file (not a link), absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "file001"]) expected = path actual = dereferenceLink(path, absolute=False) self.failUnlessEqual(expected, actual) def testDereferenceLink_004(self): """ Test for a path that is a file (not a link), absolute=true. 
""" self.extractTar("tree10") path = self.buildPath(["tree10", "file001"]) expected = path actual = dereferenceLink(path) self.failUnlessEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.failUnlessEqual(expected, actual) def testDereferenceLink_005(self): """ Test for a path that is a directory (not a link), absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "dir001"]) expected = path actual = dereferenceLink(path, absolute=False) self.failUnlessEqual(expected, actual) def testDereferenceLink_006(self): """ Test for a path that is a directory (not a link), absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "dir001"]) expected = path actual = dereferenceLink(path) self.failUnlessEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.failUnlessEqual(expected, actual) def testDereferenceLink_007(self): """ Test for a path that does not exist, absolute=false. """ self.extractTar("tree10") path = self.buildPath(["tree10", "blech"]) expected = path actual = dereferenceLink(path, absolute=False) self.failUnlessEqual(expected, actual) def testDereferenceLink_008(self): """ Test for a path that does not exist, absolute=true. """ self.extractTar("tree10") path = self.buildPath(["tree10", "blech"]) expected = path actual = dereferenceLink(path) self.failUnlessEqual(expected, actual) actual = dereferenceLink(path, absolute=True) self.failUnlessEqual(expected, actual) ################################### # Test parseCommaSeparatedString() ################################### def testParseCommaSeparatedString_001(self): """ Test parseCommaSeparatedString() for a None string. """ actual = parseCommaSeparatedString(None) self.failUnlessEqual(None, actual) def testParseCommaSeparatedString_002(self): """ Test parseCommaSeparatedString() for an empty string. 
""" actual = parseCommaSeparatedString("") self.failUnlessEqual([], actual) def testParseCommaSeparatedString_003(self): """ Test parseCommaSeparatedString() for a string with one value. """ actual = parseCommaSeparatedString("ken") self.failUnlessEqual(["ken", ], actual) def testParseCommaSeparatedString_004(self): """ Test parseCommaSeparatedString() for a string with multiple values, no spaces. """ actual = parseCommaSeparatedString("a,b,c") self.failUnlessEqual(["a", "b", "c", ], actual) def testParseCommaSeparatedString_005(self): """ Test parseCommaSeparatedString() for a string with multiple values, with spaces. """ actual = parseCommaSeparatedString("a, b, c") self.failUnlessEqual(["a", "b", "c", ], actual) def testParseCommaSeparatedString_006(self): """ Test parseCommaSeparatedString() for a string with multiple values, worst-case kind of value. """ actual = parseCommaSeparatedString(" one, two,three, four , five , six, seven,,eight ,") self.failUnlessEqual(["one", "two", "three", "four", "five", "six", "seven", "eight", ], actual) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestUnorderedList, 'test'), unittest.makeSuite(TestAbsolutePathList, 'test'), unittest.makeSuite(TestObjectTypeList, 'test'), unittest.makeSuite(TestRestrictedContentList, 'test'), unittest.makeSuite(TestRegexMatchList, 'test'), unittest.makeSuite(TestRegexList, 'test'), unittest.makeSuite(TestDirectedGraph, 'test'), unittest.makeSuite(TestPathResolverSingleton, 'test'), unittest.makeSuite(TestDiagnostics, 'test'), unittest.makeSuite(TestFunctions, 'test'), )) ######################################################################## # Module entry point 
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()
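The displayBytes() assertions above imply a simple scale-by-1024 rule: bytes are shown as an integer, and larger quantities are divided down and shown with a configurable number of decimal places.  As an illustrative sketch only (the function name, argument names, and rounding behavior here are assumptions inferred from the test expectations, not the actual CedarBackup2 implementation):

```python
# Sketch of a displayBytes()-style formatter; behavior inferred from the
# assertions in the test suite, NOT the actual CedarBackup2 implementation.
def display_bytes(quantity, digits=2):
    if quantity is None:
        raise ValueError("Quantity must be provided.")
    quantity = float(quantity)  # raises ValueError for non-numeric input
    for unit in ["bytes", "kB", "MB", "GB"]:
        if abs(quantity) < 1024.0 or unit == "GB":
            break
        quantity /= 1024.0  # scale down until the value fits the unit
    if unit == "bytes":
        return "%d bytes" % int(quantity)
    return ("%%.%df %%s" % digits) % (quantity, unit)
```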
_:5*oqtway+ioZ?sp˾ο86|Q /6laݥWgluX},p>t`&ٴeSZ(8֟g`rmŔދ>j_oH:bv85;OHMi/N4K&39C[$epE Oȏ `r47OV5!^J>Q)e+p!ԼΥE5^כ+E㯕㿅LPJ.r%,o0&I*3۔&b^P92y$ӪF/4=&/,o]d_NxS*6NH^vIюγx{JM eoi'W%$!m3x* p [_Kv13}eSg?s\&UXsdVy<=*,3 Ow!PA)G^R~(ۧ)W~D'QkO\2<4{,G>ꩌ9XNE䃃Z(?/ct3|2~Ը1|4R2?CJ+oڃ^ps7{z/7Usv5{SV(e7B*%B'K#pdR0=. ˞z%ُb d3/w7~ߑB=NN|_R6(N3G8NgtnjӇYȠEo,R=E 2U EĆĩliP&Jo:nCN'͒A龜-9DBY`3jy40sȘ6A)y@QE\xHuQmb웼/2^](;mP\j"nA> *Ei{tZZ 8ųA> oS<f!lrJ"f1Jύծi{Υ6p]d&jw8gPgA4&o^׀Xe*9tlG,F}ps6t w4PMB(fz ͇v:#Q `$Dȷ䐈ig}VN(%c6ycE6ݯ}6ǿe&z(Շg5;gx&w?NKre;JlZې;D>﵇O@J!|x!{GXf}6S{AhY^J:vhn/RVUݒ#+?"]͹.5]@71—⬨/BE;(>m*(W\_U/u=>sIh=^% x*bn'8 lytc;: }}_S|*4Ɏff)'ȫkǼF 6 _ZY my˅ȥ]}uU]LvP1&?d A0]ZN=T= g\roK;^8 =DB6c&7#)W$4Пk6* =فFRn#d+1$f%2fyn{Z|V[2WBP9bP pFFMX0l+MbOH!﩯LFaȼkhlxz4"3Wc<#a=ac8,k¸hx9Af*}BM'_ΐ-պ{|āDx9qYH;qr"N5#f 8]":S8?Fms9ױm[ ^) 9u;LK>  T33LhM`C,M:1$t)Qn-)bF)[#*ӄ&L byC 1I=nDhqT>iiRmқ榀#)G^5MZɮG(Xlsǫ4 6lj*[4<У([,Z8D58FVŚ},mH.ӀoTObHцuQwTШ<A,FA~1֨¾.aFVh޴5.LˌY;w#G[1ߙ,jp2z̪NYJ_3A-'eʞ"(_Jbɓ4?-jX$=oipA z<2h?@2əYGI-HQ %(2tuP_5ONЏRz|ҟŲNטKL<=J%k`&?`Dz Rj^_.V[iʄVrE!h>g v*l# '.k)u 9gWXWG<jpeq$LS=(I#x:-˦Yg9??$E"X^6'パUx &" Gth-{lP  7}x#{s} }FqX<ȻFM!wzwU\!1e|YeDAۮ+`93hKfQ:{]MQ%4LpdDžٸ6r0EB]mԖVSG~t57'2j>"UQ.3x= iq]!hqw9ײ: Z:Mϣ12 nlD6iyRkkb D8'1خ$nE橼3Ȩ+TÙv^Kjd;`ۦtm;ꓶAAA_m OgMs /,!B}̉h o4Mj?c(h]({ٓ PECC[ۘWs0aڥ^;ިyes#$LбKUQ7m~䍄W3sHbj;JF>8UNT$5dɗRd?Zf,Z+kN~7;$!֥pҴT-V+iJ=g7!ۧ! 8өz  DAƈrQw)V>q{4;Wي%Q W9O8P4֮e9c$QBsaC- I4m=ß2011n pHCTi+uGMCBiW}`,Ћ|pB,a \b~NOTH#WXd]L80'! ]3Y>*xKx!*o(ϡ'PiS=+UZQl4gWӒ@Sl$G ϐ9h]鱀eO@1_XYayٚ3s]&,h5S,1{yo $;dWsnQ$Vا/>1d-8v凊,M 猈܇n&}0MA9sb!$A8Q!QK4Cu?4\6!&uΡWA4!lm9\"Y k^Oфvmi=5W7[]Q2T"2Ⱥjw=:6E6!1t.~QsVكmQ@;ӺK.!@a`_q{T˟_vX%%!':<5>`K,U4ddd`J7JьYJ/Ofc+;mm/ zpC9$hI] eBh+Ό('ˠ ҕXVCbn#$&7ZޱnšdWb}@ :4bZ$ !Z eI܍ *Mg P?_ { } gdCO=YNaw`ݪ`R힄t X,,Ȕ5±sc/sn%w IXq/w"9b8AӧNOGZndQD_{$.7Y {ߎ (ﮇ2IO-ͪE@/"%ɯ7pϿ u6: LNfZYa. < ~r}90ýu x@D_"B ϻ = h% tagDq\}/ˆ)]/n(|QZp? 
Uz:, Qǘ1$-xO$p.*sGD{ ꗿR;=l єi&Ef33Q+S<Omk{V2HDgjSRQ";yH+eN7dQF5D qTBZfxۚUf V{(yy'cPN"]2o!tǦIW sNdE}2@s!ȸ!O.Hc^%nůn*N=YL9kpA {ĥ3eqYvvQ_68)Y4.5y.гn(qe;RPx#fϗ--[YXD ;eVi8L>N"M= (za!4p2Zt%ȫvס:`ABCtf$Ba рSF.70wS'qócK3[bbY#?-V q0 ZDOQf9*GCU$z+⬙bxpGg]iL :#Ud4ΟBH*xLǁ_lA$@İeU56%B=pu]V/[6 kѥiXMɧnCഇ4^0c.Dcnnȡ W'G}fO0k7np|\xW]kUᄀKykԞDϾ\*ήΛD\`ԓ%9ћMCd^.bۿu2M (핗g\̓.%=l݂y`ki49޳"Ϭrfrun`뢉|ʴٿd Ǒ7i5XV B(_ 2hAu7 }'- <\Yi>a݆ 2M]D.x-gU:Qg rHu/"og=nmL?qtAY:\۫d0U3^0o4VRί-v7O|$7?iamW[W.68 SN@QJGNE񠎣cs#~͂;1NA[r]=c>/m E}HXMZ ـm&$*5o“"БZeB`8v"U7S6"u p~i#@ܲJVzl)P\0I`"$0Po鸘W&56]e3>|Z+ߜ`; snaA?G4BzŽx?eg 6z_Y_붶QtDaBmhBbJ#MeNf\mȀ4 ,g¸sM-rE3ʊ&ޔ໛&– <]8=O$.)r8{6ph{6ܢ "BRJ|IثQ&}쌏 Ȓi[h,(8ͻUwp;h~̥ Imh{tWACp΅p6.P╦KS3y]LhqnBF612^<H-wҪUNAHoóeGct\6ݔM I70Q QPH71qF>S ]0O7.g a)TMP8] =P !3.)1q7V&H^f4hSdrfr>`$E'NyO_`ΝہszB RؒRc93pU/@.ׂo=F`5iڏ>;_t%LpXA,Tfl|颜_I,UwH-39pa,5e(O`\֨-׫p=kFɾ:QZtՌjU ̅ԻNqʩ帚k^) 8u&X`|A>m2Fg;R qh;Sya,;DAv q5晧R-nd xǫN͖āIP4oۈЬe8 Aϐc[٫K =Ȗ9gYV HDDO!MHH &"R)o)I&JfC[0 BP_`4ð{HHq}7zI}7xIÀ)QؘPJf]䲳2k=Zv"ev CO@6LKe{@;|*~,%|@jXx6X&Z1ۇ| ^HP{Zd3T+z3(ioAh "8B@7BHgdSwG8Dь|JzaC +rA ,DV'K|PZ-D9&J"piTnDj֯jn{5AZbM&z^ hG`:{x,E$A1\MƼ\=+;JTk7<{4{ 7;=Dj!^Z )@skt| t \䷖gGxWwWu3!u@Ͷ ;L֩w;+38]LX79U/v濁<+M%&a^,ǙZa. 
:¸zSN,5P' _*2Pl5^_9@RM p[G`dV $zOʂ_Pabm9VkHTy@tkk,D8k1waM|6E@MS?tě[ۑ-s?o.<[$€ې~7oIie%6, xxEi6 x9wɳhu7m9^mk=@hsLW;T]lg{YXo7[|cP J<8g-s|7)ͽy5MޅuN|/uB]oX¬Q mv$Kk 5s-o}Ot,vt^U8oԶն Jw 4!)l2J 0~rma6xlW2WI׉h+O0t{N`.r>N,BL7w"":GdeiLSNyX"fCx哈E1&Ƶ0CjtΦ<_,*l;'tO:3p"毿e}]N.axt7!w9ή:XB1g3AB* 8[0,0`࣪or@7 O^FMyCa?ga B'w 1-"3MuY=qM&4pFH8k3CN&?IEdMs p8 Pҥhޱǖֽ+:Bޝd '9pOG⯒m_L2q I N"I |nG>x 8 Zղ Mj\D:>|l@STU6e\,' tF:,ZL7Jڎ7 ;Uoo2 LWބ!j$9Occ)OtqVhh1't#e8DT5w6A5 2LxEr +g5b]mg dPND-I-CZ$8nɿCT7[>dnݝ3mB}R♪PeJ>رd ݣKcBWntmWH[:(\ϝZ=hFg>o__;d*>o^-[ßW*>IaǷH=AA<-~wШ}a+0,bJy20nɦR32ʦ}E@&s2a 2VHcBjɌ My:`n N2և:i #WQYWV=pWrEm]kf-gFZE,K]ёK\-d8$RtZPdZDݝU{b?N괮`(6kӹ\a"[sgF)h _#ڜ'y3cz_fAXnm3 (1Ps-Zs7u!x-(ƃ`qflsoѵRV \~XIl҃>paCc{*6;e%29aKwv?lv eyT U8h4OE vEZJq EMJ&{35_'gneM8˦,X <0pqjtCWq:YB U< ]-y\F%z2Q5{ͨc?8H"^N2nSS \NYXru۝(֪X\܊gǷ Q7KwAVE݁j7Kw&Z$ЩP6:x ֊{D;7E0df?5ȥ[NaR3"}ev,mwQ-j1i'(rVdKh N[c-ݥ9p¯9Mj;s:`xW(+mJwt\9{P?*(-0=X1uQ`,E'S}npNi+Dَ)-npl`>ݧр+-xlaD,ǒݤSHiUzO W6QׯnRr$)ry4.uҶU8`R^N"<HAc] aIN<9 nUv>Ӌs#[[Ipm`:: X(] M&w$,h8CS>٣$fEA=AIP^(3 kn[&9uv$0=O߈rЎq)<ɻ;Ii6fONpDŽUb,/;CxHy,Yjj@@`)3y}kk"-iRM[̼Oヺ=KR7AAso Ȅy?RɃ*;-F3-tOUhetn*Ocu 5,ODn 4}1.m 1f riKJmEsp-KU׶ZWΰrc,Z4ZEvGyjN$uS-)VLz[ UR&DЭ3"#1f ~7n3M1%C.'(\9@^.[1hq6M&:Ŧjבo nߘ$hHwtS#ݛ-+o1wSHo߽qEƢ%__hйm LU)+?E03tyB Y֚,0)'ѧMrD12tfگl,vO.7&nڮ,)O]dibX)EYCD~-1n`ʧj [=?K;EqR Wc35ÈvDٳ-+ඇ.y@mMm2E/HHĞh_û95YP&0w[Eٙm#cͱnGrz4npI96d*vFKU!EE!u?^dG=s;CK{L؎{s4@6=aYOD!~#Q>(Z4`OljWdO M gWnJBq%+,i"gx_3{p8?_}DC 0.nr~oǵ?bfs9Sqto;p~\rL[mCt9lGUV $Bús96q8c\-Dޕ?.8oϭ"ZA,rھq-N ΤշZZV2Kc,MMζȴjⵎ?rUG0M}dyNo[oxǒY4J~۵ΦƔPU&wٝfUDpy'-}vtV(wlVA)9m5?tbڨ/^Sእ-An$m#i&d-sNm;$FttJ B+"!Kgzy ;CfpOQ v11Dc$7l?9} SOU'䃡?Ѻ5' ֜b(c5'&6d@dl=Ӭ_ݏhqyRd PnߏkqL:.0j9\siz[r 0?D_u,5d$HrCV@@.d2׊&rPLg'K̩EXZ$QfZ R2ӶY1oFn4,!1&M%o!G!_{n^Ũ VUx$^:\SR@+beRV>@^b3iX3&$ MI e>x.fQ[s5}йssE+ӷjOOWCw^O%aN~L}.4tv" ج$<@ϛg.ѕ%y/p~Sx w>˞ LsORIy>~+n* )D~<*)Xx2#ZOT'ZVtv:bН"ZAr,(jsOP8&7qȇ:{RբtτW*7&àB`꼔 rbh؇ 닑QZ]ԻaAO篶&e^G? 
#̢axi ,u Uɾ {Z4?zT DTy ->y H=,EE$jul%e& Jăryjʏ٤5ﷻ[T)\I($|ݾ%oE[ΗKV{>h˕7]_c>*^ݷ`S~akC_/ .K3vN2 \s_RNb1ᅘ*@RM쪖{&*|Jñ{Ü~N}mxZ(v}OΌ-_+I%CKKO:q-TC jO_%uۧBǁP6䗽kY.EJUeE嚋zhRgQa:Mx~FWE4  5B'VU{2㿺-y!~g93~Xl.g9gD{lȃwDgѷK̜˒Ns/YzS/%yJ A5|(r2G?H E;dž",w{"M5 A(iyfH&@CXPJ;w*.k |'mj8q74 wϟ8G(?@ QhL 0l 45țL%ݧY+Ļ kT0/&Qz#斶 F"(#䆶 !MwB;ڼ gԿzHCq"BozMMI7t0 O$e_e:w6|'~cE`Y=!kS6viܨCaɒ4zi>;ٽF}/ǟL?ŝ+f]\E}E+c%cC6_}SguCIv$0-+oMMM =D 6|Y ϲb& m`Sn24i`SSԺK%lj]vl颦u},Xٍ'rW^]oGAL/:1D~cD@G[p !BئJun7&^s24n_ؒ4ǿE\tHrn`čiL|K˖r +sRLc ,i˅#Nϖ1z%T&cCbzxHp;' Y-f٪W!ߤ9U.]BT>_X:E5lAYC3lRD[&rZ̵QIV_+DVy-{a}V?eө#¸.D@i3fiGLXgZF} ;zZ]gďwǞ43+G8sdu[MWka$х <ب5KPR]gb b"2B!brv qB⟈b23Ab7v1bB1) wiHyGA$փ!3w2Ah9]*mmvw|k\ڮ?i5?J7>B̯^D(3;mKS/-O~7GV*=ptUv7 X!hXxSw#|̔( #,(?c_k>9M7H*7>Rjz_,$Y\q E 4:5WxC\_ LD)5i=\! "d[ŸL >mD>dLX0M;!q_2rtdѦV\W46bТ#UD nnvdta1\q {;ю-gAѾ@'Zt'[Y.kq3\;2A8d`XmLl%<[I- 6v60տY/`e~; vj&F/հQ׊ʵEg0ۭFBhkc#H`;a66JYR7(Axg 8'?M]qv>41xYd'~>Pk!pAh0T[Np5{Vb*]J(^GXل`X_ΕYC({mM%I&.nR/ g#ǤZYm T؋-+-[۶.\ߴm27rWvTg _"nυȶHhgdpPL1!(✔ [dWH828_[M9g Hw4-jd6|pMW|2b%R-qg7? ?D; IKB%-2l<"?MspP ?x;gyz!mLSMͶDՊ\vl*e/?܋$Y䨮^-`uZ !G76jr1btk6& &XT,\r {5 L wv]tWU*KU#WA `Л)T\'Κ\yY>T%zgW~$  +{cqw\ND˷OtGpK`' ƺ$p2G{geC|Lkhv,eݏZ1; 1eB&@2ENF _N9JjQvtNhPe͆ݼr-vbD(mиsx5uxwx:;mGEbO5\ަZަmn=aD/uɸHKIxMZav|@ɴĪxTCʵI~9OUPnV\r$b ޫ ,.,\Btݡ ͩ5wo﫶G8خ֕wa|Tib31Itu~wύSr`6? 
58i@d;Ɂ~LSIFL+P[DfР˫0˧\M@[JCÌo.cjP%@߀C%BYBɸyCa[9Fفh<.Lm{|ԟBDڿ&$2\JQBab\\L eDUateuSz:ي1c!L[9̑yVn9U> .ie,S"3oNj)|tzRY'S&Y?nBƇTtkL'T۴TkZf$w10+U#kQh*Taəۊ q'C3GmzCy$QN X}D.+R134"JUyMCP\z/5|Ή7&nDu)F9h3P\gGwfiED71^JG *аҌi񰿕s8HR3s^HWB@_nkIwmx^ B^QkmG;{՝F@D5w|);J;K!Yp jcsV-2Nst$_\Ќy|չw6]q3u$lghcuɲ>opFX㆑Ai7W%,-̳}q J#{~pY݂z/УFFQI?jWWlI3Q>DaFB "n,nF] o/t#bXF@dKL'N $'6یَ+2#3wٺ/1[{ :UkFJjK'己Gا ]W-k|*z0[Y 6Y s+cٟpJdɢ!Zg Zu'JcZfWnTw,) `7j{WN}u66M&W6^mD\:Κk/:gLln'<%"`1N\]*7RpjoR񰀖{j#;ꃵQݓuhewO3cJo0,c|_iϘ;ɦ׌;Wk}&5gQYL# s7΍ޠ\ bJo-N%bo\dpHc@F1A I1qKFsm0J +$#޺}HEwh=º2#^S]!IY~ ?M-'b;C ͕DI]@dv !tɻ*S m_(͡/dgٓp H&_`kʷh_dZ(ϏS:qW#o;|s=b&?#Bt*3"g?#||6>ZI'<"z$%w 刡uZHn8i> SBF3azxL|d I!Dܥ{w;oG`兓#pV|஼Sp 2b7C!&qv_㲤蓓!#CSd鏊觵@mGNj?O|VWYOo;/Ĕ{-ЩAw/ ilAo;N8b; W;1ҟwnN$q_Dz l}QZ_γRs'a^?ab=rs7>+: <6a},?ǑrEOZj+-n{}<O"Ro{> /?aq6m%:ݏ#brba3|;NL/YȉGOYHGUސ$ ԑ"uAeŧ1&GbXxU曎 d?~ C0)8d >_QJ.֞Q%z"NA_sp35M~'xJ/6dUpgȎ a#O4J,ܷڍM{֞- >ՠR`aL#T+2G*_ s!CA-wX,@<ְaTHW#덽]Mbb Y 5Q.C83% E+_s7EqONvT*CE:f6 hOȃUyº4B,]@RW/˯:0iʦeMzWX&%PnZ'zO2%Ӵ3E 7E_ϊ:󮉷3`N1g>mtD_*zеV&7a,@PI|-p8g qtsn>vZAA5DH[FrWySQ׃uyoÀA3W"K>cפ5Otb~:N,"8#40ugf5/ayC\paic_Z Bpޘpi -bX!廟 |wCkC'ySFK*ɲ˪s6lSf]0lN_ե}伥 ?0uACCpP'RsR|K; ZSq9b7u "ڑ5תSsl9"ϑ9+rY f>X:keg)me 0zѼdI@j6sDu1 a jdɊ> ⯋&[ [S%mPLoŚ5]iUPG-pf+&m@E(p]EIGfpU2Cm`PY<5YEbDl7 m|]%-~PWdƸ7x7z@KCV#jUUI+4ȋu,G)&7*|IjH6Li+i)&iY1N ZE;S8?>a>+CaP3ol'|f |-mzh2A͗֡}&sqr &0QU\Нe홃hzt "AaRFwCۋ+]@cb:".@L+ d3OB8^Jos=H7>Zݦ1Z-wXyFU)*[z"^hQ6=`Bd7'krjԝt7]F"A7&yK8zi`͋*x.^}#ǗFܝk3vya-?Uڠ~BhPm ioV!8op'.\CiO M ɜYtִ7qu޴NrJ{Gĺ*@]) n4ٞ39Qc;f * @ j>LgDQĆ6ZU(7jgΣVTğRr3qX9nX=UԘ8%\js&bK}tɱٺ63itKOSp\3Ft\noI]:xHZ'n5s؞Ll𢲞N$#=cc皴;ݦۋ^j>s╚q1lm*A0E۠2ӥtt߫뭼"'j H]*60,۞)ߞfKl'#.Yb& _yԫ^x& zW3}"?CjYGYTmu%~tw\~Gxv n_o]Gц S|9$qE_O@> ;tXDQn|P+eINYagg+rBO;^BWP/&yj#'Qps;=.ޤa_cj\gmHAsP0^ϹMB .XVmp)]^ieRW'is݅d'ҙ¾ҝP7j\s^?aaߋ7)UN:m|3B#}){G-tuGRZ@L ԙTk CdYK,fV)}WԷTm4ɔ}[_(%umwv\^= |7>i::ua^`? 
npq\4ʼ{Y9bof=æwkmPlwaq?l%Sx~!+.n[q/]kc׷.&b-kg~`dXmz$ʮv}a?`>ܰ&uy~(5)jM춚٠<1ZX,%ŒY_!Vݑ0ћ|g0ٚν[ď EG 7?כwy')+Yqdt8M/^u8x )OJ {:Q}*4l8@2^2LGRV^GGEE)-2tؗ(%CVޥʐ;e/;sN2xg4H|EpwZtU-y0e T%A բtwlq:vIҩJ[+b-ӨʶnQeT):SfrYUhR@%=\7^W+!&6N:M\㽢Lμ6~I{]caO|2^v h3V3JQO N97+DwDGֱʙJx.Rawm^?t\֑O{:VGA綠:b*@$wBV sOH/+ 3:Jfl'gp!l&(&0NwU!#Rk؃y-LFZIdsY?HuW_Q=_om $X0 7]xsEy:4ݍȎg$ce/܁8G÷AͦY؅n3|lc@;-d0x.YForNwR*9 LT+7 D]޽/-.\O=.Vu>'*p1 Ο4_)fR4ZE?c/Ps)f|8Jhvld \N/a 62Yoh/3]q@Oq.;=ϻ [ڇѶtj@pIp3Zk~'4ZՒDCvIj#XrA^4*!k-XPZ4~ lfrvlTF[v4>m>v﷋gxlhS6!O2]rCti?cw CI틹>œ2u"? +~ߛ'qS ;Z1ϻ=L3*.L3GB1L*SK O'1R˗v0,D,b .B(XާKj Y >G ^O嶛e+%#J#hIPullQ\`遜^CC3ZVD Dׯce{~g;TH3@>>䅺ِ)yP/&֒'=(6'ޠ-َ"V[A=?e!{Hז\/OƓy%c@Z;$@SpWB|o$oo@~G+n51 ob)okG#,6p"f _%&wO#slvkq'WQ]^OX:hi*Pru8/OV !ޞ{\\Mk684OT+X[\=^7]]6=TrGXZoĊ.0tKWCGwUtiFU}=df ?ںXU~[ΰ|`҆fJ8 ME,WQr嚛9=YRvji ;ɢY[6H㯤lV7k~@?#v@zjv6)F ܛ,)3 ς+2o3:5 vox`cB*R)Tc"uCVfWo#N'xX4B nxZ;MZJ Nѻ.^ѫ/tC@ /i{],GAU2FMdd"SȵOa`Xg*c+:_/'oXoyXX*čUO-H&DN܎@3%h)S_Pn8K@4pDUInx36p_Nhh'!]kdAd~8kc<$0:YeٻP:*~!#\AȀAKwKY:XT `M$\B&lYO}s ruQ*]ptZl5'#ÓgBqw;{)8p=b}npqhl/Oafച5m|uɊәЫ,˟Ů9]w]Ư/UTe+FTf?vw|rlw6(kk2 #sτޯwi Pu3"`Ix9|"&{lxIĢI #ޒdo>%m$Yn='[.@XUn)kOw݄LClp݆'`8~'* T׋e&2/2TqN{@5O51"h=,f$`3iLw ~7&:*ʀs9`~D-DͫW)-cb(ed~Ёv%=<X:dFz?4H%WH#pCC(ݥ\IXC@ Q-*\]+z@vFn{8 Eй.W|OsH6 =V:bzyKns|iYH |w#χ|\'! wV_ =U>B0DB'|?r^Y 7ݷzcYsp뼦Nί~ݥW u\_t/nm; pnΙ[[9Hw4VՠZSRgi0):Dy)-vzuHHa}%#$bSk~]@0./@nҁi+;{%ÑDǖĤD9΁ EEfV@pG<="%C 0z{(f ~a$3L*EQT}ݣ@.EW~[q? &cX f0Pb(^;J0Wd)Lve+ S~^(fWs4S9zLY2AQpNN2w.WQ WтMB䓨i'}飬j.n/]ߑLMD6!ûV5pn9ped)qCSQWDz7N`fllwA2 ]ÀG`eC>IR )e׉`rb.KyWʹ3h~DZT !ٖ 5FjGZ1$:YE7;\8@ 榍rǤ> B7fH ] EhfKCoaxDX1nl i]$oZ1"~4+xD. 
_jǯ*_C׿*ԡб AOȠ{v:2nۢ3AS_[roȱH17"IuC(pt7| qO.{t=!J(kߨC2 $ʁ#t;'e49Y=pJDV Tv|`O7Vs[^q-grCLGY^e3uYE~ofXz,s(eP%md_2Wc|J:I+(eodQVH$Cͱ.z/+>aG0 i^8O.2zRvN)dwgg7Uqjb0q)"p D+Ɇu+xĎơl{e&Nu5:"/qFIKgsT1QQkuvc{ˉSKvcH=q9lp.d;0T(N'Vc7s7m;,`) &db-D`3&O; 7~}DIC0mmO86uP;=,0 Bݝ7+l􀃞{g]0u}jm|-B1a%4k* Դ̩Hjq8W2'N OQe9 F=O $o$(PiV^FlK7ScO5 !;AB0-B>qoX#0-uv"DSCv+]dnhYHs/g!)9 *% `ghH@!# S{6L4ƮKj}&oILN}X_QtK9l쬭MgrB`Xn=@5-"D=#n5QRp*pDfS[ğw:n׫"Ϲ760ʻ)eʗuIO-\*KikՍ 6`=M-8G釋:rMDNIWOIEj/0uV]MDW֮*L"$~5QK׻ؗ^2"0wK͙kiZN6UX~\\N3X':RAx@V-iiX8s~(ttsX*1ӲhUzte2x5Ne~J{45b<%ˎ2CCxǤIfLڧNy#W8)hkA7>ÿ'm@ <(3@_ة6 vy7[-p.C0 bF#=RHGR'YƝZnBX[wvkG\B M#4KʆRQ@X+{xUR8O gyU@ Gt^oWY_m.okywbꀞg`Q֙d-XuBmmȖ? p8(ScBcS|֬E/ u&14OtHW?b!16A~" 5XTgc+|H]Cq .蘊|ס|ؐc+]3GΔSPE ¡tIUa`Lrm0b꾯=4UiMcVi C1s#rn5@ '*˞2} 킥yJ&S?IuZ۰YL(oߏX?1#'!/%({5*ttx)vǼ#"FؠM?!Zٳ2| qsRcUgc|qN?S0sxx RK`.hȃ=4,g 7.e~B4µ8bn8{D.D jSD}Ζ5"ߘJHŗخ+-VLBTRưӢ܃` .ߔOC;Qﶦg'pb>lq$2?O+(َE {)xB*VYRQ\'θZcrWgyiol/OVIMc؏C;<CbR9|X'|XsSv TR`9+.0$cqns+)Ǹ]!|yqP`T p Lbi@~1Av!D?R4XfV3]Q.7IrRHr.\Ii?$ T| "S; , ?/+ϭO7L(j50]߆o4.(GFI;67ٿȨR%|) N|7=G+R!r v/fPc X\g|20Y_f. Dଳu&ӝMNeuU l \EmYZh,Ϯ 񻨃\qb>PG*QyV7<!>j#2C'!d?%!J@gh ܨs+xtw`Ckƚ~FyF2~ܖ1y|E>A^`] Mm2iC)~^pd .^Qe))=bە/A"'wDr=MI*^\X;"L0 0DAJycpdP5 ^xW ,34 cО l-cKT$Rm:tl-tb@O1q6O>q6O9qW5y:?ӜNYSRBi| JB/Ye'^(5}N9Sӻ1ҒڰcﲙGSh)q۩%-i|Vϛ%4d^r_zMagnK+su~nH+]įOc`ћd!yO1wRڏO:_3pSLMb>s!G )pGV yVcE8Y.o6Ɏa%Y.{QZXn^WK'f&|BhUi[HG,tl(L[Y7-IXZzuQI$Ml"{ÚZBD$`T=m~Ш$=ݨ嗉R;LN;7| ™ G+܁ ?텪:#DŽS/`jgD u:bQ`tpwE7a|^@0Yu#SW@?{DT=PP Ma@/Csi? dP4Pz}j&POX?~JqE5QT%Gi:J } #ŐU6,VS9zlVF ᜵;x(vH:i*zk[9{N^P tr_C[,xܨJ'&GSg-S`ۋ<Ǣ> LG) DzTR63)()S [vwEw<5Xj"wڠ sWW+~>`ZC헀j>{NbIh3bM .9*GY{!?|`Qp7Ӻ⬠׸9mՎc.y4yTZvxV_%? 
{ܷi?t-1Boxh쨉:y>zg]()HU&:_>/!{E]-HF:FiWucuvn30/fo OYa|rV^aBEHZ=FIFz7ЈIv:c*r%pvC3!.ʼn%@O xgO<7xlWz &`vW<+ ÚVP֞<RjU!bX4|$=$Z Ed\8 `E ;珈VJOSӆ Gx Q#3_IƷ(wȬ4*P׫T:&ڝZbdWh0xD<&{7'&4 7i`4Yw~-Uη'9>)R䡷ඟ3E1Ғ@wv.r8'Q D\t%OUj`'ضfNKL0/f䅀:\t=w,?u*ɘPS+#ҿ>x+2Qf~ T0=/)B@rيљBB4a O':n׉IE8=Oq.c..Pq3t!%Oem,IYɋ ,^R9|n"Y'MԃxZ{V[}l͏9(O QCUzIgH- "n\[OJjKKfPэG?fG;j9k3-.NPa݉+Ӓ$Gz~f?f?(4X[$ZM>(:(ܮ_°u>9efl~KTA PxeoܛipZgoIbr^߲p-qv4%g+[[6k.,:(3OFZK,& w(k[r%ɇNTYd"A `NI3=|Ll~oyeYdu1$VcC9>0 p%%FQ X)WŧRvk !+k\˯ST'l#6vϡ7Ot/s&N[R%\3Ǚ R cs[xv{)MovB*ns0 mfLL^3D%C/1oLoȺ~`a ሤit<÷B {ظt1]*ɹF|'6Ş9~!gB.l^bݏI2HD*m1}YWv9?֞" e45X,-N]|9Tv-ïL~芼=ݡ)x4+5 = " ư=Brj,B%Ccxvݭ9BQ&e0d5Ƹ̎c4]e-ӟAjMc(3sј_]p+UhL<K d iI_#TTVX@2)D_꾂L,{EO([gEI铻E(;|o2]/TGTt7j*!0.>`_mY HNQy?| Dy">VV2kI_ |zc/Ӈ!.L6,Vߪ) g$ھOd.d#;EPi$3H\S7D'#Rbz\I3[Qnzr< a0h*ǵT-m*'1ӡS6|?wb&paK@ ێ*bvхx!U*H'E湞&y[zq0*}/  z Xd/BJQf ҟx0n0eBC[b\ "CKײ1Z~7ϫın"C[O@fumoわ`Aɕ>1yX VsA;)#~kqUP TwW;$< 1vކwQ[b.ri츞[{3׉?,HlN}8si5xh_)3ز~-z 959ηtېѷ( <(Kz/p ܦBn&}?v/+A7 P"V/P-p:$gyǢ%3H)ݺѰ}r18ai^!Aݫ6[1Һ$ qxWiNeGT*"Rтy<ؑMz,;)+J!X*K3v(Cc<g?Q_z2ŒK竦o60Q N2E UO<3G@2~-f\Is4^aqe԰F%.BszYS'IӇ(!w_#}_l}Eһٝlsl73|쩘bA-s&[6ƧK6+DrQ2H,1 4I{!5j؅ʨt;879Qn? 
n7dlNu.7}O`j Q i\+`pOq,Q; @oFL"~G&R:i.]1=XC"bσ=O4$<æ/u +Bv27[lOO1N78 ya\Qem nJR@&-c-`s>ܯ8R?ao>&5Nve7ӯˢ9+& I_Hn7R|Lac2(Z2 ~iQQYD?-ÈAfo+ \ݎffDS~|߃"kHGiA{.HciP8;f0^5Vv0 Blڧ&DEY!2QЎnV}c76Yb?V(dgL:?K&V~gMEF]W⵶uL7Z2إy)IE~#ce)Fә#ƔlPC.$rh&Z2679^Jv0%Or+OTg=0j eqXaط<=khKGqP !=x\+CcW/V9Zy@u~ш lBbK#N*24u¸KIJ+pHWl~N$PӦѽts݅ڗC{m(Ɖ:3挓 .A ^Tfbek(!B gH]] gg8"'E]$9& 8#HiIWEG&$Pבp=x6lrlX> xvd v8Jauu mbGsx& q5ƍ~0 eRqB)D<6by3J,h`ay4$m2 l+N2` Z<F("vR+st(Í6&2p};\=WQm8g#dBS%4Yp-/@.$uX.ID{3)9ܕ 1T$Or ibN8^n2}OJ3uoLREVdYz(A3'ɥA~к̽-QP~Iarޞ"z#K[t IK"%#)/vV,F QKH‚^Ü%/?."bPXcOG))JepAےCdV;8 S; 0P\Ɛg](W*_f\wAMZRW};e2*N|3O`j:Ee$V͕}LJt(G#`ʺ^p}{s<O0lLn_E_Dm(-a]:v]o$\+78eK?y>'wk^ Oix6aU-D/ʈdtK졀:^qrx5x>Blk~?[/;6ݮ=[] =Z1 RC=SH]8%D?$Fa,zHt P!#b9 .GEFir?>Ig& jZ{V9+OJB2Wu#kخԒF24xhs tgD ?JNFC$1v>?V]JWtD ,ĭr8N[05F]ҀY VB,@m2P8 H"SА;O-h;b%c9r>#/MRNHeIS/S]zhCbaseBΛm#3{;<!d7V~$x$e컕%_ Bc34g=vQ e7xXk\sE:5Q1KbrlY~y|l {Ӷ](*z=އҽ\߀2)?+56{?~ZBbiV}޶jʏA[s<sՙo+wR;b#A kxg J;r{Gx=ر7x| ث:5άwy*3F9AP\.2Qc 1.25 MB 0.6 MB CedarBackup2-2.26.5/testcase/data/tree20.tar.gz0000664000175000017500000000172012555052642022540 0ustar pronovicpronovic00000000000000GEn0Fy&gUu|!Wu]9"nXY?nW<>qBZNW)z`ʪƿ[1- L_uƿeW`ː }hgzYpv؅!7?w+ _wq^*Kvfxkfx]3<.5XZ$=ۿg0m_AU_H/aM#GבG ?_ØM0+9K{_yK{_yK'?g_ W`wY% $/7n c,??_oܦ{`w?_A5k\Ϙ)"/7k[ab|װ+0W0|_J_9KvO0G!S%X#ۿg0mO }K ^꿂$ C+9Ksۤ?_˜˿{O'O)?`wo&}&Yߘ^P $g'K_I_o&qs`w$z6=lۤۂO9%濄[w _JIs !zCedarBackup2-2.26.5/testcase/data/tree3.tar.gz0000664000175000017500000000113412555052642022460 0ustar pronovicpronovic00000000000000;AA@}܀nqd1y<'OjM%*>nÇTJ)4:ZjkoׅZ~JjxÏrߞ>}n?_o>ͿujI~c#Qoa?i9-㏁3#ew =k߁cAoao&6Coa?h9={P!gAHYP!gA +?-Ll'? ?,GCǂ ?,GC_Ko&6/ς̿HYP!gA_HXP!߂O~RYP!C/ς? 
?,KC/ς ?/<:xCedarBackup2-2.26.5/testcase/data/capacity.conf.10000664000175000017500000000007312555052642023113 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/mysql.conf.40000664000175000017500000000053212555052642022466 0ustar pronovicpronovic00000000000000 user password bzip2 N database1 database2 CedarBackup2-2.26.5/testcase/data/tree4.tar.gz0000664000175000017500000002164012555052642022465 0ustar pronovicpronovic00000000000000<AɲV=)A=+] ߼J+]|RhKZ1OA _AEy qAa ~ܷ,?c~-^6Nai9~: (+Av^?«&rc}8C ?[r|YZC8 DYqKCxcwlR16pi5p=Ʒ?*e0@3}~-h>BZ"!J9d0^<AAq5ʁvu.5cfSHz<XĔt͊@=<,2}g"hH6@ ׋gh5s?_7@^#LBS-ɌU{8JEeD8L̛P<`"/6߿+7«-90x(V/eQ)d#М Ϥ/{2:y끈r``Y},⎈Y@WO832T>p`O9<' L]B.5JL(ք=|B$@qm} ,0Z{0v݁pvF+-M-JpDCrKLHJ XO^[X0d4@*,k&X"g\ڤx2*oK;tLofI&Ƴ(]_ƫcffK~TED\kE;-4˔rGN Ӈo/a9d-PQn K+AMo3])8/5qZ/@bG=S; 职۸(3=)4.Dx*@y\i\6e5>c݀ h{.,99Vt#! tl2|$ބ$o7;BW{'21t \Ev%_*}>! E)'JPF<  \)1 цEb]+7 h]{w85j}n%T"7d+^r6Z?%u缎qI N5MQR6M`ŗYvdtoT:p|LFE< TI.%_NhGR @P\8%}hOIHD<ɟO~1_xX)?9:w!"*=B%/b%@3G| ސ#wCwO:C7*4B]>hJ\.M A;Ϻ܇1j+HK >&n<Ԡ̅|fsr Frl k(|Ӆhե&j/97~ o{Jϔ;_oUϑ;^$?wO9CO|«z _?`> = =E \&m̭fj0*Pf`Ig_,sbnK!Uє2qҪfޑtqPN_+40"!D(̰[f9XaGwvRnk 0zI7(BXeG! +EGL]f<8u|$| @^2kEȺa7ln˜:&4w-ZjXbU+??_g-x~: ngzG8z7M9ԩF,@QV!Y%t9-x"{v3l+0$eE0/l>??j|Rn;0qv!Ojޣ {wĈȸ}\sv4G_H%~9،ؚ*D%{yfikyĐTP,6<5  7}5E '%aWX1]7B-5z7 U iQ`Ų\53LLۙ}#"_%dY.Ze-ؗ5@Z֩Ε gY`Y"SՌ)* 1rgfA/qih,+5sK:,k9s83e<'# yj. #Yw۲J2k#3;U}YSf rq<\1֋/4РhUeUm WB{ب. }h?=}=$z.fߴ&8W~˳"LFjLt:tõ= bEzL^=X} 'Y\L. i0]޽slZXrPY·ByxYr+@8C{u;_IA$gLbY_ۀ~6zmpPUuv2( C%!1- x*u,~A7"i/i&Ǖp,Sf1PDvB EYWɭ3X͂$61\Q̕w-EG'_O!g-|!=?d3~K~~8l$=zVC$("R AP8jm9ӆo۠Ep8:J6haZH$7b> }VGMbGajEbh<8H9u Sa{> 6X&w-^A<-Dٷ{n=Ϛ *0{j֤?'O;I? pGQc =oɉr$XKEgW'X8Py$@;,hR6IӒtt |}~6Wb}7;3 Nx25{T8i['/M=T?Ϥഫ:(4gJ*G*VsgYN-[Thi:#m-A\-:/F 4SWzbM¾ Sp *m?߃ ?f1Uc%Tɳ=TiK9lvs J|$ LtALtr\b㋢XHn\SjAE)B!Ψ)3qs3lnKGQv0anD(H䛄-UyqTYкFOPtϭOrGL*c0ۃ% y2pO39TʥTf3Ǩ)(MC6:hְV%**^/>Q42d! dVn &.FJh @Pʒ4;:T._?ASi\&9> քGdA%BO=ԡ RL[(79jUG+?1?_hTd]gL$ظQ/o! 
hŒ__RW y9GFu|7jO@7M\O.@ӗ=ﵜ~DIrmZrdJCuVШΆqr:8ڑ'0)4hXC;u1a|qvQi]srQBtd&jF)0KLb^N!-W ֕uY9iԝT]{`ɖʒYSq*LtM QyUf&TkY^Nn͗^D^8UQ ?-~GhMHE:cb<uUs5-2yL#{פD/N4ot-"YAѾe_?o@0Ke0`)hrnڕ#8}V"6lЪh@TӮ7ON֞tsHShSLLֆɗ/ؐY6)+[]Ў0[Yf 8] MP %?37jrߛRO! ~g9U1H7Ur q= 8:Nal)he}I]!lŧ?kȏFvDa;Aؠ`fA5T*nE(ϦpEeMH!-tE?)}rA/dS[xxW2 =:&Bç.C Emr-igMui~h*֡ Z™NPh0BCԇk.+dO̬ v%[@0>)2}Wo!NjLɹӹ %4 ExɽƓ.`rϵD 8ZݫK*+1p6Q:"ORp}ɗ $( t]mgѮޣq$9̧*]02ZeKGIG͙1Aⲱ%z\L^"NY3mQ 2BNwU"x54YKԳdryWwie2;3YuN*1`$N; ȲB_R%0 ͒R|ʢ1ߟ CgkKT~Ψaʕj_U`hQ^yYsT zfa^Ryt"ʎ,q_Do5mETzSF\y$l ~Re!L1/(x]z:xhCPXiZuΑ{Ӗ8aН. t ^ yn C/8ewS8t3C8`+XIL,$jCrꧡUs-Ѳx65 AFhT M^7r`[xe?Q؛ kr@2é*0L7#nO'g;I7Qܭď(Oy~ƿÿ-PZ}Ȗ٭v+ L JacFP554xkmд@Q%ޑSwaTԎpQʵ;ܸ)^$vXy8 ~xU q6}?ڻ%Y+ {W%{_;L c:BRH|+,d&hXl)T@2pXvٯqvqnr-YY``d 8̤zhw切QX񥭯7:fxҴ^mfS;d(\A[3s -Kcћl"i%u(Q";|2˖ol2XG} 5+hځaCMMgWF ͳ)%*&G(ڟ*9K<[w@k~[C;?B #'D-4I|ZJUޚ&jHx+X]+bFxin: g9s ݙD\,e&O[ 43iIxN7Fz`MGg<,;(h_}>=DX/.X8#1z)Q}GƿO~ĄkىR&Jf{|N 6KPhf 0p/_=vټPşӯ]L*Ksÿ0xGtJ z2lZrO2pd;[ `{1/Hۂބ1TnU0\<+ff3n_/5 Y*Ҳ́_˲L(g0l3'7{L )f7w+'}oo#,JݡNx5͵vN??z)%$^aKE1ӱ"Hܹ!mw_CNhwlsFWXUZ3]7ƧVB$W aعnC%xڇm=;, sMI\a5RUϿc\ ; AFPhB nE6w㛤f=!zYKk6fث[?mǚZΗջWkN'w> y5I:by^Yf$!qհ ێƱ Lde]MW}2lJ>aşuO =*'A$'WݤU&yV$&;Lc6iݑ \@ҧ@p:Uv4"h 7[9:g_ LT) N5 y%5=1$1<,5NҒ-,>/~d"&P1'dUJ#ύLJ13] p>wI«=e%s 75F~N1{SR۞B]'Far<.lI'' F(ks8Dӣ>]|'xq8M5HOID-lfDq6W}nhGFD{|]e"/Y|_4,u<ڑ;[N"vDg, -d%#c mS)Vi҇E\&NOد~myN-CC K"Z52`?.4&s$D)̷q.!U .e^/WzۛS"GC=+@lPe1x83Fj\6= ҇aX1 !yynZnw? 
U"p3p`p >t8>RKB~;fvj"A,L3va=u2Z\P"䞎H_[ O']&x,t:Qǻyv3Ɲ{`6[J h *lU^ fX㮖zWBvnvnvnv7 ˳ACedarBackup2-2.26.5/testcase/data/cback.conf.90000664000175000017500000000053612555052642022375 0ustar pronovicpronovic00000000000000 /opt/backup/staging machine2 remote /opt/backup/collect CedarBackup2-2.26.5/testcase/data/mbox.conf.10000664000175000017500000000007312555052642022263 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/mysql.conf.10000664000175000017500000000007312555052642022463 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/cback.conf.110000664000175000017500000000044312555052642022443 0ustar pronovicpronovic00000000000000 /opt/backup/staging cdrw-74 /dev/cdrw CedarBackup2-2.26.5/testcase/data/cback.conf.170000664000175000017500000000061412555052642022451 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily tar .ignore /etc CedarBackup2-2.26.5/testcase/data/tree7.tar.gz0000664000175000017500000000026212555052642022465 0ustar pronovicpronovic00000000000000AK0.;>z6 _@8ЄDH79{ڗJD\JqLMQ aRN꼱qk=ݺX56>}^Ke!sD?PZ`QD[x뿾4m??cGU!d?s}(CedarBackup2-2.26.5/testcase/data/capacity.conf.40000664000175000017500000000025412555052642023117 0ustar pronovicpronovic00000000000000 1.25 KB CedarBackup2-2.26.5/testcase/data/cback.conf.200000664000175000017500000001222712555052642022446 0ustar pronovicpronovic00000000000000 $Author: pronovic $ 1.3 Sample configuration Generated by hand. 
dependency example something.whatever example bogus module something a, b,c one tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp /opt/backup/staging dvd+rw dvdwriter /dev/cdrw 1 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12 CedarBackup2-2.26.5/testcase/data/postgresql.conf.10000664000175000017500000000007312555052642023521 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/encrypt.conf.10000664000175000017500000000007312555052642023002 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/tree22.tar.gz0000664000175000017500000001401612555052642022544 0ustar pronovicpronovic00000000000000sHǮȑ{ͧ Y{c{oO?ՒZ>-̜oQI GF$E 0A` /D _@"`α28MWWLkެMe(| +G˦/@1~??y?A!40* se8$v 馍 #DYR($`RM*NJPGĉ1a@u:fFMIE zCTa\8(Wf=b1>T=aKfJxpm\R9Pj('f?bSٚ9ےV"j=䏱?zb9!ese`z팔5֥4I٤[7s';_^f~u g.K,!tYL13Ï_!?{_Amʮg1J \ "ORvQAT!CsAؖÄ``:#p ZǔA#?9G? 
2.J9,D ɯ wq$V5-)lλUB\,NEqZJ1β뎊WǤ,7{|^@' \m*+^ή({7b ]t#R+l86##kӉ3#C'ZBk ;Ϸf{-$c ol7aKuH/uB./T͞Ti]ft?b{T!QDӦLȅw6Msn0 ]kaaȝxoIA>;hK8ʗku o`O ž<&_21I@]cw%.С*4jtuE(IvJ?Ɉع;J7@0yAo " Z1NJt %kj[Yp|O—;+jZ v毹4](pIdwTSO^T2z &DQv3Buzp$Y;v{n*N>ezIv|@`NW!rEc8#-{ 7&`Q~~3]V X?n}dV+x4' ܯS" R ʩ2T0~<#0y x5izq!~Kb!iJ:MM1W=Lhԭ8ivV78p~o+`~ki͖X5RX$-:lue: _NU:қG[y=Ri2c9ɥ `zXI_hh=zdI NG~w?A_|t_Ti~ȏ?%:O˹bThg 0L W 4բlQ+&,ʮ17X${2IπISY{k+n3X7l^b@Cz<Z.J=&\Oڗ;77mzD5 C6l,{n=4!˲H7Wڹ3ka^fED Xw]"wN"Q:r5jT6°V&kqҟ,3m NҒ;;c\eΈϱXBa3K(^8xWu˵/ĺ;O|άǎK`Y-ec $]`PAl{ gjy:;Dl&H~ʆ#&Czv,*FGDl^ٞ17bu\b,Y>d&ܷYՁg%gEF&"l/zQz+<n#ᨱgs3E{DD}ͨ1Dpt  %2)j @FꭚYK{ | >1҅^:m(T' +#,VH%>5Do0Zxe6rr/\OlOokDA2یّPau?P' pjԘ2}X(cRO&ӹ4>@r.jo!-'@Q# &1&esһZߵQ rWD͛2n.W"L[N- R괶4Q>S),\X;RINv<#W>‚ (M\^#sW?c^'1Y1ħD_3ǁ?m6gl SvCa5nݵn3j e+ġ7 Zqz+$7̴M[mN2#W@:7Xʹg`0%e#pdAFk:&AF$CeޘJiۘh f}CjɷD^xGo(] CJ9mh;Kzlj;Y>1#+yT}+j{S+yw@{ĕԵ _SfWtJ)X~z4>wIk. ˍ}_>W u7.&^@;)s*">e*jXi}ffbpD Qf^qcjNjV rS]}0 # M. bs z?/yY:pIs%SQ榬:01lM<.¶t蘆ÛϠ^Lr1r,itAtήkd(:3%c jħ2eW_ 3+RW65v:jlƄ&\^|z@2/!Q?c^ (Z# Yi؟×ݶ$,ZȮ:ie_e3tѥ 9;Ea0Zξ%sKx/VQDBZ6BvI1覄ת$}7߻.QnL@ie"w{g({=ĮKE c e>sC̪+/j&dHd 'dpw:Sd 2qSo)pd͔%YMJb$T4bAqR]𥳐X%r*8t7̮zml Gw l,!n\vE-01YUhvYga':-lOFLCP[e} qʢX?X\ĻBS gIcEHl3 Xo,L!+HB"@xis';c>]MgXI~YglZKfClEH\ w vm wc%Mx 唺ei]}ʥiR^ Y$cJY D9GL:sS&f,d] Eэ0&0cY[!Q!Wq IkVߘd4H2{5EV3 O%7,'-8V>]vq HEo^O :E!40B9Z!Ǐ?\V}i|+r"TCxоC"edU'V1[! D9z&8T34!;z$[uGݳtYMن_b3Q{^;~(?!tlڄ@"=z NQ!H N8 ^)Pژ!fϚPg3gs ]:5'WNx(RkQ[RM;_ 2Ύ"7̖3հ4fn&| f;X U_J5"⾣i&gM6U#Un饫Im${8f=*QG۔([\?:1VDQTeSRS!GvKI}???Y%Eel0`_BtsGS$D 6Aղno^%ɳ)FGzn^FJ3;{o}xƍ( P=of(:!jې2 O^,Ӕ]ɹb'Ge1ګEI뤾rϬ!L~qYӺC8K^WT•(!'-ǍE~ܧ)-R59՝SXBr7O04\ͦw^Z.tK \h pJ%*}5;ՔoE3OncFÐi[ĩI}ɖщcU 3C9+3ON/]_ylcSHT5ET2f̹H,H }pKI2CfD+5FG9]:]XZhTP zsR -ijH0ԶF$B?XD5,rh;{Ys\c ;爚|9!V?t1"#ZtdBpBpo0?ȩFwRt쉡3]Z)MWc}tVzTǺ{u6$b:y4R6m'\*;.lj56[?bƈ|E^Y_waԳ㪘,-CludvƤht~$")l+SJ۪H9vĴh6C<<Cx/PA?/?PPC?`P@ut'2&}7q_s/i^? ''^!O}?xv? 
>ċ$zo#y?@ByG`CO۬Zkk?f`Q?~_?|abCedarBackup2-2.26.5/testcase/data/cback.conf.80000664000175000017500000000372512555052642022377 0ustar pronovicpronovic00000000000000 /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root 1 /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 CedarBackup2-2.26.5/testcase/data/tree9.ini0000664000175000017500000000042012555052642022035 0ustar pronovicpronovic00000000000000; Huge directory containing many files, directories and links. [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 2 maxdirs = 2 minfiles = 2 maxfiles = 2 minlinks = 2 maxlinks = 4 minsize = 0 maxsize = 300 CedarBackup2-2.26.5/testcase/data/cback.conf.50000664000175000017500000000060212555052642022363 0ustar pronovicpronovic00000000000000 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B CedarBackup2-2.26.5/testcase/data/tree17.tar.gz0000664000175000017500000000174512555052642022555 0ustar pronovicpronovic00000000000000GEjPE)E?' n n!_IGJTEebP\:8yS))8<f~^oөqy~rEAXnp"Cx߇c^?Ax l~]k+;]c<7eE Xz;;ց-؍ Eikliv6O?pmc?mL_0Ypa?d55`'?w#KL'k]`g>OOS+OZ_B?|?m!_6_?*GGpϹ¶_BXgs+C!Y??H?~  \!Xy9Zi@%p'Dֿ;۸iW;6Ook?9{BXU) 9?WZ uZ{rW`g-4`'ǿ?V0 a=uϚkT -?Gbs??oZi?@g_ؖ?!(C(CXgk7|HZFb\!Xy@lWOM!8ߒy#CedarBackup2-2.26.5/testcase/data/split.conf.20000664000175000017500000000025212555052642022451 0ustar pronovicpronovic00000000000000 12345 67890.0 CedarBackup2-2.26.5/testcase/data/lotsoflines.py0000664000175000017500000000112012560016766023220 0ustar pronovicpronovic00000000000000# Generates 100,000 lines of output (about 4 MB of data). # The first argument says where to put the lines. 
# "stdout" goes to stdout # "stderr" goes to stdrer # "both" duplicates the line to both stdout and stderr import sys where = "both" if len(sys.argv) > 1: where = sys.argv[1] for i in xrange(1, 100000+1): if where == "both": sys.stdout.write("This is line %d.\n" % i) sys.stderr.write("This is line %d.\n" % i) elif where == "stdout": sys.stdout.write("This is line %d.\n" % i) elif where == "stderr": sys.stderr.write("This is line %d.\n" % i) CedarBackup2-2.26.5/testcase/data/cback.conf.20000664000175000017500000000007312555052642022362 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/cback.conf.100000664000175000017500000000162312555052642022443 0ustar pronovicpronovic00000000000000 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp CedarBackup2-2.26.5/testcase/data/tree5.ini0000664000175000017500000000043212555052642022034 0ustar pronovicpronovic00000000000000; Higher-depth directory containing small files, directories and links [names] dirprefix = dir fileprefix = file linkprefix = link [sizes] maxdepth = 2 mindirs = 1 maxdirs = 10 minfiles = 1 maxfiles = 10 minlinks = 1 maxlinks = 2 minsize = 0 maxsize = 500 CedarBackup2-2.26.5/testcase/data/amazons3.conf.10000664000175000017500000000007312555052642023051 0ustar pronovicpronovic00000000000000 CedarBackup2-2.26.5/testcase/data/subversion.conf.20000664000175000017500000000046712555052642023525 0ustar pronovicpronovic00000000000000 daily gzip /opt/public/svn/software CedarBackup2-2.26.5/testcase/data/cback.conf.60000664000175000017500000000176212555052642022374 0ustar pronovicpronovic00000000000000 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l stage df -k CedarBackup2-2.26.5/testcase/data/mysql.conf.20000664000175000017500000000040712555052642022465 0ustar 
[testcase/data fixtures; binary archive payloads omitted, tar metadata and XML fixture markup lost in extraction — file paths and recoverable plain-text contents preserved below]

(tail of preceding fixture, markup stripped): user password none Y

CedarBackup2-2.26.5/testcase/data/tree6.ini:
; Huge directory containing many files, directories and links.
[names]
dirprefix = dir
fileprefix = file
linkprefix = link
[sizes]
maxdepth = 3
mindirs = 2
maxdirs = 3
minfiles = 1
maxfiles = 10
minlinks = 1
maxlinks = 5
minsize = 0
maxsize = 1000

CedarBackup2-2.26.5/testcase/data/cback.conf.12 (markup stripped): /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y Y Y Y 12 13 weekly 1.3

CedarBackup2-2.26.5/testcase/data/subversion.conf.1: (no recoverable content)

CedarBackup2-2.26.5/testcase/data/mysql.conf.5 (markup stripped): bzip2 N database1 database2

CedarBackup2-2.26.5/testcase/data/tree2.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/amazons3.conf.3 (markup stripped): Y mybucket encrypt 2.5 GB 600 MB

CedarBackup2-2.26.5/testcase/data/cback.conf.16 (markup stripped): example something.whatever example 1

CedarBackup2-2.26.5/testcase/data/tree3.ini:
; Higher-depth directory containing only other directories.
[names]
dirprefix = dir
fileprefix = file
linkprefix = link
[sizes]
maxdepth = 2
mindirs = 1
maxdirs = 10
minfiles = 0
maxfiles = 0
minlinks = 0
maxlinks = 0
minsize = 0
maxsize = 500

CedarBackup2-2.26.5/testcase/data/tree1.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/mysql.conf.3 (markup stripped): user password gzip N database

CedarBackup2-2.26.5/testcase/data/tree6.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/mbox.conf.3 (markup stripped): /home/joebob/mail/cedar-backup-users daily gzip /home/billiejoe/mail weekly bzip2

CedarBackup2-2.26.5/testcase/data/cback.conf.23 (markup stripped): machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp machine4 remote someone scp -B ssh cback Y /aa machine5 remote N collect, purge /bb

CedarBackup2-2.26.5/testcase/data/tree21.tar.gz: [binary payload omitted]

(tail of an unidentified subversion fixture, header lost in binary payload, markup stripped): daily gzip /opt/public/svn/one BDB /opt/public/svn/two weekly /opt/public/svn/three bzip2 FSFS /opt/public/svn/four incr bzip2

CedarBackup2-2.26.5/testcase/data/tree9.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/tree15.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/split.conf.3 (markup stripped): 1.25 KB 0.6KB

CedarBackup2-2.26.5/testcase/data/tree16.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/subversion.conf.6 (markup stripped): /opt/public/svn/software daily gzip

CedarBackup2-2.26.5/testcase/data/tree18.tar.gz: [binary payload omitted]

CedarBackup2-2.26.5/testcase/data/cback.conf.15 (markup stripped): $Author: pronovic $ 1.3 Sample configuration Generated by hand. index example something.whatever example 102 bogus module something 350 tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 4 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12

CedarBackup2-2.26.5/testcase/data/encrypt.conf.2 (markup stripped): gpg Backup User

CedarBackup2-2.26.5/testcase/data/cback.conf.21 (markup stripped): $Author: pronovic $ 1.3 Sample configuration Generated by hand. dependency example something.whatever example bogus module something a, b,c one tuesday /opt/backup/tmp backup group /usr/bin/scp -1 -B /usr/bin/ssh /usr/bin/cback collect, purge mkisofs /usr/bin/mkisofs svnlook /svnlook collect ls -l subversion mailx -S "hello" stage df -k machine1-1 local /opt/backup/collect machine1-2 local /var/backup machine2 remote /backup/collect all machine3 remote someone scp -B /home/whatever/tmp machine4 remote someone scp -B ssh cback Y /aa machine5 remote N collect, purge /bb /opt/backup/collect daily targz .cbignore /etc/cback.conf /etc/X11 .*tmp.* .*\.netscape\/.* /root /tmp 3 /ken 1 Y /var/log incr /etc incr tar .ignore /opt /opt/share large .*\.doc\.* backup .*\.xls\.* /opt/tmp /home/root/.profile /home/root/.kshrc weekly /home/root/.aliases daily tarbz2 /opt/backup/staging /opt/backup/staging dvd+rw dvdwriter /dev/cdrw 1 Y Y Y Y weekly 1.3 /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12

CedarBackup2-2.26.5/testcase/data/postgresql.conf.2 (markup stripped): user none Y

CedarBackup2-2.26.5/testcase/data/postgresql.conf.5 (markup stripped): bzip2 N database1 database2

CedarBackup2-2.26.5/testcase/data/split.conf.5 (markup stripped): 1.25 GB 0.6 GB

CedarBackup2-2.26.5/testcase/data/cback.conf.1 (markup stripped): $Author: pronovic $ 1.3 Sample configuration tuesday /opt/backup/tmp backup backup /usr/bin/scp -1 -B /opt/backup/collect targz .cbignore /etc daily /var/log incr /opt weekly /opt/large /opt/backup /opt/tmp /opt/backup/staging machine1 local /opt/backup/collect machine2 remote backup /opt/backup/collect /opt/backup/staging /dev/cdrw 0,0,0 4 cdrw-74 Y /opt/backup/stage 5 /opt/backup/collect 0

CedarBackup2-2.26.5/testcase/data/cback.conf.13 (markup stripped): /opt/backup/stage 5

CedarBackup2-2.26.5/testcase/data/amazons3.conf.2 (markup stripped): Y mybucket encrypt 5368709120 2147483648

CedarBackup2-2.26.5/testcase/data/postgresql.conf.4 (markup stripped): user bzip2 N database1 database2

CedarBackup2-2.26.5/testcase/data/capacity.conf.3 (markup stripped): 18

CedarBackup2-2.26.5/testcase/data/cback.conf.4 (markup stripped): $Author: pronovic $ 1.3 Sample configuration Generated by hand.

CedarBackup2-2.26.5/testcase/data/split.conf.1: (no recoverable content)

CedarBackup2-2.26.5/testcase/data/tree2.ini:
; Single-depth directory containing only other directories
[names]
dirprefix = dir
fileprefix = file
linkprefix = link
[sizes]
maxdepth = 1
mindirs = 1
maxdirs = 10
minfiles = 0
maxfiles = 0
minlinks = 0
maxlinks = 0
minsize = 0
maxsize = 500

CedarBackup2-2.26.5/testcase/data/tree5.tar.gz: [binary payload omitted]

(tail of an unidentified fixture, header lost in binary payload, markup stripped): /opt/backup/stage 5 /opt/backup/collect 0 /home/backup/tmp 12

CedarBackup2-2.26.5/testcase/data/cback.conf.3: (no recoverable content)

CedarBackup2-2.26.5/testcase/data/tree10.tar.gz: [binary payload omitted]
CedarBackup2-2.26.5/testcase/subversiontests.py0000664000175000017500000031560612560016766023241 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005-2007,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests Subversion extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/subversion.py. Code Coverage ============= This module contains individual tests for many of the public functions and classes implemented in extend/subversion.py. There are also tests for several of the private methods.
Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to Subversion, since the actual backup would need to have access to real Subversion repositories. Because of this, there aren't any tests below that actually back up repositories. As a compromise, I test some of the private methods in the implementation. Normally, I don't like to test private methods, but in this case, testing the private methods will help give us some reasonable confidence in the code even if we can't talk to Subversion successfully. This isn't perfect, but it's better than nothing. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Testing XML Extraction ====================== It's difficult to validate that generated XML is exactly "right", especially when dealing with pretty-printed XML. We can't just provide a constant string and say "the result must match this". Instead, what we do is extract a node, build some XML from it, and then feed that XML back into another object's constructor. If that parse process succeeds and the old object is equal to the new object, we assume that the extract was successful. It would arguably be better if we could do a completely independent check - but implementing that check would be equivalent to re-implementing all of the existing functionality that we're validating here!
After all, the most important thing is that data can move seamlessly from object to XML document and back to object. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a SUBVERSIONTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## # System modules import unittest # Cedar Backup modules from CedarBackup2.testutil import findResources, failUnlessAssignRaises from CedarBackup2.xmlutil import createOutputDom, serializeDom from CedarBackup2.extend.subversion import LocalConfig, SubversionConfig from CedarBackup2.extend.subversion import Repository, RepositoryDir, BDBRepository, FSFSRepository ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "subversion.conf.1", "subversion.conf.2", "subversion.conf.3", "subversion.conf.4", "subversion.conf.5", "subversion.conf.6", "subversion.conf.7", ] ####################################################################### # Test Case Classes ####################################################################### ########################## # TestBDBRepository class ########################## class TestBDBRepository(unittest.TestCase): """ Tests for the BDBRepository class. @note: This class is deprecated. These tests are kept around to make sure that we don't accidentally break the interface.
""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = BDBRepository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repository = BDBRepository() self.failUnlessEqual("BDB", repository.repositoryType) self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessEqual(None, repository.collectMode) self.failUnlessEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = BDBRepository("/path/to/it", "daily", "gzip") self.failUnlessEqual("BDB", repository.repositoryType) self.failUnlessEqual("/path/to/it", repository.repositoryPath) self.failUnlessEqual("daily", repository.collectMode) self.failUnlessEqual("gzip", repository.compressMode) # Removed testConstructor_003 after BDBRepository was deprecated def testConstructor_004(self): """ Test assignment of repositoryPath attribute, None value. """ repository = BDBRepository(repositoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, valid value. 
""" repository = BDBRepository() self.failUnlessEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = BDBRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = BDBRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ repository = BDBRepository(collectMode="daily") self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = None self.failUnlessEqual(None, repository.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ repository = BDBRepository() self.failUnlessEqual(None, repository.collectMode) repository.collectMode = "daily" self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.failUnlessEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.failUnlessEqual("incr", repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = BDBRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.failUnlessEqual(None, repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). 
""" repository = BDBRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.failUnlessEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ repository = BDBRepository(compressMode="gzip") self.failUnlessEqual("gzip", repository.compressMode) repository.compressMode = None self.failUnlessEqual(None, repository.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. """ repository = BDBRepository() self.failUnlessEqual(None, repository.compressMode) repository.compressMode = "none" self.failUnlessEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.failUnlessEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = BDBRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.failUnlessEqual(None, repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = BDBRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.failUnlessEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" repository1 = BDBRepository() repository2 = BDBRepository() self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/path", "daily", "gzip") self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = BDBRepository() repository2 = BDBRepository(repositoryPath="/zippy") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryPath differs. 
""" repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/zippy", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repository1 = BDBRepository() repository2 = BDBRepository(collectMode="incr") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = BDBRepository("/path", "daily", "gzip") repository2 = BDBRepository("/path", "incr", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" repository1 = BDBRepository() repository2 = BDBRepository(compressMode="gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ repository1 = BDBRepository("/path", "daily", "bzip2") repository2 = BDBRepository("/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) ########################### # TestFSFSRepository class ########################### class TestFSFSRepository(unittest.TestCase): """ Tests for the FSFSRepository class. @note: This class is deprecated. These tests are kept around to make sure that we don't accidentally break the interface. """ ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = FSFSRepository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" repository = FSFSRepository() self.failUnlessEqual("FSFS", repository.repositoryType) self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessEqual(None, repository.collectMode) self.failUnlessEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = FSFSRepository("/path/to/it", "daily", "gzip") self.failUnlessEqual("FSFS", repository.repositoryType) self.failUnlessEqual("/path/to/it", repository.repositoryPath) self.failUnlessEqual("daily", repository.collectMode) self.failUnlessEqual("gzip", repository.compressMode) # Removed testConstructor_003 after FSFSRepository was deprecated def testConstructor_004(self): """ Test assignment of repositoryPath attribute, None value. """ repository = FSFSRepository(repositoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, valid value. """ repository = FSFSRepository() self.failUnlessEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). 
""" repository = FSFSRepository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of collectMode attribute, None value. """ repository = FSFSRepository(collectMode="daily") self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = None self.failUnlessEqual(None, repository.collectMode) def testConstructor_009(self): """ Test assignment of collectMode attribute, valid value. """ repository = FSFSRepository() self.failUnlessEqual(None, repository.collectMode) repository.collectMode = "daily" self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.failUnlessEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.failUnlessEqual("incr", repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.failUnlessEqual(None, repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.failUnlessEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ repository = FSFSRepository(compressMode="gzip") self.failUnlessEqual("gzip", repository.compressMode) repository.compressMode = None self.failUnlessEqual(None, repository.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. 
""" repository = FSFSRepository() self.failUnlessEqual(None, repository.compressMode) repository.compressMode = "none" self.failUnlessEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.failUnlessEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.failUnlessEqual(None, repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = FSFSRepository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.failUnlessEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repository1 = FSFSRepository() repository2 = FSFSRepository() self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/path", "daily", "gzip") self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryPath differs (one None). """ repository1 = FSFSRepository() repository2 = FSFSRepository(repositoryPath="/zippy") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryPath differs. """ repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/zippy", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). 
""" repository1 = FSFSRepository() repository2 = FSFSRepository(collectMode="incr") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. """ repository1 = FSFSRepository("/path", "daily", "gzip") repository2 = FSFSRepository("/path", "incr", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ repository1 = FSFSRepository() repository2 = FSFSRepository(compressMode="gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. 
""" repository1 = FSFSRepository("/path", "daily", "bzip2") repository2 = FSFSRepository("/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) ####################### # TestRepository class ####################### class TestRepository(unittest.TestCase): """Tests for the Repository class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = Repository() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repository = Repository() self.failUnlessEqual(None, repository.repositoryType) self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessEqual(None, repository.collectMode) self.failUnlessEqual(None, repository.compressMode) def testConstructor_002(self): """ Test constructor with all values filled in. """ repository = Repository("type", "/path/to/it", "daily", "gzip") self.failUnlessEqual("type", repository.repositoryType) self.failUnlessEqual("/path/to/it", repository.repositoryPath) self.failUnlessEqual("daily", repository.collectMode) self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_003(self): """ Test assignment of repositoryType attribute, None value. 
""" repository = Repository(repositoryType="type") self.failUnlessEqual("type", repository.repositoryType) repository.repositoryType = None self.failUnlessEqual(None, repository.repositoryType) def testConstructor_004(self): """ Test assignment of repositoryType attribute, non-None value. """ repository = Repository() self.failUnlessEqual(None, repository.repositoryType) repository.repositoryType = "" self.failUnlessEqual("", repository.repositoryType) repository.repositoryType = "test" self.failUnlessEqual("test", repository.repositoryType) def testConstructor_005(self): """ Test assignment of repositoryPath attribute, None value. """ repository = Repository(repositoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repository.repositoryPath) repository.repositoryPath = None self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_006(self): """ Test assignment of repositoryPath attribute, valid value. """ repository = Repository() self.failUnlessEqual(None, repository.repositoryPath) repository.repositoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repository.repositoryPath) def testConstructor_007(self): """ Test assignment of repositoryPath attribute, invalid value (empty). """ repository = Repository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_008(self): """ Test assignment of repositoryPath attribute, invalid value (not absolute). """ repository = Repository() self.failUnlessEqual(None, repository.repositoryPath) self.failUnlessAssignRaises(ValueError, repository, "repositoryPath", "relative/path") self.failUnlessEqual(None, repository.repositoryPath) def testConstructor_009(self): """ Test assignment of collectMode attribute, None value. 
""" repository = Repository(collectMode="daily") self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = None self.failUnlessEqual(None, repository.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, valid value. """ repository = Repository() self.failUnlessEqual(None, repository.collectMode) repository.collectMode = "daily" self.failUnlessEqual("daily", repository.collectMode) repository.collectMode = "weekly" self.failUnlessEqual("weekly", repository.collectMode) repository.collectMode = "incr" self.failUnlessEqual("incr", repository.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repository = Repository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "") self.failUnlessEqual(None, repository.collectMode) def testConstructor_012(self): """ Test assignment of collectMode attribute, invalid value (not in list). """ repository = Repository() self.failUnlessEqual(None, repository.collectMode) self.failUnlessAssignRaises(ValueError, repository, "collectMode", "monthly") self.failUnlessEqual(None, repository.collectMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, None value. """ repository = Repository(compressMode="gzip") self.failUnlessEqual("gzip", repository.compressMode) repository.compressMode = None self.failUnlessEqual(None, repository.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, valid value. 
""" repository = Repository() self.failUnlessEqual(None, repository.compressMode) repository.compressMode = "none" self.failUnlessEqual("none", repository.compressMode) repository.compressMode = "bzip2" self.failUnlessEqual("bzip2", repository.compressMode) repository.compressMode = "gzip" self.failUnlessEqual("gzip", repository.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repository = Repository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "") self.failUnlessEqual(None, repository.compressMode) def testConstructor_016(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repository = Repository() self.failUnlessEqual(None, repository.compressMode) self.failUnlessAssignRaises(ValueError, repository, "compressMode", "compress") self.failUnlessEqual(None, repository.compressMode) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repository1 = Repository() repository2 = Repository() self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "daily", "gzip") self.failUnlessEqual(repository1, repository2) self.failUnless(repository1 == repository2) self.failUnless(not repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(repository1 >= repository2) self.failUnless(not repository1 != repository2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryType differs (one None). """ repository1 = Repository() repository2 = Repository(repositoryType="type") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryType differs. """ repository1 = Repository("other", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_004a(self): """ Test comparison of two differing objects, repositoryPath differs (one None). 
""" repository1 = Repository() repository2 = Repository(repositoryPath="/zippy") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_005(self): """ Test comparison of two differing objects, repositoryPath differs. """ repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/zippy", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repository1 = Repository() repository2 = Repository(collectMode="incr") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. 
""" repository1 = Repository("type", "/path", "daily", "gzip") repository2 = Repository("type", "/path", "incr", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs (one None). """ repository1 = Repository() repository2 = Repository(compressMode="gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs. 
""" repository1 = Repository("type", "/path", "daily", "bzip2") repository2 = Repository("type", "/path", "daily", "gzip") self.failIfEqual(repository1, repository2) self.failUnless(not repository1 == repository2) self.failUnless(repository1 < repository2) self.failUnless(repository1 <= repository2) self.failUnless(not repository1 > repository2) self.failUnless(not repository1 >= repository2) self.failUnless(repository1 != repository2) ########################## # TestRepositoryDir class ########################## class TestRepositoryDir(unittest.TestCase): """Tests for the RepositoryDir class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = RepositoryDir() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.repositoryType) self.failUnlessEqual(None, repositoryDir.directoryPath) self.failUnlessEqual(None, repositoryDir.collectMode) self.failUnlessEqual(None, repositoryDir.compressMode) self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_002(self): """ Test constructor with all values filled in. 
""" repositoryDir = RepositoryDir("type", "/path/to/it", "daily", "gzip", [ "whatever", ], [ ".*software.*", ]) self.failUnlessEqual("type", repositoryDir.repositoryType) self.failUnlessEqual("/path/to/it", repositoryDir.directoryPath) self.failUnlessEqual("daily", repositoryDir.collectMode) self.failUnlessEqual("gzip", repositoryDir.compressMode) self.failUnlessEqual([ "whatever", ], repositoryDir.relativeExcludePaths) self.failUnlessEqual([ ".*software.*", ], repositoryDir.excludePatterns) def testConstructor_003(self): """ Test assignment of repositoryType attribute, None value. """ repositoryDir = RepositoryDir(repositoryType="type") self.failUnlessEqual("type", repositoryDir.repositoryType) repositoryDir.repositoryType = None self.failUnlessEqual(None, repositoryDir.repositoryType) def testConstructor_004(self): """ Test assignment of repositoryType attribute, non-None value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.repositoryType) repositoryDir.repositoryType = "" self.failUnlessEqual("", repositoryDir.repositoryType) repositoryDir.repositoryType = "test" self.failUnlessEqual("test", repositoryDir.repositoryType) def testConstructor_005(self): """ Test assignment of directoryPath attribute, None value. """ repositoryDir = RepositoryDir(directoryPath="/path/to/something") self.failUnlessEqual("/path/to/something", repositoryDir.directoryPath) repositoryDir.directoryPath = None self.failUnlessEqual(None, repositoryDir.directoryPath) def testConstructor_006(self): """ Test assignment of directoryPath attribute, valid value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.directoryPath) repositoryDir.directoryPath = "/path/to/whatever" self.failUnlessEqual("/path/to/whatever", repositoryDir.directoryPath) def testConstructor_007(self): """ Test assignment of directoryPath attribute, invalid value (empty). 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.directoryPath) self.failUnlessAssignRaises(ValueError, repositoryDir, "directoryPath", "") self.failUnlessEqual(None, repositoryDir.directoryPath) def testConstructor_008(self): """ Test assignment of directoryPath attribute, invalid value (not absolute). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.directoryPath) self.failUnlessAssignRaises(ValueError, repositoryDir, "directoryPath", "relative/path") self.failUnlessEqual(None, repositoryDir.directoryPath) def testConstructor_009(self): """ Test assignment of collectMode attribute, None value. """ repositoryDir = RepositoryDir(collectMode="daily") self.failUnlessEqual("daily", repositoryDir.collectMode) repositoryDir.collectMode = None self.failUnlessEqual(None, repositoryDir.collectMode) def testConstructor_010(self): """ Test assignment of collectMode attribute, valid value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.collectMode) repositoryDir.collectMode = "daily" self.failUnlessEqual("daily", repositoryDir.collectMode) repositoryDir.collectMode = "weekly" self.failUnlessEqual("weekly", repositoryDir.collectMode) repositoryDir.collectMode = "incr" self.failUnlessEqual("incr", repositoryDir.collectMode) def testConstructor_011(self): """ Test assignment of collectMode attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.collectMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "collectMode", "") self.failUnlessEqual(None, repositoryDir.collectMode) def testConstructor_012(self): """ Test assignment of collectMode attribute, invalid value (not in list). 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.collectMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "collectMode", "monthly") self.failUnlessEqual(None, repositoryDir.collectMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, None value. """ repositoryDir = RepositoryDir(compressMode="gzip") self.failUnlessEqual("gzip", repositoryDir.compressMode) repositoryDir.compressMode = None self.failUnlessEqual(None, repositoryDir.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, valid value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.compressMode) repositoryDir.compressMode = "none" self.failUnlessEqual("none", repositoryDir.compressMode) repositoryDir.compressMode = "bzip2" self.failUnlessEqual("bzip2", repositoryDir.compressMode) repositoryDir.compressMode = "gzip" self.failUnlessEqual("gzip", repositoryDir.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (empty). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.compressMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "compressMode", "") self.failUnlessEqual(None, repositoryDir.compressMode) def testConstructor_016(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.compressMode) self.failUnlessAssignRaises(ValueError, repositoryDir, "compressMode", "compress") self.failUnlessEqual(None, repositoryDir.compressMode) def testConstructor_017(self): """ Test assignment of relativeExcludePaths attribute, None value. 
""" repositoryDir = RepositoryDir(relativeExcludePaths=[]) self.failUnlessEqual([], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = None self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) def testConstructor_018(self): """ Test assignment of relativeExcludePaths attribute, [] value. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = [] self.failUnlessEqual([], repositoryDir.relativeExcludePaths) def testConstructor_019(self): """ Test assignment of relativeExcludePaths attribute, single valid entry. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = ["stuff", ] self.failUnlessEqual(["stuff", ], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths.insert(0, "bogus") self.failUnlessEqual(["bogus", "stuff", ], repositoryDir.relativeExcludePaths) def testConstructor_020(self): """ Test assignment of relativeExcludePaths attribute, multiple valid entries. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths = ["bogus", "stuff", ] self.failUnlessEqual(["bogus", "stuff", ], repositoryDir.relativeExcludePaths) repositoryDir.relativeExcludePaths.append("more") self.failUnlessEqual(["bogus", "stuff", "more", ], repositoryDir.relativeExcludePaths) def testConstructor_021(self): """ Test assignment of excludePatterns attribute, None value. """ repositoryDir = RepositoryDir(excludePatterns=[]) self.failUnlessEqual([], repositoryDir.excludePatterns) repositoryDir.excludePatterns = None self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_022(self): """ Test assignment of excludePatterns attribute, [] value. 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = [] self.failUnlessEqual([], repositoryDir.excludePatterns) def testConstructor_023(self): """ Test assignment of excludePatterns attribute, single valid entry. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = ["valid", ] self.failUnlessEqual(["valid", ], repositoryDir.excludePatterns) repositoryDir.excludePatterns.append("more") self.failUnlessEqual(["valid", "more", ], repositoryDir.excludePatterns) def testConstructor_024(self): """ Test assignment of excludePatterns attribute, multiple valid entries. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) repositoryDir.excludePatterns = ["valid", "more", ] self.failUnlessEqual(["valid", "more", ], repositoryDir.excludePatterns) repositoryDir.excludePatterns.insert(1, "bogus") self.failUnlessEqual(["valid", "bogus", "more", ], repositoryDir.excludePatterns) def testConstructor_025(self): """ Test assignment of excludePatterns attribute, single invalid entry. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", ]) self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_026(self): """ Test assignment of excludePatterns attribute, multiple invalid entries. """ repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", "*" ]) self.failUnlessEqual(None, repositoryDir.excludePatterns) def testConstructor_027(self): """ Test assignment of excludePatterns attribute, mixed valid and invalid entries. 
""" repositoryDir = RepositoryDir() self.failUnlessEqual(None, repositoryDir.excludePatterns) self.failUnlessAssignRaises(ValueError, repositoryDir, "excludePatterns", ["*.jpg", "valid" ]) self.failUnlessEqual(None, repositoryDir.excludePatterns) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir() self.failUnlessEqual(repositoryDir1, repositoryDir2) self.failUnless(repositoryDir1 == repositoryDir2) self.failUnless(not repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(repositoryDir1 >= repositoryDir2) self.failUnless(not repositoryDir1 != repositoryDir2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.failUnlessEqual(repositoryDir1, repositoryDir2) self.failUnless(repositoryDir1 == repositoryDir2) self.failUnless(not repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(repositoryDir1 >= repositoryDir2) self.failUnless(not repositoryDir1 != repositoryDir2) def testComparison_003(self): """ Test comparison of two differing objects, repositoryType differs (one None). 
""" repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(repositoryType="type") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_004(self): """ Test comparison of two differing objects, repositoryType differs. """ repositoryDir1 = RepositoryDir("other", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_004a(self): """ Test comparison of two differing objects, directoryPath differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(directoryPath="/zippy") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_005(self): """ Test comparison of two differing objects, directoryPath differs. 
""" repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/zippy", "daily", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs (one None). """ repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(collectMode="incr") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_007(self): """ Test comparison of two differing objects, collectMode differs. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "gzip") repositoryDir2 = RepositoryDir("type", "/path", "incr", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" repositoryDir1 = RepositoryDir() repositoryDir2 = RepositoryDir(compressMode="gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs. """ repositoryDir1 = RepositoryDir("type", "/path", "daily", "bzip2") repositoryDir2 = RepositoryDir("type", "/path", "daily", "gzip") self.failIfEqual(repositoryDir1, repositoryDir2) self.failUnless(not repositoryDir1 == repositoryDir2) self.failUnless(repositoryDir1 < repositoryDir2) self.failUnless(repositoryDir1 <= repositoryDir2) self.failUnless(not repositoryDir1 > repositoryDir2) self.failUnless(not repositoryDir1 >= repositoryDir2) self.failUnless(repositoryDir1 != repositoryDir2) ############################# # TestSubversionConfig class ############################# class TestSubversionConfig(unittest.TestCase): """Tests for the SubversionConfig class.""" ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = SubversionConfig() obj.__repr__() obj.__str__() ################################## # Test constructor and attributes ################################## def testConstructor_001(self): """ Test constructor with no values filled in. 
""" subversion = SubversionConfig() self.failUnlessEqual(None, subversion.collectMode) self.failUnlessEqual(None, subversion.compressMode) self.failUnlessEqual(None, subversion.repositories) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, repositories=None. """ subversion = SubversionConfig("daily", "gzip", None) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual(None, subversion.repositories) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no repositories. """ subversion = SubversionConfig("daily", "gzip", []) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual([], subversion.repositories) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one repository. """ repositories = [ Repository(), ] subversion = SubversionConfig("daily", "gzip", repositories) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual(repositories, subversion.repositories) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple repositories. """ repositories = [ Repository(collectMode="daily"), Repository(collectMode="weekly"), ] subversion = SubversionConfig("daily", "gzip", repositories=repositories) self.failUnlessEqual("daily", subversion.collectMode) self.failUnlessEqual("gzip", subversion.compressMode) self.failUnlessEqual(repositories, subversion.repositories) def testConstructor_006(self): """ Test assignment of collectMode attribute, None value. 
""" subversion = SubversionConfig(collectMode="daily") self.failUnlessEqual("daily", subversion.collectMode) subversion.collectMode = None self.failUnlessEqual(None, subversion.collectMode) def testConstructor_007(self): """ Test assignment of collectMode attribute, valid value. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.collectMode) subversion.collectMode = "weekly" self.failUnlessEqual("weekly", subversion.collectMode) def testConstructor_008(self): """ Test assignment of collectMode attribute, invalid value (empty). """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.collectMode) self.failUnlessAssignRaises(ValueError, subversion, "collectMode", "") self.failUnlessEqual(None, subversion.collectMode) def testConstructor_009(self): """ Test assignment of compressMode attribute, None value. """ subversion = SubversionConfig(compressMode="gzip") self.failUnlessEqual("gzip", subversion.compressMode) subversion.compressMode = None self.failUnlessEqual(None, subversion.compressMode) def testConstructor_010(self): """ Test assignment of compressMode attribute, valid value. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.compressMode) subversion.compressMode = "bzip2" self.failUnlessEqual("bzip2", subversion.compressMode) def testConstructor_011(self): """ Test assignment of compressMode attribute, invalid value (empty). """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.compressMode) self.failUnlessAssignRaises(ValueError, subversion, "compressMode", "") self.failUnlessEqual(None, subversion.compressMode) def testConstructor_012(self): """ Test assignment of repositories attribute, None value. """ subversion = SubversionConfig(repositories=[]) self.failUnlessEqual([], subversion.repositories) subversion.repositories = None self.failUnlessEqual(None, subversion.repositories) def testConstructor_013(self): """ Test assignment of repositories attribute, [] value. 
""" subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) subversion.repositories = [] self.failUnlessEqual([], subversion.repositories) def testConstructor_014(self): """ Test assignment of repositories attribute, single valid entry. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) subversion.repositories = [ Repository(), ] self.failUnlessEqual([ Repository(), ], subversion.repositories) subversion.repositories.append(Repository(collectMode="daily")) self.failUnlessEqual([ Repository(), Repository(collectMode="daily"), ], subversion.repositories) def testConstructor_015(self): """ Test assignment of repositories attribute, multiple valid entries. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) subversion.repositories = [ Repository(collectMode="daily"), Repository(collectMode="weekly"), ] self.failUnlessEqual([ Repository(collectMode="daily"), Repository(collectMode="weekly"), ], subversion.repositories) subversion.repositories.append(Repository(collectMode="incr")) self.failUnlessEqual([ Repository(collectMode="daily"), Repository(collectMode="weekly"), Repository(collectMode="incr"), ], subversion.repositories) def testConstructor_016(self): """ Test assignment of repositories attribute, single invalid entry (None). """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [None, ]) self.failUnlessEqual(None, subversion.repositories) def testConstructor_017(self): """ Test assignment of repositories attribute, single invalid entry (wrong type). 
""" subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [SubversionConfig(), ]) self.failUnlessEqual(None, subversion.repositories) def testConstructor_018(self): """ Test assignment of repositories attribute, mixed valid and invalid entries. """ subversion = SubversionConfig() self.failUnlessEqual(None, subversion.repositories) self.failUnlessAssignRaises(ValueError, subversion, "repositories", [Repository(), SubversionConfig(), ]) self.failUnlessEqual(None, subversion.repositories) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ subversion1 = SubversionConfig() subversion2 = SubversionConfig() self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ subversion1 = SubversionConfig("daily", "gzip", None) subversion2 = SubversionConfig("daily", "gzip", None) self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. 
""" subversion1 = SubversionConfig("daily", "gzip", []) subversion2 = SubversionConfig("daily", "gzip", []) self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. """ subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.failUnlessEqual(subversion1, subversion2) self.failUnless(subversion1 == subversion2) self.failUnless(not subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(subversion1 >= subversion2) self.failUnless(not subversion1 != subversion2) def testComparison_005(self): """ Test comparison of two differing objects, collectMode differs (one None). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(collectMode="daily") self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_006(self): """ Test comparison of two differing objects, collectMode differs. 
""" subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("weekly", "gzip", [ Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_007(self): """ Test comparison of two differing objects, compressMode differs (one None). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(compressMode="bzip2") self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_008(self): """ Test comparison of two differing objects, compressMode differs. """ subversion1 = SubversionConfig("daily", "bzip2", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_009(self): """ Test comparison of two differing objects, repositories differs (one None, one empty). 
""" subversion1 = SubversionConfig() subversion2 = SubversionConfig(repositories=[]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_010(self): """ Test comparison of two differing objects, repositories differs (one None, one not empty). """ subversion1 = SubversionConfig() subversion2 = SubversionConfig(repositories=[Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_011(self): """ Test comparison of two differing objects, repositories differs (one empty, one not empty). """ subversion1 = SubversionConfig("daily", "gzip", [ ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_012(self): """ Test comparison of two differing objects, repositories differs (both not empty). 
""" subversion1 = SubversionConfig("daily", "gzip", [ Repository(), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(), Repository(), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) def testComparison_013(self): """ Test comparison of two differing objects, repositories differs (both not empty). """ subversion1 = SubversionConfig("daily", "gzip", [ Repository(repositoryType="other"), ]) subversion2 = SubversionConfig("daily", "gzip", [ Repository(repositoryType="type"), ]) self.failIfEqual(subversion1, subversion2) self.failUnless(not subversion1 == subversion2) self.failUnless(subversion1 < subversion2) self.failUnless(subversion1 <= subversion2) self.failUnless(not subversion1 > subversion2) self.failUnless(not subversion1 >= subversion2) self.failUnless(subversion1 != subversion2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. 
      We dump a document containing just the subversion configuration, and then
      make sure that if we push that document back into the C{LocalConfig}
      object, that the resulting object matches the original.

      The C{self.failUnlessEqual} method is used for the validation, so if the
      method call returns normally, everything is OK.

      @param origConfig: Original configuration.
      """
      (xmlDom, parentNode) = createOutputDom()
      origConfig.addConfig(xmlDom, parentNode)
      xmlData = serializeDom(xmlDom)
      newConfig = LocalConfig(xmlData=xmlData, validate=False)
      self.failUnlessEqual(origConfig, newConfig)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = LocalConfig()
      obj.__repr__()
      obj.__str__()

   #####################################################
   # Test basic constructor and attribute functionality
   #####################################################

   def testConstructor_001(self):
      """
      Test empty constructor, validate=False.
      """
      config = LocalConfig(validate=False)
      self.failUnlessEqual(None, config.subversion)

   def testConstructor_002(self):
      """
      Test empty constructor, validate=True.
      """
      config = LocalConfig(validate=True)
      self.failUnlessEqual(None, config.subversion)

   def testConstructor_003(self):
      """
      Test with empty config document as both data and file, validate=False.
      """
      path = self.resources["subversion.conf.1"]
      contents = open(path).read()
      self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False)

   def testConstructor_004(self):
      """
      Test assignment of subversion attribute, None value.
      """
      config = LocalConfig()
      config.subversion = None
      self.failUnlessEqual(None, config.subversion)

   def testConstructor_005(self):
      """
      Test assignment of subversion attribute, valid value.
""" config = LocalConfig() config.subversion = SubversionConfig() self.failUnlessEqual(SubversionConfig(), config.subversion) def testConstructor_006(self): """ Test assignment of subversion attribute, invalid value (not SubversionConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "subversion", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.subversion = SubversionConfig() config2 = LocalConfig() config2.subversion = SubversionConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, subversion differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.subversion = SubversionConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, subversion differs. 
""" config1 = LocalConfig() config1.subversion = SubversionConfig(collectMode="daily") config2 = LocalConfig() config2.subversion = SubversionConfig(collectMode="weekly") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None subversion section. """ config = LocalConfig() config.subversion = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty subversion section. """ config = LocalConfig() config.subversion = SubversionConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty subversion section, repositories=None. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty subversion section, repositories=[]. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", []) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty subversion section, non-empty repositories, defaults set, no values on repositories. """ repositories = [ Repository(repositoryPath="/one"), Repository(repositoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_006(self): """ Test validate on a non-empty subversion section, non-empty repositories, no defaults set, no values on repositiories. 
""" repositories = [ Repository(repositoryPath="/one"), Repository(repositoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositories = repositories self.failUnlessRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty subversion section, non-empty repositories, no defaults set, both values on repositories. """ repositories = [ Repository(repositoryPath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositories = repositories config.validate() def testValidate_008(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode only on repositories. """ repositories = [ Repository(repositoryPath="/two", collectMode="weekly") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_009(self): """ Test validate on a non-empty subversion section, non-empty repositories, compressMode only on repositories. """ repositories = [ Repository(repositoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "weekly" config.subversion.repositories = repositories config.validate() def testValidate_010(self): """ Test validate on a non-empty subversion section, non-empty repositories, compressMode default and on repository. """ repositories = [ Repository(repositoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_011(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode default and on repository. 
""" repositories = [ Repository(repositoryPath="/two", collectMode="daily") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_012(self): """ Test validate on a non-empty subversion section, non-empty repositories, collectMode and compressMode default and on repository. """ repositories = [ Repository(repositoryPath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositories = repositories config.validate() def testValidate_013(self): """ Test validate on a non-empty subversion section, repositoryDirs=None. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", repositoryDirs=None) self.failUnlessRaises(ValueError, config.validate) def testValidate_014(self): """ Test validate on a non-empty subversion section, repositoryDirs=[]. """ config = LocalConfig() config.subversion = SubversionConfig("weekly", "gzip", repositoryDirs=[]) self.failUnlessRaises(ValueError, config.validate) def testValidate_015(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, defaults set, no values on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/one"), RepositoryDir(directoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_016(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, no defaults set, no values on repositiories. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/one"), RepositoryDir(directoryPath="/two") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositoryDirs = repositoryDirs self.failUnlessRaises(ValueError, config.validate) def testValidate_017(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, no defaults set, both values on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="weekly", compressMode="gzip") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_018(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode only on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="weekly") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_019(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, compressMode only on repositoryDirs. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "weekly" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_020(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, compressMode default and on repository. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/two", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_021(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode default and on repository. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="daily") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() def testValidate_022(self): """ Test validate on a non-empty subversion section, non-empty repositoryDirs, collectMode and compressMode default and on repository. """ repositoryDirs = [ RepositoryDir(directoryPath="/two", collectMode="daily", compressMode="bzip2") ] config = LocalConfig() config.subversion = SubversionConfig() config.subversion.collectMode = "daily" config.subversion.compressMode = "gzip" config.subversion.repositoryDirs = repositoryDirs config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["subversion.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.subversion) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.subversion) def testParse_002(self): """ Parse config document with default modes, one repository. 
""" repositories = [ Repository(repositoryPath="/opt/public/svn/software"), ] path = self.resources["subversion.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) def testParse_003(self): """ Parse config document with no default modes, one repository """ repositories = [ Repository(repositoryPath="/opt/public/svn/software", collectMode="daily", compressMode="gzip"), ] path = self.resources["subversion.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) def testParse_004(self): """ Parse config document with default modes, several repositories with various overrides. 
""" repositories = [] repositories.append(Repository(repositoryPath="/opt/public/svn/one")) repositories.append(Repository(repositoryType="BDB", repositoryPath="/opt/public/svn/two", collectMode="weekly")) repositories.append(Repository(repositoryPath="/opt/public/svn/three", compressMode="bzip2")) repositories.append(Repository(repositoryType="FSFS", repositoryPath="/opt/public/svn/four", collectMode="incr", compressMode="bzip2")) path = self.resources["subversion.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(repositories, config.subversion.repositories) self.failUnlessEqual(None, config.subversion.repositoryDirs) def testParse_005(self): """ Parse config document with default modes, one repository. 
""" repositoryDirs = [ RepositoryDir(directoryPath="/opt/public/svn/software"), ] path = self.resources["subversion.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) def testParse_006(self): """ Parse config document with no default modes, one repository """ repositoryDirs = [ RepositoryDir(directoryPath="/opt/public/svn/software", collectMode="daily", compressMode="gzip"), ] path = self.resources["subversion.conf.6"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual(None, config.subversion.collectMode) self.failUnlessEqual(None, config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) def testParse_007(self): """ Parse config document with default modes, several repositoryDirs with various overrides. 
""" repositoryDirs = [] repositoryDirs.append(RepositoryDir(directoryPath="/opt/public/svn/one")) repositoryDirs.append(RepositoryDir(repositoryType="BDB", directoryPath="/opt/public/svn/two", collectMode="weekly", relativeExcludePaths=["software", ])) repositoryDirs.append(RepositoryDir(directoryPath="/opt/public/svn/three", compressMode="bzip2", excludePatterns=[".*software.*", ])) repositoryDirs.append(RepositoryDir(repositoryType="FSFS", directoryPath="/opt/public/svn/four", collectMode="incr", compressMode="bzip2", relativeExcludePaths=["cedar", "banner", ], excludePatterns=[".*software.*", ".*database.*", ])) path = self.resources["subversion.conf.7"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.subversion) self.failUnlessEqual("daily", config.subversion.collectMode) self.failUnlessEqual("gzip", config.subversion.compressMode) self.failUnlessEqual(None, config.subversion.repositories) self.failUnlessEqual(repositoryDirs, config.subversion.repositoryDirs) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ subversion = SubversionConfig() config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_002(self): """ Test with defaults set, single repository with no optional values. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_003(self): """ Test with defaults set, single repository with collectMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="incr")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_004(self): """ Test with defaults set, single repository with compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", compressMode="bzip2")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_005(self): """ Test with defaults set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="bzip2")) subversion = SubversionConfig(collectMode="daily", compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_006(self): """ Test with no defaults set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="bzip2")) subversion = SubversionConfig(repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_007(self): """ Test with compressMode set, single repository with collectMode set. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly")) subversion = SubversionConfig(compressMode="gzip", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_008(self): """ Test with collectMode set, single repository with compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", compressMode="gzip")) subversion = SubversionConfig(collectMode="weekly", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_009(self): """ Test with compressMode set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="incr", compressMode="gzip")) subversion = SubversionConfig(compressMode="bzip2", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_010(self): """ Test with collectMode set, single repository with collectMode and compressMode set. """ repositories = [] repositories.append(Repository(repositoryPath="/path", collectMode="weekly", compressMode="gzip")) subversion = SubversionConfig(collectMode="incr", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) def testAddConfig_011(self): """ Test with defaults set, multiple repositories with collectMode and compressMode set. 
""" repositories = [] repositories.append(Repository(repositoryPath="/path1", collectMode="daily", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path2", collectMode="weekly", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path3", collectMode="incr", compressMode="gzip")) repositories.append(Repository(repositoryPath="/path1", collectMode="daily", compressMode="bzip2")) repositories.append(Repository(repositoryPath="/path2", collectMode="weekly", compressMode="bzip2")) repositories.append(Repository(repositoryPath="/path3", collectMode="incr", compressMode="bzip2")) subversion = SubversionConfig(collectMode="incr", compressMode="bzip2", repositories=repositories) config = LocalConfig() config.subversion = subversion self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestBDBRepository, 'test'), unittest.makeSuite(TestFSFSRepository, 'test'), unittest.makeSuite(TestRepository, 'test'), unittest.makeSuite(TestRepositoryDir, 'test'), unittest.makeSuite(TestSubversionConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/mysqltests.py0000664000175000017500000011671612642026276022216 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software 
done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2005-2006,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests MySQL extension functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/extend/mysql.py. Code Coverage ============= This module contains individual tests for the many of the public functions and classes implemented in extend/mysql.py. There are also tests for several of the private methods. Unfortunately, it's rather difficult to test this code in an automated fashion, even if you have access to MySQL, since the actual dump would need to have access to a real database. Because of this, there aren't any tests below that actually talk to a database. As a compromise, I test some of the private methods in the implementation. Normally, I don't like to test private methods, but in this case, testing the private methods will help give us some reasonable confidence in the code even if we can't talk to a database.. This isn't perfect, but it's better than nothing. 
Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to diagnose
   and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".  Instead, what we do
   is extract a node, build some XML from it, and then feed that XML back into
   another object's constructor.  If that parse process succeeds and the old
   object is equal to the new object, we assume that the extract was
   successful.

   It would arguably be better if we could do a completely independent check -
   but implementing that check would be equivalent to re-implementing all of
   the existing functionality that we're validating here!  After all, the most
   important thing is that data can move seamlessly from object to XML
   document and back to object.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an average
   build environment.  There is no need to use a MYSQLTESTS_FULL environment
   variable to provide a "reduced feature set" test suite as for some of the
   other test modules.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.mysql import LocalConfig, MysqlConfig


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "mysql.conf.1", "mysql.conf.2", "mysql.conf.3", "mysql.conf.4",
              "mysql.conf.5", ]


#######################################################################
# Test Case Classes
#######################################################################

########################
# TestMysqlConfig class
########################

class TestMysqlConfig(unittest.TestCase):

   """Tests for the MysqlConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad
      variable names).
      """
      obj = MysqlConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
""" mysql = MysqlConfig() self.failUnlessEqual(None, mysql.user) self.failUnlessEqual(None, mysql.password) self.failUnlessEqual(None, mysql.compressMode) self.failUnlessEqual(False, mysql.all) self.failUnlessEqual(None, mysql.databases) def testConstructor_002(self): """ Test constructor with all values filled in, with valid values, databases=None. """ mysql = MysqlConfig("user", "password", "none", False, None) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("none", mysql.compressMode) self.failUnlessEqual(False, mysql.all) self.failUnlessEqual(None, mysql.databases) def testConstructor_003(self): """ Test constructor with all values filled in, with valid values, no databases. """ mysql = MysqlConfig("user", "password", "none", True, []) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("none", mysql.compressMode) self.failUnlessEqual(True, mysql.all) self.failUnlessEqual([], mysql.databases) def testConstructor_004(self): """ Test constructor with all values filled in, with valid values, with one database. """ mysql = MysqlConfig("user", "password", "gzip", True, [ "one", ]) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("gzip", mysql.compressMode) self.failUnlessEqual(True, mysql.all) self.failUnlessEqual([ "one", ], mysql.databases) def testConstructor_005(self): """ Test constructor with all values filled in, with valid values, with multiple databases. """ mysql = MysqlConfig("user", "password", "bzip2", True, [ "one", "two", ]) self.failUnlessEqual("user", mysql.user) self.failUnlessEqual("password", mysql.password) self.failUnlessEqual("bzip2", mysql.compressMode) self.failUnlessEqual(True, mysql.all) self.failUnlessEqual([ "one", "two", ], mysql.databases) def testConstructor_006(self): """ Test assignment of user attribute, None value. 
""" mysql = MysqlConfig(user="user") self.failUnlessEqual("user", mysql.user) mysql.user = None self.failUnlessEqual(None, mysql.user) def testConstructor_007(self): """ Test assignment of user attribute, valid value. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.user) mysql.user = "user" self.failUnlessEqual("user", mysql.user) def testConstructor_008(self): """ Test assignment of user attribute, invalid value (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.user) self.failUnlessAssignRaises(ValueError, mysql, "user", "") self.failUnlessEqual(None, mysql.user) def testConstructor_009(self): """ Test assignment of password attribute, None value. """ mysql = MysqlConfig(password="password") self.failUnlessEqual("password", mysql.password) mysql.password = None self.failUnlessEqual(None, mysql.password) def testConstructor_010(self): """ Test assignment of password attribute, valid value. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.password) mysql.password = "password" self.failUnlessEqual("password", mysql.password) def testConstructor_011(self): """ Test assignment of password attribute, invalid value (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.password) self.failUnlessAssignRaises(ValueError, mysql, "password", "") self.failUnlessEqual(None, mysql.password) def testConstructor_012(self): """ Test assignment of compressMode attribute, None value. """ mysql = MysqlConfig(compressMode="none") self.failUnlessEqual("none", mysql.compressMode) mysql.compressMode = None self.failUnlessEqual(None, mysql.compressMode) def testConstructor_013(self): """ Test assignment of compressMode attribute, valid value. 
""" mysql = MysqlConfig() self.failUnlessEqual(None, mysql.compressMode) mysql.compressMode = "none" self.failUnlessEqual("none", mysql.compressMode) mysql.compressMode = "gzip" self.failUnlessEqual("gzip", mysql.compressMode) mysql.compressMode = "bzip2" self.failUnlessEqual("bzip2", mysql.compressMode) def testConstructor_014(self): """ Test assignment of compressMode attribute, invalid value (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.compressMode) self.failUnlessAssignRaises(ValueError, mysql, "compressMode", "") self.failUnlessEqual(None, mysql.compressMode) def testConstructor_015(self): """ Test assignment of compressMode attribute, invalid value (not in list). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.compressMode) self.failUnlessAssignRaises(ValueError, mysql, "compressMode", "bogus") self.failUnlessEqual(None, mysql.compressMode) def testConstructor_016(self): """ Test assignment of all attribute, None value. """ mysql = MysqlConfig(all=True) self.failUnlessEqual(True, mysql.all) mysql.all = None self.failUnlessEqual(False, mysql.all) def testConstructor_017(self): """ Test assignment of all attribute, valid value (real boolean). """ mysql = MysqlConfig() self.failUnlessEqual(False, mysql.all) mysql.all = True self.failUnlessEqual(True, mysql.all) mysql.all = False self.failUnlessEqual(False, mysql.all) #pylint: disable=R0204 def testConstructor_018(self): """ Test assignment of all attribute, valid value (expression). """ mysql = MysqlConfig() self.failUnlessEqual(False, mysql.all) mysql.all = 0 self.failUnlessEqual(False, mysql.all) mysql.all = [] self.failUnlessEqual(False, mysql.all) mysql.all = None self.failUnlessEqual(False, mysql.all) mysql.all = ['a'] self.failUnlessEqual(True, mysql.all) mysql.all = 3 self.failUnlessEqual(True, mysql.all) def testConstructor_019(self): """ Test assignment of databases attribute, None value. 
""" mysql = MysqlConfig(databases=[]) self.failUnlessEqual([], mysql.databases) mysql.databases = None self.failUnlessEqual(None, mysql.databases) def testConstructor_020(self): """ Test assignment of databases attribute, [] value. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) mysql.databases = [] self.failUnlessEqual([], mysql.databases) def testConstructor_021(self): """ Test assignment of databases attribute, single valid entry. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) mysql.databases = ["/whatever", ] self.failUnlessEqual(["/whatever", ], mysql.databases) mysql.databases.append("/stuff") self.failUnlessEqual(["/whatever", "/stuff", ], mysql.databases) def testConstructor_022(self): """ Test assignment of databases attribute, multiple valid entries. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) mysql.databases = ["/whatever", "/stuff", ] self.failUnlessEqual(["/whatever", "/stuff", ], mysql.databases) mysql.databases.append("/etc/X11") self.failUnlessEqual(["/whatever", "/stuff", "/etc/X11", ], mysql.databases) def testConstructor_023(self): """ Test assignment of databases attribute, single invalid entry (empty). """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) self.failUnlessAssignRaises(ValueError, mysql, "databases", ["", ]) self.failUnlessEqual(None, mysql.databases) def testConstructor_024(self): """ Test assignment of databases attribute, mixed valid and invalid entries. """ mysql = MysqlConfig() self.failUnlessEqual(None, mysql.databases) self.failUnlessAssignRaises(ValueError, mysql, "databases", ["good", "", "alsogood", ]) self.failUnlessEqual(None, mysql.databases) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig() self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None, list None. """ mysql1 = MysqlConfig("user", "password", "gzip", True, None) mysql2 = MysqlConfig("user", "password", "gzip", True, None) self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_003(self): """ Test comparison of two identical objects, all attributes non-None, list empty. """ mysql1 = MysqlConfig("user", "password", "bzip2", True, []) mysql2 = MysqlConfig("user", "password", "bzip2", True, []) self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_004(self): """ Test comparison of two identical objects, all attributes non-None, list non-empty. """ mysql1 = MysqlConfig("user", "password", "none", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "none", True, [ "whatever", ]) self.failUnlessEqual(mysql1, mysql2) self.failUnless(mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(mysql1 >= mysql2) self.failUnless(not mysql1 != mysql2) def testComparison_005(self): """ Test comparison of two differing objects, user differs (one None). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(user="user") self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_006(self): """ Test comparison of two differing objects, user differs. """ mysql1 = MysqlConfig("user1", "password", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user2", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_007(self): """ Test comparison of two differing objects, password differs (one None). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(password="password") self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_008(self): """ Test comparison of two differing objects, password differs. """ mysql1 = MysqlConfig("user", "password1", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password2", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_009(self): """ Test comparison of two differing objects, compressMode differs (one None). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(compressMode="gzip") self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_010(self): """ Test comparison of two differing objects, compressMode differs. """ mysql1 = MysqlConfig("user", "password", "bzip2", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_011(self): """ Test comparison of two differing objects, all differs (one None). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(all=True) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_012(self): """ Test comparison of two differing objects, all differs. """ mysql1 = MysqlConfig("user", "password", "gzip", False, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_013(self): """ Test comparison of two differing objects, databases differs (one None, one empty). 
""" mysql1 = MysqlConfig() mysql2 = MysqlConfig(databases=[]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_014(self): """ Test comparison of two differing objects, databases differs (one None, one not empty). """ mysql1 = MysqlConfig() mysql2 = MysqlConfig(databases=["whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_015(self): """ Test comparison of two differing objects, databases differs (one empty, one not empty). """ mysql1 = MysqlConfig("user", "password", "gzip", True, [ ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(mysql1 < mysql2) self.failUnless(mysql1 <= mysql2) self.failUnless(not mysql1 > mysql2) self.failUnless(not mysql1 >= mysql2) self.failUnless(mysql1 != mysql2) def testComparison_016(self): """ Test comparison of two differing objects, databases differs (both not empty). 
""" mysql1 = MysqlConfig("user", "password", "gzip", True, [ "whatever", ]) mysql2 = MysqlConfig("user", "password", "gzip", True, [ "whatever", "bogus", ]) self.failIfEqual(mysql1, mysql2) self.failUnless(not mysql1 == mysql2) self.failUnless(not mysql1 < mysql2) # note: different than standard due to unsorted list self.failUnless(not mysql1 <= mysql2) # note: different than standard due to unsorted list self.failUnless(mysql1 > mysql2) # note: different than standard due to unsorted list self.failUnless(mysql1 >= mysql2) # note: different than standard due to unsorted list self.failUnless(mysql1 != mysql2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the mysql configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. 
""" (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.mysql) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.mysql) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["mysql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of mysql attribute, None value. """ config = LocalConfig() config.mysql = None self.failUnlessEqual(None, config.mysql) def testConstructor_005(self): """ Test assignment of mysql attribute, valid value. """ config = LocalConfig() config.mysql = MysqlConfig() self.failUnlessEqual(MysqlConfig(), config.mysql) def testConstructor_006(self): """ Test assignment of mysql attribute, invalid value (not MysqlConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "mysql", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ config1 = LocalConfig() config1.mysql = MysqlConfig() config2 = LocalConfig() config2.mysql = MysqlConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, mysql differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.mysql = MysqlConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, mysql differs. """ config1 = LocalConfig() config1.mysql = MysqlConfig(user="one") config2 = LocalConfig() config2.mysql = MysqlConfig(user="two") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None mysql section. 
""" config = LocalConfig() config.mysql = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty mysql section. """ config = LocalConfig() config.mysql = MysqlConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty mysql section, all=True, databases=None. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", True, None) config.validate() def testValidate_004(self): """ Test validate on a non-empty mysql section, all=True, empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", True, []) config.validate() def testValidate_005(self): """ Test validate on a non-empty mysql section, all=True, non-empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, ["whatever", ]) self.failUnlessRaises(ValueError, config.validate) def testValidate_006(self): """ Test validate on a non-empty mysql section, all=False, databases=None. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_007(self): """ Test validate on a non-empty mysql section, all=False, empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", False, []) self.failUnlessRaises(ValueError, config.validate) def testValidate_008(self): """ Test validate on a non-empty mysql section, all=False, non-empty databases. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, ["whatever", ]) config.validate() def testValidate_009(self): """ Test validate on a non-empty mysql section, with user=None. """ config = LocalConfig() config.mysql = MysqlConfig(None, "password", "gzip", True, None) config.validate() def testValidate_010(self): """ Test validate on a non-empty mysql section, with password=None. 
""" config = LocalConfig() config.mysql = MysqlConfig("user", None, "gzip", True, None) config.validate() def testValidate_011(self): """ Test validate on a non-empty mysql section, with user=None and password=None. """ config = LocalConfig() config.mysql = MysqlConfig(None, None, "gzip", True, None) config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["mysql.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.mysql) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.mysql) def testParse_003(self): """ Parse config document containing only a mysql section, no databases, all=True. """ path = self.resources["mysql.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("none", config.mysql.compressMode) self.failUnlessEqual(True, config.mysql.all) self.failUnlessEqual(None, config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("none", config.mysql.compressMode) self.failIfEqual(None, config.mysql.password) self.failUnlessEqual(True, config.mysql.all) self.failUnlessEqual(None, config.mysql.databases) def testParse_004(self): """ Parse config document containing only a mysql section, single database, all=False. 
""" path = self.resources["mysql.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("gzip", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("gzip", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database", ], config.mysql.databases) def testParse_005(self): """ Parse config document containing only a mysql section, multiple databases, all=False. """ path = self.resources["mysql.conf.4"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual("user", config.mysql.user) self.failUnlessEqual("password", config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) def testParse_006(self): """ Parse config document containing only a mysql section, no user or password, multiple databases, all=False. 
""" path = self.resources["mysql.conf.5"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual(None, config.mysql.user) self.failUnlessEqual(None, config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.mysql) self.failUnlessEqual(None, config.mysql.user) self.failUnlessEqual(None, config.mysql.password) self.failUnlessEqual("bzip2", config.mysql.compressMode) self.failUnlessEqual(False, config.mysql.all) self.failUnlessEqual(["database1", "database2", ], config.mysql.databases) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document """ config = LocalConfig() self.validateAddConfig(config) def testAddConfig_003(self): """ Test with no databases, all other values filled in, all=True. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", True, None) self.validateAddConfig(config) def testAddConfig_004(self): """ Test with no databases, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", False, None) self.validateAddConfig(config) def testAddConfig_005(self): """ Test with single database, all other values filled in, all=True. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, [ "database", ]) self.validateAddConfig(config) def testAddConfig_006(self): """ Test with single database, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "none", False, [ "database", ]) self.validateAddConfig(config) def testAddConfig_007(self): """ Test with multiple databases, all other values filled in, all=True. 
""" config = LocalConfig() config.mysql = MysqlConfig("user", "password", "bzip2", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_008(self): """ Test with multiple databases, all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", "password", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_009(self): """ Test with multiple databases, user=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig(None, "password", "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_010(self): """ Test with multiple databases, password=None but all other values filled in, all=False. """ config = LocalConfig() config.mysql = MysqlConfig("user", None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) def testAddConfig_011(self): """ Test with multiple databases, user=None and password=None but all other values filled in, all=False. 
""" config = LocalConfig() config.mysql = MysqlConfig(None, None, "gzip", True, [ "database1", "database2", ]) self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestMysqlConfig, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/writersutiltests.py0000664000175000017500000020465312642026326023440 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2007,2010,2011 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests writer utility functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/writers/util.py. Code Coverage ============= This module contains individual tests for the public functions and classes implemented in writers/util.py. I usually prefer to test only the public interface to a class, because that way the regression tests don't depend on the internal implementation. In this case, I've decided to test some of the private methods, because their "privateness" is more a matter of presenting a clean external interface than anything else (most of the private methods are static). Being able to test these methods also makes it easier to gain some reasonable confidence in the code even if some tests are not run because WRITERSUTILTESTS_FULL is not set to "Y" in the environment (see below). Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality, and I prefer to avoid using overly descriptive (read: long) test names, as well. Instead, I use lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_010}. Each method has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge how important a given failure is, and also makes it somewhat easier to diagnose and fix individual problems. Full vs. Reduced Tests ====================== Some Cedar Backup regression tests require a specialized environment in order to run successfully. 
This environment won't necessarily be available on every build system out there (for instance, on a Debian autobuilder). Because of this, the default behavior is to run a "reduced feature set" test suite that has no surprising system, kernel or network requirements. If you want to run all of the tests, set WRITERSUTILTESTS_FULL to "Y" in the environment. In this module, there are three dependencies: the system must have C{mkisofs} installed, the kernel must allow ISO images to be mounted in-place via a loopback mechanism, and the current user must be allowed (via C{sudo}) to mount and unmount such loopback filesystems. See documentation by the L{TestIsoImage.mountImage} and L{TestIsoImage.unmountImage} methods for more information on what C{sudo} access is required. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import os import unittest import tempfile import time from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar from CedarBackup2.testutil import platformMacOsX, platformSupportsLinks from CedarBackup2.filesystem import FilesystemList from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed, IsoImage from CedarBackup2.util import executeCommand ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data", ] RESOURCES = [ "tree9.tar.gz", ] SUDO_CMD = [ "sudo", ] HDIUTIL_CMD = [ "hdiutil", ] GCONF_CMD = [ "gconftool-2", ] INVALID_FILE = "bogus" # This file name should never exist ####################################################################### # Utility functions ####################################################################### def runAllTests(): """Returns true/false 
depending on whether the full test suite should be run.""" if "WRITERSUTILTESTS_FULL" in os.environ: return os.environ["WRITERSUTILTESTS_FULL"] == "Y" else: return False ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ################ # Setup methods ################ def setUp(self): pass def tearDown(self): pass ######################## # Test validateScsiId() ######################## def testValidateScsiId_001(self): """ Test with simple scsibus,target,lun address. """ scsiId = "0,0,0" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_002(self): """ Test with simple scsibus,target,lun address containing spaces. """ scsiId = " 0, 0, 0 " result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_003(self): """ Test with simple ATA address. """ scsiId = "ATA:3,2,1" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_004(self): """ Test with simple ATA address containing spaces. """ scsiId = "ATA: 3, 2,1 " result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_005(self): """ Test with simple ATAPI address. """ scsiId = "ATAPI:1,2,3" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_006(self): """ Test with simple ATAPI address containing spaces. """ scsiId = " ATAPI:1, 2, 3" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_007(self): """ Test with default-device Mac address. """ scsiId = "IOCompactDiscServices" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_008(self): """ Test with an alternate-device Mac address. 
""" scsiId = "IOCompactDiscServices/2" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_009(self): """ Test with an alternate-device Mac address. """ scsiId = "IOCompactDiscServices/12" result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) def testValidateScsiId_010(self): """ Test with an invalid address with a missing field. """ scsiId = "1,2" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_011(self): """ Test with an invalid Mac-style address with a backslash. """ scsiId = "IOCompactDiscServices\\3" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_012(self): """ Test with an invalid address with an invalid prefix separator. """ scsiId = "ATAPI;1,2,3" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_013(self): """ Test with an invalid address with an invalid prefix separator. """ scsiId = "ATA-1,2,3" self.failUnlessRaises(ValueError, validateScsiId, scsiId) def testValidateScsiId_014(self): """ Test with a None SCSI id. """ scsiId = None result = validateScsiId(scsiId) self.failUnlessEqual(scsiId, result) ############################ # Test validateDriveSpeed() ############################ #pylint: disable=R0204 def testValidateDriveSpeed_001(self): """ Test for a valid drive speed. """ speed = 1 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 2 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 30 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 2.0 result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) speed = 1.3 result = validateDriveSpeed(speed) self.failUnlessEqual(result, 1) # truncated def testValidateDriveSpeed_002(self): """ Test for a None drive speed (special case). 
""" speed = None result = validateDriveSpeed(speed) self.failUnlessEqual(result, speed) def testValidateDriveSpeed_003(self): """ Test for an invalid drive speed (zero) """ speed = 0 self.failUnlessRaises(ValueError, validateDriveSpeed, speed) def testValidateDriveSpeed_004(self): """ Test for an invalid drive speed (negative) """ speed = -1 self.failUnlessRaises(ValueError, validateDriveSpeed, speed) def testValidateDriveSpeed_005(self): """ Test for an invalid drive speed (not integer) """ speed = "ken" self.failUnlessRaises(ValueError, validateDriveSpeed, speed) ##################### # TestIsoImage class ##################### class TestIsoImage(unittest.TestCase): """Tests for the IsoImage class.""" ################ # Setup methods ################ def setUp(self): try: self.disableGnomeAutomount() self.mounted = False self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): if self.mounted: self.unmountImage() removedir(self.tmpdir) self.enableGnomeAutomount() ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def mountImage(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using loopback. This function chooses the correct operating system-specific function and calls it. If there is no operating-system-specific function, we fall back to the generic function, which uses 'sudo mount'. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. 
""" if platformMacOsX(): return self.mountImageDarwin(imagePath) else: return self.mountImageGeneric(imagePath) def mountImageDarwin(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using Darwin's C{hdiutil} program. Darwin (Mac OS X) uses the C{hdiutil} program to mount volumes. The mount command doesn't really exist (or rather, doesn't know what to do with ISO 9660 volumes). @note: According to the manpage, the mountpoint path can't be any longer than MNAMELEN characters (currently 90?) so you might have problems with this depending on how your test environment is set up. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) os.mkdir(mountPath) args = [ "attach", "-mountpoint", mountPath, imagePath, ] (result, output) = executeCommand(HDIUTIL_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to mount image." % result) self.mounted = True return mountPath def mountImageGeneric(self, imagePath): """ Mounts an ISO image at C{self.tmpdir/mnt} using loopback. Note that this will fail unless the user has been granted permissions via sudo, using something like this: Cmnd_Alias LOOPMOUNT = /bin/mount -d -t iso9660 -o loop * * Keep in mind that this entry is a security hole, so you might not want to keep it in C{/etc/sudoers} all of the time. @return: Path the image is mounted at. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) os.mkdir(mountPath) args = [ "mount", "-t", "iso9660", "-o", "loop", imagePath, mountPath, ] (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to mount image." % result) self.mounted = True return mountPath def unmountImage(self): """ Unmounts an ISO image from C{self.tmpdir/mnt}. This function chooses the correct operating system-specific function and calls it. 
If there is no operating-system-specific function, we fall back to the generic function, which uses 'sudo unmount'. @raise IOError: If the command cannot be executed. """ if platformMacOsX(): self.unmountImageDarwin() else: self.unmountImageGeneric() def unmountImageDarwin(self): """ Unmounts an ISO image from C{self.tmpdir/mnt} using Darwin's C{hdiutil} program. Darwin (Mac OS X) uses the C{hdiutil} program to mount volumes. The mount command doesn't really exist (or rather, doesn't know what to do with ISO 9660 volumes). @note: According to the manpage, the mountpoint path can't be any longer than MNAMELEN characters (currently 90?) so you might have problems with this depending on how your test environment is set up. @raise IOError: If the command cannot be executed. """ mountPath = self.buildPath([ "mnt", ]) args = [ "detach", mountPath, ] (result, output) = executeCommand(HDIUTIL_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to unmount image." % result) self.mounted = False def unmountImageGeneric(self): """ Unmounts an ISO image from C{self.tmpdir/mnt}. Sometimes, multiple tries are needed because the ISO filesystem is still in use. We try twice with a 1-second pause between attempts. If this isn't successful, you may run out of loopback devices. Check for leftover mounts using 'losetup -a' as root. You can remove a leftover mount using something like 'losetup -d /dev/loop0'. Note that this will fail unless the user has been granted permissions via sudo, using something like this: Cmnd_Alias LOOPUNMOUNT = /bin/umount -d -t iso9660 * Keep in mind that this entry is a security hole, so you might not want to keep it in C{/etc/sudoers} all of the time. @raise IOError: If the command cannot be executed. 
""" mountPath = self.buildPath([ "mnt", ]) args = [ "umount", "-d", "-t", "iso9660", mountPath, ] (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: time.sleep(1) (result, output) = executeCommand(SUDO_CMD, args, returnOutput=True) if result != 0: raise IOError("Error (%d) executing command to unmount image." % result) self.mounted = False def disableGnomeAutomount(self): """ Disables GNOME auto-mounting of ISO volumes when full tests are enabled. As of this writing (October 2011), recent versions of GNOME in Debian come pre-configured to auto-mount various kinds of media (like CDs and thumb drives). Besides auto-mounting the media, GNOME also often opens up a Nautilus browser window to explore the newly-mounted media. This causes lots of problems for these unit tests, which assume that they have complete control over the mounting and unmounting process. So, for these tests to work, we need to disable GNOME auto-mounting. """ self.origMediaAutomount = None self.origMediaAutomountOpen = None if runAllTests(): args = [ "--get", "/apps/nautilus/preferences/media_automount", ] (result, output) = executeCommand(GCONF_CMD, args, returnOutput=True) if result == 0: self.origMediaAutomount = output[0][:-1] # pylint: disable=W0201 if self.origMediaAutomount == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount", "false", ] executeCommand(GCONF_CMD, args) args = [ "--get", "/apps/nautilus/preferences/media_automount_open", ] (result, output) = executeCommand(GCONF_CMD, args, returnOutput=True) if result == 0: self.origMediaAutomountOpen = output[0][:-1] # pylint: disable=W0201 if self.origMediaAutomountOpen == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount_open", "false", ] executeCommand(GCONF_CMD, args) def enableGnomeAutomount(self): """ Resets GNOME auto-mounting options back to their state prior to disableGnomeAutomount(). 
""" if self.origMediaAutomount == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount", "true", ] executeCommand(GCONF_CMD, args) if self.origMediaAutomountOpen == "true": args = [ "--type", "bool", "--set", "/apps/nautilus/preferences/media_automount_open", "true", ] executeCommand(GCONF_CMD, args) ################### # Test constructor ################### def testConstructor_001(self): """ Test the constructor using all default arguments. """ isoImage = IsoImage() self.failUnlessEqual(None, isoImage.device) self.failUnlessEqual(None, isoImage.boundaries) self.failUnlessEqual(None, isoImage.graftPoint) self.failUnlessEqual(True, isoImage.useRockRidge) self.failUnlessEqual(None, isoImage.applicationId) self.failUnlessEqual(None, isoImage.biblioFile) self.failUnlessEqual(None, isoImage.publisherId) self.failUnlessEqual(None, isoImage.preparerId) self.failUnlessEqual(None, isoImage.volumeId) def testConstructor_002(self): """ Test the constructor using non-default arguments. """ isoImage = IsoImage("/dev/cdrw", boundaries=(1, 2), graftPoint="/france") self.failUnlessEqual("/dev/cdrw", isoImage.device) self.failUnlessEqual((1, 2), isoImage.boundaries) self.failUnlessEqual("/france", isoImage.graftPoint) self.failUnlessEqual(True, isoImage.useRockRidge) self.failUnlessEqual(None, isoImage.applicationId) self.failUnlessEqual(None, isoImage.biblioFile) self.failUnlessEqual(None, isoImage.publisherId) self.failUnlessEqual(None, isoImage.preparerId) self.failUnlessEqual(None, isoImage.volumeId) ################################ # Test IsoImage utility methods ################################ def testUtilityMethods_001(self): """ Test _buildDirEntries() with an empty entries dictionary. """ entries = {} result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(0, len(result)) def testUtilityMethods_002(self): """ Test _buildDirEntries() with an entries dictionary that has no graft points. 
""" entries = {} entries["/one/two/three"] = None entries["/four/five/six"] = None entries["/seven/eight/nine"] = None result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(3, len(result)) self.failUnless("/one/two/three" in result) self.failUnless("/four/five/six" in result) self.failUnless("/seven/eight/nine" in result) def testUtilityMethods_003(self): """ Test _buildDirEntries() with an entries dictionary that has all graft points. """ entries = {} entries["/one/two/three"] = "/backup1" entries["/four/five/six"] = "backup2" entries["/seven/eight/nine"] = "backup3" result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(3, len(result)) self.failUnless("backup1/=/one/two/three" in result) self.failUnless("backup2/=/four/five/six" in result) self.failUnless("backup3/=/seven/eight/nine" in result) def testUtilityMethods_004(self): """ Test _buildDirEntries() with an entries dictionary that has mixed graft points and not. """ entries = {} entries["/one/two/three"] = "backup1" entries["/four/five/six"] = None entries["/seven/eight/nine"] = "/backup3" result = IsoImage._buildDirEntries(entries) self.failUnlessEqual(3, len(result)) self.failUnless("backup1/=/one/two/three" in result) self.failUnless("/four/five/six" in result) self.failUnless("backup3/=/seven/eight/nine" in result) def testUtilityMethods_005(self): """ Test _buildGeneralArgs() with all optional values as None. """ isoImage = IsoImage() result = isoImage._buildGeneralArgs() self.failUnlessEqual(0, len(result)) def testUtilityMethods_006(self): """ Test _buildGeneralArgs() with applicationId set. """ isoImage = IsoImage() isoImage.applicationId = "one" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-A", "one", ], result) def testUtilityMethods_007(self): """ Test _buildGeneralArgs() with biblioFile set. 
""" isoImage = IsoImage() isoImage.biblioFile = "two" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-biblio", "two", ], result) def testUtilityMethods_008(self): """ Test _buildGeneralArgs() with publisherId set. """ isoImage = IsoImage() isoImage.publisherId = "three" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-publisher", "three", ], result) def testUtilityMethods_009(self): """ Test _buildGeneralArgs() with preparerId set. """ isoImage = IsoImage() isoImage.preparerId = "four" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-p", "four", ], result) def testUtilityMethods_010(self): """ Test _buildGeneralArgs() with volumeId set. """ isoImage = IsoImage() isoImage.volumeId = "five" result = isoImage._buildGeneralArgs() self.failUnlessEqual(["-V", "five", ], result) def testUtilityMethods_011(self): """ Test _buildSizeArgs() with device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_012(self): """ Test _buildSizeArgs() with useRockRidge set to True and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = True result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_013(self): """ Test _buildSizeArgs() with useRockRidge set to False and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = False result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "backup1/=/one/two/three", ], result) def testUtilityMethods_014(self): """ Test _buildSizeArgs() with device as None and boundaries as non-None. 
""" entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device=None, boundaries=(1, 2)) result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_015(self): """ Test _buildSizeArgs() with device as non-None and boundaries as None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=None) result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "backup1/=/one/two/three", ], result) def testUtilityMethods_016(self): """ Test _buildSizeArgs() with device and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=(1, 2)) result = isoImage._buildSizeArgs(entries) self.failUnlessEqual(["-print-size", "-graft-points", "-r", "-C", "1,2", "-M", "/dev/cdrw", "backup1/=/one/two/three", ], result) def testUtilityMethods_017(self): """ Test _buildWriteArgs() with device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-r", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_018(self): """ Test _buildWriteArgs() with useRockRidge set to True and device and boundaries at defaults. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = True result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-r", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_019(self): """ Test _buildWriteArgs() with useRockRidge set to False and device and boundaries at defaults. 
""" entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage() isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_020(self): """ Test _buildWriteArgs() with device as None and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device=None, boundaries=(3, 4)) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_021(self): """ Test _buildWriteArgs() with device as non-None and boundaries as None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=None) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "backup1/=/one/two/three", ], result) def testUtilityMethods_022(self): """ Test _buildWriteArgs() with device and boundaries as non-None. """ entries = {} entries["/one/two/three"] = "backup1" isoImage = IsoImage(device="/dev/cdrw", boundaries=(3, 4)) isoImage.useRockRidge = False result = isoImage._buildWriteArgs(entries, "/tmp/file.iso") self.failUnlessEqual(["-graft-points", "-o", "/tmp/file.iso", "-C", "3,4", "-M", "/dev/cdrw", "backup1/=/one/two/three", ], result) ################## # Test addEntry() ################## def testAddEntry_001(self): """ Attempt to add a non-existent entry. """ file1 = self.buildPath([ INVALID_FILE, ]) isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_002(self): """ Attempt to add a an entry that is a soft link to a file. 
""" if platformSupportsLinks(): self.extractTar("tree9") file1 = self.buildPath([ "tree9", "dir002", "link003", ]) isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_003(self): """ Attempt to add a an entry that is a soft link to a directory """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "link001", ]) isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.addEntry, file1) def testAddEntry_004(self): """ Attempt to add a file, no graft point set. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_005(self): """ Attempt to add a file, graft point set on the object level. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_006(self): """ Attempt to add a file, graft point set on the method level. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff") self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_007(self): """ Attempt to add a file, graft point set on the object and method levels. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff") self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_008(self): """ Attempt to add a file, graft point set on the object and method levels, where method value is None (which can't be distinguished from the method value being unset). 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint=None) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_009(self): """ Attempt to add a directory, no graft point set. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1) self.failUnlessEqual({ dir1:os.path.basename(dir1), }, isoImage.entries) def testAddEntry_010(self): """ Attempt to add a directory, graft point set on the object level. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1) self.failUnlessEqual({ dir1:os.path.join("p", "tree9") }, isoImage.entries) def testAddEntry_011(self): """ Attempt to add a directory, graft point set on the method level. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s") self.failUnlessEqual({ dir1:os.path.join("s", "tree9"), }, isoImage.entries) def testAddEntry_012(self): """ Attempt to add a file, no graft point set, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, contentsOnly=True) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_013(self): """ Attempt to add a file, graft point set on the object level, contentsOnly=True. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, contentsOnly=True) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_014(self): """ Attempt to add a file, graft point set on the method level, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff", contentsOnly=True) self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_015(self): """ Attempt to add a file, graft point set on the object and method levels, contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="stuff", contentsOnly=True) self.failUnlessEqual({ file1:"stuff", }, isoImage.entries) def testAddEntry_016(self): """ Attempt to add a file, graft point set on the object and method levels, where method value is None (which can't be distinguished from the method value being unset), contentsOnly=True. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint=None, contentsOnly=True) self.failUnlessEqual({ file1:"whatever", }, isoImage.entries) def testAddEntry_017(self): """ Attempt to add a directory, no graft point set, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, contentsOnly=True) self.failUnlessEqual({ dir1:None, }, isoImage.entries) def testAddEntry_018(self): """ Attempt to add a directory, graft point set on the object level, contentsOnly=True. 
""" self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, contentsOnly=True) self.failUnlessEqual({ dir1:"p" }, isoImage.entries) def testAddEntry_019(self): """ Attempt to add a directory, graft point set on the method level, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.failUnlessEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_020(self): """ Attempt to add a directory, graft point set on the object and methods levels, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.failUnlessEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_021(self): """ Attempt to add a directory, graft point set on the object and methods levels, contentsOnly=True. """ self.extractTar("tree9") dir1 = self.buildPath([ "tree9" ]) isoImage = IsoImage(graftPoint="p") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(dir1, graftPoint="s", contentsOnly=True) self.failUnlessEqual({ dir1:"s", }, isoImage.entries) def testAddEntry_022(self): """ Attempt to add a file that has already been added, override=False. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:None, }, isoImage.entries) self.failUnlessRaises(ValueError, isoImage.addEntry, file1, override=False) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_023(self): """ Attempt to add a file that has already been added, override=True. 
""" self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage() self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1) self.failUnlessEqual({ file1:None, }, isoImage.entries) isoImage.addEntry(file1, override=True) self.failUnlessEqual({ file1:None, }, isoImage.entries) def testAddEntry_024(self): """ Attempt to add a directory that has already been added, override=False, changing the graft point. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="one") self.failUnlessEqual({ file1:"one", }, isoImage.entries) self.failUnlessRaises(ValueError, isoImage.addEntry, file1, graftPoint="two", override=False) self.failUnlessEqual({ file1:"one", }, isoImage.entries) def testAddEntry_025(self): """ Attempt to add a directory that has already been added, override=True, changing the graft point. """ self.extractTar("tree9") file1 = self.buildPath([ "tree9", "file001", ]) isoImage = IsoImage(graftPoint="whatever") self.failUnlessEqual({}, isoImage.entries) isoImage.addEntry(file1, graftPoint="one") self.failUnlessEqual({ file1:"one", }, isoImage.entries) isoImage.addEntry(file1, graftPoint="two", override=True) self.failUnlessEqual({ file1:"two", }, isoImage.entries) ########################## # Test getEstimatedSize() ########################## def testGetEstimatedSize_001(self): """ Test with an empty list. """ self.extractTar("tree9") isoImage = IsoImage() self.failUnlessRaises(ValueError, isoImage.getEstimatedSize) def testGetEstimatedSize_002(self): """ Test with non-empty empty list. 
""" self.extractTar("tree9") dir1 = self.buildPath([ "tree9", ]) isoImage = IsoImage() isoImage.addEntry(dir1, graftPoint="base") result = isoImage.getEstimatedSize() self.failUnless(result > 0) #################### # Test writeImage() #################### def testWriteImage_001(self): """ Attempt to write an image containing no entries. """ isoImage = IsoImage() imagePath = self.buildPath([ "image.iso", ]) self.failUnlessRaises(ValueError, isoImage.writeImage, imagePath) def testWriteImage_002(self): """ Attempt to write an image containing only an empty directory, no graft point. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "dir002") in fsList) def testWriteImage_003(self): """ Attempt to write an image containing only an empty directory, with a graft point. """ self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="base") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base") in fsList) self.failUnless(os.path.join(mountPath, "base", "dir002") in fsList) def testWriteImage_004(self): """ Attempt to write an image containing only a non-empty directory, no graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(10, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "dir002") in fsList) self.failUnless(os.path.join(mountPath, "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", "dir002", ) in fsList) def testWriteImage_005(self): """ Attempt to write an image containing only a non-empty directory, with a graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint=os.path.join("something", "else")) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(12, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002") in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", "dir002", ) in fsList) def testWriteImage_006(self): """ Attempt to write an image containing only a file, no graft point. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_007(self): """ Attempt to write an image containing only a file, with a graft point. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="point") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "point", ) in fsList) self.failUnless(os.path.join(mountPath, "point", "file001", ) in fsList) def testWriteImage_008(self): """ Attempt to write an image containing a file and an empty directory, no graft points. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", ) in fsList) def testWriteImage_009(self): """ Attempt to write an image containing a file and an empty directory, with graft points. 
""" self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="other") isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(5, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir002", ) in fsList) def testWriteImage_010(self): """ Attempt to write an image containing a file and a non-empty directory, mixed graft points. """ self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint=None) isoImage.addEntry(dir1) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(11, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link003", ) in fsList) 
self.failUnless(os.path.join(mountPath, "base", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "dir002", ) in fsList) def testWriteImage_011(self): """ Attempt to write an image containing several files and a non-empty directory, mixed graft points. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) file2 = self.buildPath([ "tree9", "file002" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1) isoImage.addEntry(file2, graftPoint="other") isoImage.addEntry(dir1, graftPoint="base") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(13, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", "dir002", ) in fsList) def testWriteImage_012(self): """ Attempt to write an image containing a deeply-nested directory. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="something") isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(24, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir001", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link001", ) in fsList) 
      self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link002", ) in fsList)
      self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link003", ) in fsList)
      self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "link004", ) in fsList)
      self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "dir001", ) in fsList)
      self.failUnless(os.path.join(mountPath, "something", "tree9", "dir002", "dir002", ) in fsList)

   def testWriteImage_013(self):
      """
      Attempt to write an image containing only an empty directory, no graft
      point, contentsOnly=True.
      """
      self.extractTar("tree9")
      isoImage = IsoImage()
      dir1 = self.buildPath([ "tree9", "dir001", "dir002", ])
      imagePath = self.buildPath([ "image.iso", ])
      isoImage.addEntry(dir1, contentsOnly=True)
      isoImage.writeImage(imagePath)
      mountPath = self.mountImage(imagePath)
      fsList = FilesystemList()
      fsList.addDirContents(mountPath)
      self.failUnlessEqual(1, len(fsList))
      self.failUnless(mountPath in fsList)

   def testWriteImage_014(self):
      """
      Attempt to write an image containing only an empty directory, with a
      graft point, contentsOnly=True.
      """
      self.extractTar("tree9")
      isoImage = IsoImage()
      dir1 = self.buildPath([ "tree9", "dir001", "dir002", ])
      imagePath = self.buildPath([ "image.iso", ])
      isoImage.addEntry(dir1, graftPoint="base", contentsOnly=True)
      isoImage.writeImage(imagePath)
      mountPath = self.mountImage(imagePath)
      fsList = FilesystemList()
      fsList.addDirContents(mountPath)
      self.failUnlessEqual(2, len(fsList))
      self.failUnless(mountPath in fsList)
      self.failUnless(os.path.join(mountPath, "base") in fsList)

   def testWriteImage_015(self):
      """
      Attempt to write an image containing only a non-empty directory, no
      graft point, contentsOnly=True.
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(9, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "dir002", ) in fsList) def testWriteImage_016(self): """ Attempt to write an image containing only a non-empty directory, with a graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", "dir002" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint=os.path.join("something", "else"), contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(11, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "link004", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "else", "dir002", ) in fsList) def testWriteImage_017(self): """ Attempt to write an image containing only a file, no graft point, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_018(self): """ Attempt to write an image containing only a file, with a graft point, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="point", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(3, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "point", ) in fsList) self.failUnless(os.path.join(mountPath, "point", "file001", ) in fsList) def testWriteImage_019(self): """ Attempt to write an image containing a file and an empty directory, no graft points, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(2, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) def testWriteImage_020(self): """ Attempt to write an image containing a file and an empty directory, with graft points, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", "dir002", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint="other", contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(4, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file001", ) in fsList) def testWriteImage_021(self): """ Attempt to write an image containing a file and a non-empty directory, mixed graft points, contentsOnly=True. """ self.extractTar("tree9") isoImage = IsoImage(graftPoint="base") file1 = self.buildPath([ "tree9", "file001" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, graftPoint=None, contentsOnly=True) isoImage.addEntry(dir1, contentsOnly=True) self.failUnlessRaises(IOError, isoImage.writeImage, imagePath) # ends up with a duplicate name def testWriteImage_022(self): """ Attempt to write an image containing several files and a non-empty directory, mixed graft points, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() file1 = self.buildPath([ "tree9", "file001" ]) file2 = self.buildPath([ "tree9", "file002" ]) dir1 = self.buildPath([ "tree9", "dir001", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(file1, contentsOnly=True) isoImage.addEntry(file2, graftPoint="other", contentsOnly=True) isoImage.addEntry(dir1, graftPoint="base", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(12, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "base", ) in fsList) self.failUnless(os.path.join(mountPath, "other", ) in fsList) self.failUnless(os.path.join(mountPath, "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "other", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "base", "dir002", ) in fsList) def testWriteImage_023(self): """ Attempt to write an image containing a deeply-nested directory, contentsOnly=True. 
""" self.extractTar("tree9") isoImage = IsoImage() dir1 = self.buildPath([ "tree9", ]) imagePath = self.buildPath([ "image.iso", ]) isoImage.addEntry(dir1, graftPoint="something", contentsOnly=True) isoImage.writeImage(imagePath) mountPath = self.mountImage(imagePath) fsList = FilesystemList() fsList.addDirContents(mountPath) self.failUnlessEqual(23, len(fsList)) self.failUnless(mountPath in fsList) self.failUnless(os.path.join(mountPath, "something", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "link003", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "dir001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir001", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "file001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "file002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "link001", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "link002", ) in fsList) self.failUnless(os.path.join(mountPath, "something", "dir002", "link003", ) in fsList) 
      self.failUnless(os.path.join(mountPath, "something", "dir002", "link004", ) in fsList)
      self.failUnless(os.path.join(mountPath, "something", "dir002", "dir001", ) in fsList)
      self.failUnless(os.path.join(mountPath, "something", "dir002", "dir002", ) in fsList)


#######################################################################
# Suite definition
#######################################################################

# pylint: disable=C0330
def suite():
   """Returns a suite containing all the test cases in this module."""
   if runAllTests():
      return unittest.TestSuite((
         unittest.makeSuite(TestFunctions, 'test'),
         unittest.makeSuite(TestIsoImage, 'test'),
      ))
   else:
      return unittest.TestSuite((
         unittest.makeSuite(TestFunctions, 'test'),
         unittest.makeSuite(TestIsoImage, 'testConstructor'),
         unittest.makeSuite(TestIsoImage, 'testUtilityMethods'),
         unittest.makeSuite(TestIsoImage, 'testAddEntry'),
      ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.26.5/testcase/amazons3tests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2014-2015 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests amazons3 extension functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/extend/amazons3.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in extend/amazons3.py.  There are also tests for some of the private
   functions.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to diagnose
   and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.  We can't just provide a
   constant string and say "the result must match this".
   Instead, what we do is extract a node, build some XML from it, and then
   feed that XML back into another object's constructor.  If that parse
   process succeeds and the old object is equal to the new object, we assume
   that the extract was successful.

   It would arguably be better if we could do a completely independent check -
   but implementing that check would be equivalent to re-implementing all of
   the existing functionality that we're validating here!  After all, the most
   important thing is that data can move seamlessly from object to XML
   document and back to object.

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest

# Cedar Backup modules
from CedarBackup2.util import UNIT_BYTES, UNIT_MBYTES, UNIT_GBYTES
from CedarBackup2.config import ByteQuantity
from CedarBackup2.testutil import findResources, failUnlessAssignRaises
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.amazons3 import LocalConfig, AmazonS3Config


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "amazons3.conf.1", "amazons3.conf.2", "amazons3.conf.3",
              "tree1.tar.gz", "tree2.tar.gz", "tree8.tar.gz", "tree15.tar.gz",
              "tree16.tar.gz", "tree17.tar.gz", "tree18.tar.gz",
              "tree19.tar.gz", "tree20.tar.gz", ]


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestAmazonS3Config class
##########################

class TestAmazonS3Config(unittest.TestCase):

   """Tests for the AmazonS3Config class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)


   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad
      variable names).
      """
      obj = AmazonS3Config()
      obj.__repr__()
      obj.__str__()


   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      amazons3 = AmazonS3Config()
      self.failUnlessEqual(False, amazons3.warnMidnite)
      self.failUnlessEqual(None, amazons3.s3Bucket)
      self.failUnlessEqual(None, amazons3.encryptCommand)
      self.failUnlessEqual(None, amazons3.fullBackupSizeLimit)
      self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit)

   def testConstructor_002(self):
      """
      Test constructor with all values filled in, with valid values.
      """
      amazons3 = AmazonS3Config(True, "bucket", "encrypt", 1, 2)
      self.failUnlessEqual(True, amazons3.warnMidnite)
      self.failUnlessEqual("bucket", amazons3.s3Bucket)
      self.failUnlessEqual("encrypt", amazons3.encryptCommand)
      self.failUnlessEqual(1L, amazons3.fullBackupSizeLimit)
      self.failUnlessEqual(2L, amazons3.incrementalBackupSizeLimit)

   def testConstructor_003(self):
      """
      Test assignment of warnMidnite attribute, valid value (real boolean).
      """
      amazons3 = AmazonS3Config()
      self.failUnlessEqual(False, amazons3.warnMidnite)
      amazons3.warnMidnite = True
      self.failUnlessEqual(True, amazons3.warnMidnite)
      amazons3.warnMidnite = False
      self.failUnlessEqual(False, amazons3.warnMidnite)

   #pylint: disable=R0204
   def testConstructor_004(self):
      """
      Test assignment of warnMidnite attribute, valid value (expression).
""" amazons3 = AmazonS3Config() self.failUnlessEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = 0 self.failUnlessEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = [] self.failUnlessEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = None self.failUnlessEqual(False, amazons3.warnMidnite) amazons3.warnMidnite = ['a'] self.failUnlessEqual(True, amazons3.warnMidnite) amazons3.warnMidnite = 3 self.failUnlessEqual(True, amazons3.warnMidnite) def testConstructor_005(self): """ Test assignment of s3Bucket attribute, None value. """ amazons3 = AmazonS3Config(s3Bucket="bucket") self.failUnlessEqual("bucket", amazons3.s3Bucket) amazons3.s3Bucket = None self.failUnlessEqual(None, amazons3.s3Bucket) def testConstructor_006(self): """ Test assignment of s3Bucket attribute, valid value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.s3Bucket) amazons3.s3Bucket = "bucket" self.failUnlessEqual("bucket", amazons3.s3Bucket) def testConstructor_007(self): """ Test assignment of s3Bucket attribute, invalid value (empty). """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.s3Bucket) self.failUnlessAssignRaises(ValueError, amazons3, "s3Bucket", "") self.failUnlessEqual(None, amazons3.s3Bucket) def testConstructor_008(self): """ Test assignment of encryptCommand attribute, None value. """ amazons3 = AmazonS3Config(encryptCommand="encrypt") self.failUnlessEqual("encrypt", amazons3.encryptCommand) amazons3.encryptCommand = None self.failUnlessEqual(None, amazons3.encryptCommand) def testConstructor_009(self): """ Test assignment of encryptCommand attribute, valid value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.encryptCommand) amazons3.encryptCommand = "encrypt" self.failUnlessEqual("encrypt", amazons3.encryptCommand) def testConstructor_010(self): """ Test assignment of encryptCommand attribute, invalid value (empty). 
""" amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.encryptCommand) self.failUnlessAssignRaises(ValueError, amazons3, "encryptCommand", "") self.failUnlessEqual(None, amazons3.encryptCommand) def testConstructor_011(self): """ Test assignment of fullBackupSizeLimit attribute, None value. """ amazons3 = AmazonS3Config(fullBackupSizeLimit=100) self.failUnlessEqual(100L, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = None self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) def testConstructor_012a(self): """ Test assignment of fullBackupSizeLimit attribute, valid int value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = 15 self.failUnlessEqual(15, amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(15, UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012b(self): """ Test assignment of fullBackupSizeLimit attribute, valid long value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = 7516192768 self.failUnlessEqual(7516192768, amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(7516192768, UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012c(self): """ Test assignment of fullBackupSizeLimit attribute, valid float value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = 7516192768.0 self.failUnlessEqual(7516192768.0, amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(7516192768.0, UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012d(self): """ Test assignment of fullBackupSizeLimit attribute, valid string value. 
""" amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = "7516192768" self.failUnlessEqual(7516192768, amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity("7516192768", UNIT_BYTES), amazons3.fullBackupSizeLimit) def testConstructor_012e(self): """ Test assignment of fullBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = ByteQuantity(2.5, UNIT_GBYTES) self.failUnlessEqual(ByteQuantity(2.5, UNIT_GBYTES), amazons3.fullBackupSizeLimit) self.failUnlessEqual(2684354560.0, amazons3.fullBackupSizeLimit.bytes) def testConstructor_012f(self): """ Test assignment of fullBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) amazons3.fullBackupSizeLimit = ByteQuantity(600, UNIT_MBYTES) self.failUnlessEqual(ByteQuantity(600, UNIT_MBYTES), amazons3.fullBackupSizeLimit) self.failUnlessEqual(629145600.0, amazons3.fullBackupSizeLimit.bytes) def testConstructor_013(self): """ Test assignment of fullBackupSizeLimit attribute, invalid value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) self.failUnlessAssignRaises(ValueError, amazons3, "fullBackupSizeLimit", "xxx") self.failUnlessEqual(None, amazons3.fullBackupSizeLimit) def testConstructor_014(self): """ Test assignment of incrementalBackupSizeLimit attribute, None value. """ amazons3 = AmazonS3Config(incrementalBackupSizeLimit=100) self.failUnlessEqual(100, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = None self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) def testConstructor_015a(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid int value. 
""" amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = 15 self.failUnlessEqual(15, amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(ByteQuantity(15, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015b(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid long value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = 7516192768 self.failUnlessEqual(7516192768, amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(ByteQuantity(7516192768, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015c(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid float value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = 7516192768.0 self.failUnlessEqual(7516192768.0, amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(ByteQuantity(7516192768.0, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015d(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid string value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = "7516192768" self.failUnlessEqual(7516192768, amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(ByteQuantity("7516192768", UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_015e(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid byte quantity value. 
""" amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = ByteQuantity(2.5, UNIT_GBYTES) self.failUnlessEqual(ByteQuantity(2.5, UNIT_GBYTES), amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(2684354560.0, amazons3.incrementalBackupSizeLimit.bytes) def testConstructor_015f(self): """ Test assignment of incrementalBackupSizeLimit attribute, valid byte quantity value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) amazons3.incrementalBackupSizeLimit = ByteQuantity(600, UNIT_MBYTES) self.failUnlessEqual(ByteQuantity(600, UNIT_MBYTES), amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(629145600.0, amazons3.incrementalBackupSizeLimit.bytes) def testConstructor_016(self): """ Test assignment of incrementalBackupSizeLimit attribute, invalid value. """ amazons3 = AmazonS3Config() self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) self.failUnlessAssignRaises(ValueError, amazons3, "incrementalBackupSizeLimit", "xxx") self.failUnlessEqual(None, amazons3.incrementalBackupSizeLimit) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config() self.failUnlessEqual(amazons31, amazons32) self.failUnless(amazons31 == amazons32) self.failUnless(not amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(amazons31 >= amazons32) self.failUnless(not amazons31 != amazons32) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" amazons31 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) amazons32 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) self.failUnlessEqual(amazons31, amazons32) self.failUnless(amazons31 == amazons32) self.failUnless(not amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(amazons31 >= amazons32) self.failUnless(not amazons31 != amazons32) def testComparison_003(self): """ Test comparison of two differing objects, warnMidnite differs. """ amazons31 = AmazonS3Config(warnMidnite=False) amazons32 = AmazonS3Config(warnMidnite=True) self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_004(self): """ Test comparison of two differing objects, s3Bucket differs (one None). """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(s3Bucket="bucket") self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_005(self): """ Test comparison of two differing objects, s3Bucket differs. """ amazons31 = AmazonS3Config(s3Bucket="bucket1") amazons32 = AmazonS3Config(s3Bucket="bucket2") self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_006(self): """ Test comparison of two differing objects, encryptCommand differs (one None). 
""" amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(encryptCommand="encrypt") self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_007(self): """ Test comparison of two differing objects, encryptCommand differs. """ amazons31 = AmazonS3Config(encryptCommand="encrypt1") amazons32 = AmazonS3Config(encryptCommand="encrypt2") self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_008(self): """ Test comparison of two differing objects, fullBackupSizeLimit differs (one None). """ amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(fullBackupSizeLimit=1L) self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_009(self): """ Test comparison of two differing objects, fullBackupSizeLimit differs. """ amazons31 = AmazonS3Config(fullBackupSizeLimit=1L) amazons32 = AmazonS3Config(fullBackupSizeLimit=2L) self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_010(self): """ Test comparison of two differing objects, incrementalBackupSizeLimit differs (one None). 
""" amazons31 = AmazonS3Config() amazons32 = AmazonS3Config(incrementalBackupSizeLimit=1L) self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) def testComparison_011(self): """ Test comparison of two differing objects, incrementalBackupSizeLimit differs. """ amazons31 = AmazonS3Config(incrementalBackupSizeLimit=1L) amazons32 = AmazonS3Config(incrementalBackupSizeLimit=2L) self.failIfEqual(amazons31, amazons32) self.failUnless(not amazons31 == amazons32) self.failUnless(amazons31 < amazons32) self.failUnless(amazons31 <= amazons32) self.failUnless(not amazons31 > amazons32) self.failUnless(not amazons31 >= amazons32) self.failUnless(amazons31 != amazons32) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the amazons3 configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. 
""" (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). """ obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.amazons3) def testConstructor_002a(self): """ Test constructor with all values filled in, with valid values (integers). """ amazons3 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) self.failUnlessEqual(True, amazons3.warnMidnite) self.failUnlessEqual("bucket", amazons3.s3Bucket) self.failUnlessEqual("encrypt", amazons3.encryptCommand) self.failUnlessEqual(1, amazons3.fullBackupSizeLimit) self.failUnlessEqual(2, amazons3.incrementalBackupSizeLimit) self.failUnlessEqual(ByteQuantity(1, UNIT_BYTES), amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(2, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_002b(self): """ Test constructor with all values filled in, with valid values (byte quantities). 
""" amazons3 = AmazonS3Config(True, "bucket", "encrypt", ByteQuantity(1, UNIT_BYTES), ByteQuantity(2, UNIT_BYTES)) self.failUnlessEqual(True, amazons3.warnMidnite) self.failUnlessEqual("bucket", amazons3.s3Bucket) self.failUnlessEqual("encrypt", amazons3.encryptCommand) self.failUnlessEqual(ByteQuantity(1, UNIT_BYTES), amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(2, UNIT_BYTES), amazons3.incrementalBackupSizeLimit) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["amazons3.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of amazons3 attribute, None value. """ config = LocalConfig() config.amazons3 = None self.failUnlessEqual(None, config.amazons3) def testConstructor_005(self): """ Test assignment of amazons3 attribute, valid value. """ config = LocalConfig() config.amazons3 = AmazonS3Config() self.failUnlessEqual(AmazonS3Config(), config.amazons3) def testConstructor_006(self): """ Test assignment of amazons3 attribute, invalid value (not AmazonS3Config). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "amazons3", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.amazons3 = AmazonS3Config() config2 = LocalConfig() config2.amazons3 = AmazonS3Config() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, amazons3 differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.amazons3 = AmazonS3Config() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, s3Bucket differs. """ config1 = LocalConfig() config1.amazons3 = AmazonS3Config(True, "bucket1", "encrypt", 1, 2) config2 = LocalConfig() config2.amazons3 = AmazonS3Config(True, "bucket2", "encrypt", 1, 2) self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None amazons3 section. """ config = LocalConfig() config.amazons3 = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty amazons3 section. """ config = LocalConfig() config.amazons3 = AmazonS3Config() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty amazons3 section with no values filled in. 
""" config = LocalConfig() config.amazons3 = AmazonS3Config(None) self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty amazons3 section with valid values filled in. """ config = LocalConfig() config.amazons3 = AmazonS3Config(True, "bucket") config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["amazons3.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.amazons3) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.amazons3) def testParse_002(self): """ Parse config document with filled-in values. """ path = self.resources["amazons3.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.amazons3) self.failUnlessEqual(True, config.amazons3.warnMidnite) self.failUnlessEqual("mybucket", config.amazons3.s3Bucket) self.failUnlessEqual("encrypt", config.amazons3.encryptCommand) self.failUnlessEqual(5368709120L, config.amazons3.fullBackupSizeLimit) self.failUnlessEqual(2147483648, config.amazons3.incrementalBackupSizeLimit) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.amazons3) self.failUnlessEqual(True, config.amazons3.warnMidnite) self.failUnlessEqual("mybucket", config.amazons3.s3Bucket) self.failUnlessEqual("encrypt", config.amazons3.encryptCommand) self.failUnlessEqual(5368709120L, config.amazons3.fullBackupSizeLimit) self.failUnlessEqual(2147483648, config.amazons3.incrementalBackupSizeLimit) def testParse_003(self): """ Parse config document with filled-in values. 
""" path = self.resources["amazons3.conf.3"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.amazons3) self.failUnlessEqual(True, config.amazons3.warnMidnite) self.failUnlessEqual("mybucket", config.amazons3.s3Bucket) self.failUnlessEqual("encrypt", config.amazons3.encryptCommand) self.failUnlessEqual(ByteQuantity(2.5, UNIT_GBYTES), config.amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(600, UNIT_MBYTES), config.amazons3.incrementalBackupSizeLimit) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.amazons3) self.failUnlessEqual(True, config.amazons3.warnMidnite) self.failUnlessEqual("mybucket", config.amazons3.s3Bucket) self.failUnlessEqual("encrypt", config.amazons3.encryptCommand) self.failUnlessEqual(ByteQuantity(2.5, UNIT_GBYTES), config.amazons3.fullBackupSizeLimit) self.failUnlessEqual(ByteQuantity(600, UNIT_MBYTES), config.amazons3.incrementalBackupSizeLimit) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. """ amazons3 = AmazonS3Config() config = LocalConfig() config.amazons3 = amazons3 self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set. 
""" amazons3 = AmazonS3Config(True, "bucket", "encrypt", 1, 2) config = LocalConfig() config.amazons3 = amazons3 self.validateAddConfig(config) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestAmazonS3Config, 'test'), unittest.makeSuite(TestLocalConfig, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/customizetests.py0000664000175000017500000002005612560016766023064 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. 
Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests customization functionality. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/customize.py. @author Kenneth J. Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import unittest from CedarBackup2.customize import PLATFORM, customizeOverrides from CedarBackup2.config import Config, OptionsConfig, CommandOverride ####################################################################### # Test Case Classes ####################################################################### ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the various public functions.""" ############################ # Test customizeOverrides() ############################ def testCustomizeOverrides_001(self): """ Test platform=standard, no existing overrides. """ config = Config() options = OptionsConfig() if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual(None, options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual(None, options.overrides) def testCustomizeOverrides_002(self): """ Test platform=standard, existing override for cdrecord. 
""" config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), ], options.overrides) def testCustomizeOverrides_003(self): """ Test platform=standard, existing override for mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("mkisofs", "/blech"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("mkisofs", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual([ CommandOverride("mkisofs", "/blech"), ], options.overrides) def testCustomizeOverrides_004(self): """ Test platform=standard, existing override for cdrecord and mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ] if PLATFORM == "standard": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) config.options = options customizeOverrides(config, platform="standard") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) def testCustomizeOverrides_005(self): """ Test platform=debian, no existing overrides. 
""" config = Config() options = OptionsConfig() if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) def testCustomizeOverrides_006(self): """ Test platform=debian, existing override for cdrecord. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/usr/bin/genisoimage"), ], options.overrides) def testCustomizeOverrides_007(self): """ Test platform=debian, existing override for mkisofs. """ config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("mkisofs", "/blech"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/blech"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/usr/bin/wodim"), CommandOverride("mkisofs", "/blech"), ], options.overrides) def testCustomizeOverrides_008(self): """ Test platform=debian, existing override for cdrecord and mkisofs. 
""" config = Config() options = OptionsConfig() options.overrides = [ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ] if PLATFORM == "debian": config.options = options customizeOverrides(config) self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) config.options = options customizeOverrides(config, platform="debian") self.failUnlessEqual([ CommandOverride("cdrecord", "/blech"), CommandOverride("mkisofs", "/blech2"), ], options.overrides) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFunctions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/filesystemtests.py0000664000175000017500000640766312642026452023243 0ustar pronovicpronovic00000000000000# -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2008,2010,2015 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copies of the GNU General Public License are available from # the Free Software Foundation website, http://www.gnu.org/. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Author : Kenneth J. Pronovici # Language : Python 2 (>= 2.7) # Project : Cedar Backup, release 2 # Purpose : Tests filesystem-related classes. # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # ######################################################################## # Module documentation ######################################################################## """ Unit tests for CedarBackup2/filesystem.py. Test Notes ========== This module contains individual tests for each of the classes implemented in filesystem.py: FilesystemList, BackupFileList and PurgeItemList. The BackupFileList and PurgeItemList classes inherit from FilesystemList, and the FilesystemList class itself inherits from the standard Python list class. For the most part, I won't spend time testing inherited functionality, especially if it's already been tested. However, I do test some of the base list functionality just to ensure that the inheritance has been constructed properly and everything seems to work as expected. You may look at this code and ask, "Why all of the checks that XXX is in list YYY? Why not just compare what we got to a known list?" The answer is that the order of the list is not significant, only its contents. We can't be positive about the order in which we recurse a directory, but we do need to make sure that everything we expect is in the list and nothing more. We do this by checking the count of items and then making sure that exactly that many known items exist in the list. This file is ridiculously long, almost too long to be worked with easily.
I really should split it up into smaller files, but I like having a 1:1 relationship between a module and its test. Windows Platform ================ Unfortunately, some of the expected results for these tests vary on the Windows platform. First, Windows does not support soft links. So, most of the tests around excluding and adding soft links don't really make any sense. Those checks are not executed on the Windows platform. Second, the tar files that are used to generate directory trees on disk are not extracted exactly the same on Windows as on other platforms. Again, the differences are around soft links. On Windows, the Python tar module doesn't extract soft links to directories at all, and soft links to files are extracted as real files containing the content of the link target. This means that the expected directory listings differ, and so do the total sizes of the extracted directories. Naming Conventions ================== I prefer to avoid large unit tests which validate more than one piece of functionality. Instead, I create lots of very small tests that each validate one specific thing. These small tests are then named with an index number, yielding something like C{testAddDir_001} or C{testValidate_023}. Each method then has a docstring describing what it's supposed to accomplish. I feel that this makes it easier to judge the extent of a problem when one exists. Full vs. Reduced Tests ====================== All of the tests in this module are considered safe to be run in an average build environment. There is no need to use a FILESYSTEMTESTS_FULL environment variable to provide a "reduced feature set" test suite as for some of the other test modules. @author Kenneth J.
Pronovici """ ######################################################################## # Import modules and do runtime validations ######################################################################## import sys import os import unittest import tempfile import tarfile from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar, changeFileAge, randomFilename from CedarBackup2.testutil import platformMacOsX, platformWindows from CedarBackup2.testutil import platformSupportsLinks, platformRequiresBinaryRead from CedarBackup2.testutil import failUnlessAssignRaises from CedarBackup2.filesystem import FilesystemList, BackupFileList, PurgeItemList, normalizeDir, compareContents ####################################################################### # Module-wide configuration and constants ####################################################################### DATA_DIRS = [ "./data", "./testcase/data" ] RESOURCES = [ "tree1.tar.gz", "tree2.tar.gz", "tree3.tar.gz", "tree4.tar.gz", "tree5.tar.gz", "tree6.tar.gz", "tree7.tar.gz", "tree8.tar.gz", "tree9.tar.gz", "tree10.tar.gz", "tree11.tar.gz", "tree12.tar.gz", "tree13.tar.gz", "tree22.tar.gz", ] INVALID_FILE = "bogus" # This file name should never exist NOMATCH_PATH = "/something" # This path should never match something we put in a file list NOMATCH_BASENAME = "something" # This basename should never match something we put in a file list NOMATCH_PATTERN = "pattern" # This pattern should never match something we put in a file list AGE_1_HOUR = 1*60*60 # in seconds AGE_2_HOURS = 2*60*60 # in seconds AGE_12_HOURS = 12*60*60 # in seconds AGE_23_HOURS = 23*60*60 # in seconds AGE_24_HOURS = 24*60*60 # in seconds AGE_25_HOURS = 25*60*60 # in seconds AGE_47_HOURS = 47*60*60 # in seconds AGE_48_HOURS = 48*60*60 # in seconds AGE_49_HOURS = 49*60*60 # in seconds ####################################################################### # Test Case Classes 
####################################################################### ########################### # TestFilesystemList class ########################### class TestFilesystemList(unittest.TestCase): """Tests for the FilesystemList class.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) def pathPattern(self, path): """Returns properly-escaped regular expression pattern matching the indicated path.""" return ".*%s.*" % path.replace("\\", "\\\\") def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) ############################ # Test attribute assignment ############################ #pylint: disable=R0204 def testAssignment_001(self): """ Test assignment of excludeFiles attribute, true values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeFiles) fsList.excludeFiles = True self.failUnlessEqual(True, fsList.excludeFiles) fsList.excludeFiles = [ 1, ] self.failUnlessEqual(True, fsList.excludeFiles) #pylint: disable=R0204 def testAssignment_002(self): """ Test assignment of excludeFiles attribute, false values. 
""" fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeFiles) fsList.excludeFiles = False self.failUnlessEqual(False, fsList.excludeFiles) fsList.excludeFiles = [ ] self.failUnlessEqual(False, fsList.excludeFiles) #pylint: disable=R0204 def testAssignment_003(self): """ Test assignment of excludeLinks attribute, true values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeLinks) fsList.excludeLinks = True self.failUnlessEqual(True, fsList.excludeLinks) fsList.excludeLinks = [ 1, ] self.failUnlessEqual(True, fsList.excludeLinks) #pylint: disable=R0204 def testAssignment_004(self): """ Test assignment of excludeLinks attribute, false values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeLinks) fsList.excludeLinks = False self.failUnlessEqual(False, fsList.excludeLinks) fsList.excludeLinks = [ ] self.failUnlessEqual(False, fsList.excludeLinks) #pylint: disable=R0204 def testAssignment_005(self): """ Test assignment of excludeDirs attribute, true values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeDirs) fsList.excludeDirs = True self.failUnlessEqual(True, fsList.excludeDirs) fsList.excludeDirs = [ 1, ] self.failUnlessEqual(True, fsList.excludeDirs) #pylint: disable=R0204 def testAssignment_006(self): """ Test assignment of excludeDirs attribute, false values. """ fsList = FilesystemList() self.failUnlessEqual(False, fsList.excludeDirs) fsList.excludeDirs = False self.failUnlessEqual(False, fsList.excludeDirs) fsList.excludeDirs = [ ] self.failUnlessEqual(False, fsList.excludeDirs) def testAssignment_007(self): """ Test assignment of ignoreFile attribute. """ fsList = FilesystemList() self.failUnlessEqual(None, fsList.ignoreFile) fsList.ignoreFile = "ken" self.failUnlessEqual("ken", fsList.ignoreFile) fsList.ignoreFile = None self.failUnlessEqual(None, fsList.ignoreFile) def testAssignment_008(self): """ Test assignment of excludePaths attribute. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList.excludePaths) fsList.excludePaths = None self.failUnlessEqual([], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", ] self.failUnlessEqual([ "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", "/path/to/something/else", ] self.failUnlessEqual([ "/path/to/something/absolute", "/path/to/something/else", ], fsList.excludePaths) self.failUnlessAssignRaises(ValueError, fsList, "excludePaths", ["path/to/something/relative", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludePaths", [ "/path/to/something/absolute", "path/to/something/relative", ]) fsList.excludePaths = [ "/path/to/something/absolute", ] self.failUnlessEqual([ "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths.insert(0, "/ken") self.failUnlessEqual([ "/ken", "/path/to/something/absolute", ], fsList.excludePaths) fsList.excludePaths.append("/file") self.failUnlessEqual([ "/ken", "/path/to/something/absolute", "/file", ], fsList.excludePaths) fsList.excludePaths.extend(["/one", "/two", ]) self.failUnlessEqual([ "/ken", "/path/to/something/absolute", "/file", "/one", "/two", ], fsList.excludePaths) fsList.excludePaths = [ "/path/to/something/absolute", ] self.failUnlessRaises(ValueError, fsList.excludePaths.insert, 0, "path/to/something/relative") self.failUnlessRaises(ValueError, fsList.excludePaths.append, "path/to/something/relative") self.failUnlessRaises(ValueError, fsList.excludePaths.extend, ["path/to/something/relative", ]) def testAssignment_009(self): """ Test assignment of excludePatterns attribute. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList.excludePatterns) fsList.excludePatterns = None self.failUnlessEqual([], fsList.excludePatterns) fsList.excludePatterns = [ r".*\.jpg", ] self.failUnlessEqual([ r".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns = [ r".*\.jpg", "[a-zA-Z0-9]*", ] self.failUnlessEqual([ r".*\.jpg", "[a-zA-Z0-9]*", ], fsList.excludePatterns) self.failUnlessAssignRaises(ValueError, fsList, "excludePatterns", [ "*.jpg", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludePatterns", [ "*.jpg", "[a-zA-Z0-9]*", ]) fsList.excludePatterns = [ r".*\.jpg", ] self.failUnlessEqual([ r".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns.insert(0, "ken") self.failUnlessEqual([ "ken", r".*\.jpg", ], fsList.excludePatterns) fsList.excludePatterns.append("pattern") self.failUnlessEqual([ "ken", r".*\.jpg", "pattern", ], fsList.excludePatterns) fsList.excludePatterns.extend(["one", "two", ]) self.failUnlessEqual([ "ken", r".*\.jpg", "pattern", "one", "two", ], fsList.excludePatterns) fsList.excludePatterns = [ r".*\.jpg", ] self.failUnlessRaises(ValueError, fsList.excludePatterns.insert, 0, "*.jpg") self.failUnlessEqual([ r".*\.jpg", ], fsList.excludePatterns) self.failUnlessRaises(ValueError, fsList.excludePatterns.append, "*.jpg") self.failUnlessEqual([ r".*\.jpg", ], fsList.excludePatterns) self.failUnlessRaises(ValueError, fsList.excludePatterns.extend, ["*.jpg", ]) self.failUnlessEqual([ r".*\.jpg", ], fsList.excludePatterns) def testAssignment_010(self): """ Test assignment of excludeBasenamePatterns attribute. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = None self.failUnlessEqual([], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ r".*\.jpg", ] self.failUnlessEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ r".*\.jpg", "[a-zA-Z0-9]*", ] self.failUnlessEqual([ r".*\.jpg", "[a-zA-Z0-9]*", ], fsList.excludeBasenamePatterns) self.failUnlessAssignRaises(ValueError, fsList, "excludeBasenamePatterns", [ "*.jpg", ]) self.failUnlessAssignRaises(ValueError, fsList, "excludeBasenamePatterns", [ "*.jpg", "[a-zA-Z0-9]*", ]) fsList.excludeBasenamePatterns = [ r".*\.jpg", ] self.failUnlessEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.insert(0, "ken") self.failUnlessEqual([ "ken", r".*\.jpg", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.append("pattern") self.failUnlessEqual([ "ken", r".*\.jpg", "pattern", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns.extend(["one", "two", ]) self.failUnlessEqual([ "ken", r".*\.jpg", "pattern", "one", "two", ], fsList.excludeBasenamePatterns) fsList.excludeBasenamePatterns = [ r".*\.jpg", ] self.failUnlessRaises(ValueError, fsList.excludeBasenamePatterns.insert, 0, "*.jpg") self.failUnlessEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) self.failUnlessRaises(ValueError, fsList.excludeBasenamePatterns.append, "*.jpg") self.failUnlessEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) self.failUnlessRaises(ValueError, fsList.excludeBasenamePatterns.extend, ["*.jpg", ]) self.failUnlessEqual([ r".*\.jpg", ], fsList.excludeBasenamePatterns) ################################ # Test basic list functionality ################################ def testBasic_001(self): """ Test the append() method. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') self.failUnlessEqual(['a'], fsList) fsList.append('b') self.failUnlessEqual(['a', 'b'], fsList) def testBasic_002(self): """ Test the insert() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.insert(0, 'a') self.failUnlessEqual(['a'], fsList) fsList.insert(0, 'b') self.failUnlessEqual(['b', 'a'], fsList) def testBasic_003(self): """ Test the remove() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.insert(0, 'a') fsList.insert(0, 'b') self.failUnlessEqual(['b', 'a'], fsList) fsList.remove('a') self.failUnlessEqual(['b'], fsList) fsList.remove('b') self.failUnlessEqual([], fsList) def testBasic_004(self): """ Test the pop() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.failUnlessEqual('e', fsList.pop()) self.failUnlessEqual(['a', 'b', 'c', 'd'], fsList) self.failUnlessEqual('d', fsList.pop()) self.failUnlessEqual(['a', 'b', 'c'], fsList) self.failUnlessEqual('c', fsList.pop()) self.failUnlessEqual(['a', 'b'], fsList) self.failUnlessEqual('b', fsList.pop()) self.failUnlessEqual(['a'], fsList) self.failUnlessEqual('a', fsList.pop()) self.failUnlessEqual([], fsList) def testBasic_005(self): """ Test the count() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.failUnlessEqual(1, fsList.count('a')) def testBasic_006(self): """ Test the index() method. 
""" fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) self.failUnlessEqual(2, fsList.index('c')) def testBasic_007(self): """ Test the reverse() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('a') fsList.append('b') fsList.append('c') fsList.append('d') fsList.append('e') self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) fsList.reverse() self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList) fsList.reverse() self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) def testBasic_008(self): """ Test the sort() method. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('e') fsList.append('d') fsList.append('c') fsList.append('b') fsList.append('a') self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList) fsList.sort() self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) fsList.sort() self.failUnlessEqual(['a', 'b', 'c', 'd', 'e'], fsList) def testBasic_009(self): """ Test slicing. """ fsList = FilesystemList() self.failUnlessEqual([], fsList) fsList.append('e') fsList.append('d') fsList.append('c') fsList.append('b') fsList.append('a') self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList) self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList[:]) self.failUnlessEqual(['e', 'd', 'c', 'b', 'a'], fsList[0:]) self.failUnlessEqual('e', fsList[0]) self.failUnlessEqual('a', fsList[4]) self.failUnlessEqual(['d', 'c', 'b'], fsList[1:4]) ################# # Test addFile() ################# def testAddFile_001(self): """ Attempt to add a file that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_002(self): """ Attempt to add a directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_003(self): """ Attempt to add a soft link; no exclusions. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_004(self): """ Attempt to add an existing file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_005(self): """ Attempt to add a file that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_006(self): """ Attempt to add a directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_007(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_008(self): """ Attempt to add an existing file; excludeFiles set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_009(self): """ Attempt to add a file that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_010(self): """ Attempt to add a directory; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_011(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_012(self): """ Attempt to add an existing file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_013(self): """ Attempt to add a file that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_014(self): """ Attempt to add a directory; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_015(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_016(self): """ Attempt to add an existing file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_017(self): """ Attempt to add a file that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_018(self): """ Attempt to add a directory; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_019(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_020(self): """ Attempt to add an existing file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_021(self): """ Attempt to add a file that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_022(self): """ Attempt to add a directory; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_023(self): """ Attempt to add a soft link; with excludePaths not including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_024(self): """ Attempt to add an existing file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_025(self): """ Attempt to add a file that doesn't exist; with excludePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_026(self): """ Attempt to add a directory; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_027(self): """ Attempt to add a soft link; with excludePatterns matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_028(self): """ Attempt to add an existing file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_029(self): """ Attempt to add a file that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_030(self): """ Attempt to add a directory; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_031(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_032(self): """ Attempt to add an existing file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_033(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_034(self): """ Attempt to add a file that has spaces in its name. """ self.extractTar("tree11") path = self.buildPath(["tree11", "file with spaces"]) fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_035(self): """ Attempt to add a UTF-8 file. """ self.extractTar("tree12") path = self.buildPath(["tree12", "unicode", "\xe2\x99\xaa\xe2\x99\xac"]) fsList = FilesystemList() count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddFile_036(self): """ Attempt to add a file that doesn't exist; with excludeBasenamePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_037(self): """ Attempt to add a directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_038(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_039(self): """ Attempt to add an existing file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] count = fsList.addFile(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddFile_040(self): """ Attempt to add a file that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_041(self): """ Attempt to add a directory; with excludeBasenamePatterns not matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_042(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePaths = [ NOMATCH_BASENAME ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePaths = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addFile, path) def testAddFile_043(self): """ Attempt to add an existing file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addFile(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) ################ # Test addDir() ################ def testAddDir_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_003(self): """ Attempt to add a soft link; no exclusions. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_004(self): """ Attempt to add an existing directory; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_005(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_006(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_007(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_008(self): """ Attempt to add an existing directory; excludeFiles set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_009(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_010(self): """ Attempt to add a file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_011(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_012(self): """ Attempt to add an existing directory; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_013(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_014(self): """ Attempt to add a file; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_015(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_016(self): """ Attempt to add an existing directory; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_017(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_018(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_019(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_020(self): """ Attempt to add an existing directory; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_021(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_022(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_023(self): """ Attempt to add a soft link; with excludePaths not including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_024(self): """ Attempt to add an existing directory; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_025(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_026(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_027(self): """ Attempt to add a soft link; with excludePatterns matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_028(self): """ Attempt to add an existing directory; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_029(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_030(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_031(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_032(self): """ Attempt to add an existing directory; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_033(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist). """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_034(self): """ Attempt to add a directory that has spaces in its name. """ self.extractTar("tree11") path = self.buildPath(["tree11", "dir with spaces"]) fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_035(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_036(self): """ Attempt to add a file; with excludeBasenamePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_037(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_038(self): """ Attempt to add an existing directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDir(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDir_039(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_040(self): """ Attempt to add a file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDir, path) def testAddDir_041(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDir, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDir_042(self): """ Attempt to add an existing directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) ######################## # Test addDirContents() ######################## def testAddDirContents_001(self): """ Attempt to add a directory that doesn't exist; no exclusions. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_002(self): """ Attempt to add a file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_003(self): """ Attempt to add a soft link; no exclusions. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() count = fsList.addDir(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_004(self): """ Attempt to add an empty directory containing ignore file; no exclusions. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_005(self): """ Attempt to add an empty directory; no exclusions. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_006(self): """ Attempt to add an non-empty directory containing ignore file; no exclusions. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_007(self): """ Attempt to add an non-empty directory; no exclusions. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_008(self): """ Attempt to add a directory that doesn't exist; excludeFiles set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_009(self): """ Attempt to add a file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_010(self): """ Attempt to add a soft link; excludeFiles set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_011(self): """ Attempt to add an empty directory containing ignore file; excludeFiles set. 
""" self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_012(self): """ Attempt to add an empty directory; excludeFiles set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_013(self): """ Attempt to add an non-empty directory containing ignore file; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_014(self): """ Attempt to add an non-empty directory; excludeFiles set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path) self.failUnlessEqual(5, count) self.failUnlessEqual(5, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) def testAddDirContents_015(self): """ Attempt to add a directory that doesn't exist; excludeDirs set. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_016(self): """ Attempt to add a file; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_017(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_018(self): """ Attempt to add an empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_019(self): """ Attempt to add an empty directory; excludeDirs set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_020(self): """ Attempt to add an non-empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_021(self): """ Attempt to add an non-empty directory; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_023(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_024(self): """ Attempt to add a file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_025(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_026(self): """ Attempt to add an empty directory containing ignore file; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_027(self): """ Attempt to add an empty directory; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnless(self.buildPath(["tree8", "dir001", ]) in fsList) def testAddDirContents_028(self): """ Attempt to add an non-empty directory containing ignore file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_029(self): """ Attempt to add an non-empty directory; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) def testAddDirContents_030(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_031(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_032(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_033(self): """ Attempt to add an empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_034(self): """ Attempt to add an empty directory; with excludePaths including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_035(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_036(self): """ Attempt to add an non-empty directory; with excludePaths including the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_037(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_038(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_039(self): """ Attempt to add a soft link; with excludePaths not including the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_040(self): """ Attempt to add an empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_041(self): """ Attempt to add an empty directory; with excludePaths not including the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_042(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_043(self): """ Attempt to add an non-empty directory; with excludePaths not including the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePaths = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_044(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_045(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_046(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_047(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_048(self): """ Attempt to add an empty directory; with excludePatterns matching the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_049(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_050(self): """ Attempt to add an non-empty directory; with excludePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_051(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_052(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_053(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_054(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_055(self): """ Attempt to add an empty directory; with excludePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_056(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_057(self): """ Attempt to add an non-empty directory; with excludePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludePatterns = [ NOMATCH_PATH ] count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_058(self): """ Attempt to add a large tree with no exclusions. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(122, count) self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", 
"dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", 
]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(136, count)
         self.failUnlessEqual(136, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_059(self):
      """
      Attempt to add a large tree, with excludeFiles set.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludeFiles = True
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(28, count)
         self.failUnlessEqual(28, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
      else:
         self.failUnlessEqual(42, count)
         self.failUnlessEqual(42, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_060(self):
      """
      Attempt to add a large tree, with excludeDirs set.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      fsList.excludeDirs = True
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(94, count)
         self.failUnlessEqual(94, len(fsList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(94, count)
         self.failUnlessEqual(94, len(fsList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)

   def testAddDirContents_061(self):
      """
      Attempt to add a large tree, with excludeLinks set.
      """
      if platformSupportsLinks():
         self.extractTar("tree6")
         path = self.buildPath(["tree6"])
         fsList = FilesystemList()
         fsList.excludeLinks = True
         count = fsList.addDirContents(path)
         self.failUnlessEqual(96, count)
         self.failUnlessEqual(96, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) def testAddDirContents_062(self): """ Attempt to add a large tree, with excludePaths set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() fsList.excludePaths = [ self.buildPath([ "tree6", "dir001", "dir002", ]), self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]), self.buildPath([ "tree6", "dir003", "dir002", "file001", ]), self.buildPath([ "tree6", "dir003", "dir002", "file002", ]), ] count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(112, count) self.failUnlessEqual(112, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) 
in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(125, count) self.failUnlessEqual(125, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_063(self): """ Attempt to add a large tree, with excludePatterns set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() if platformWindows(): fsList.excludePatterns = [ ".*file001.*", r".*tree6\\dir002\\dir001.*" ] else: fsList.excludePatterns = [ ".*file001.*", r".*tree6\/dir002\/dir001.*" ] count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(95, count) self.failUnlessEqual(95, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) 
in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) 
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
        else:
            self.failUnlessEqual(108, count)
            self.failUnlessEqual(108, len(fsList))
            self.failUnless(self.buildPath([ "tree6", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

    def testAddDirContents_064(self):
        """
        Attempt to add a large tree, with ignoreFile set to exclude some
        directories.
        """
        self.extractTar("tree6")
        path = self.buildPath(["tree6"])
        fsList = FilesystemList()
        fsList.ignoreFile = "ignore"
        count = fsList.addDirContents(path)
        if not platformSupportsLinks():
            self.failUnlessEqual(70, count)
            self.failUnlessEqual(70, len(fsList))
            self.failUnless(self.buildPath([ "tree6", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
        else:
            self.failUnlessEqual(79, count)
            self.failUnlessEqual(79, len(fsList))
            self.failUnless(self.buildPath([ "tree6", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

    def testAddDirContents_065(self):
        """
        Attempt to add a link to a file.
        """
        if platformSupportsLinks():
            self.extractTar("tree9")
            path = self.buildPath(["tree9", "dir002", "link003", ])
            fsList = FilesystemList()
            self.failUnlessRaises(ValueError, fsList.addDirContents, path)

    def testAddDirContents_066(self):
        """
        Attempt to add a link to a directory (which should add its contents).
        """
        if platformSupportsLinks():
            self.extractTar("tree9")
            path = self.buildPath(["tree9", "link002" ])
            fsList = FilesystemList()
            count = fsList.addDirContents(path)
            self.failUnlessEqual(9, count)
            self.failUnlessEqual(9, len(fsList))
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "dir001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree9", "link002", "link004", ]) in fsList)

    def testAddDirContents_067(self):
        """
        Attempt to add an invalid link (i.e. a link that points to something
        that doesn't exist).
        """
        if platformSupportsLinks():
            self.extractTar("tree10")
            path = self.buildPath(["tree10", "link001"])
            fsList = FilesystemList()
            self.failUnlessRaises(ValueError, fsList.addDirContents, path)

    def testAddDirContents_068(self):
        """
        Attempt to add a directory containing an invalid link (i.e. a link
        that points to something that doesn't exist).
        """
        if platformSupportsLinks():
            self.extractTar("tree10")
            path = self.buildPath(["tree10"])
            fsList = FilesystemList()
            count = fsList.addDirContents(path)
            self.failUnlessEqual(3, count)
            self.failUnlessEqual(3, len(fsList))
            self.failUnless(self.buildPath([ "tree10", ]) in fsList)
            self.failUnless(self.buildPath([ "tree10", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree10", "dir002", ]) in fsList)

    def testAddDirContents_069(self):
        """
        Attempt to add a directory containing items with spaces.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        if not platformSupportsLinks():
            self.failUnlessEqual(14, count)
            self.failUnlessEqual(14, len(fsList))
            self.failUnless(self.buildPath([ "tree11", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
        else:
            self.failUnlessEqual(16, count)
            self.failUnlessEqual(16, len(fsList))
            self.failUnless(self.buildPath([ "tree11", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

    def testAddDirContents_070(self):
        """
        Attempt to add a directory which has a name containing spaces.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", "dir with spaces", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.failUnlessEqual(5, count)
        self.failUnlessEqual(5, len(fsList))
        self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
        self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
        self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

    def testAddDirContents_071(self):
        """
        Attempt to add a directory which has a UTF-8 filename in it.
        """
        self.extractTar("tree12")
        path = self.buildPath(["tree12", "unicode", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.failUnlessEqual(6, count)
        self.failUnlessEqual(6, len(fsList))
        self.failUnless(self.buildPath([ "tree12", "unicode", ]) in fsList)
        self.failUnless(self.buildPath([ "tree12", "unicode", "README.strange-name", ]) in fsList)
        self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.long.gz", ]) in fsList)
        self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.cp437.gz", ]) in fsList)
        self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.short.gz", ]) in fsList)
        self.failUnless(self.buildPath([ "tree12", "unicode", "\xe2\x99\xaa\xe2\x99\xac", ]) in fsList)

    def testAddDirContents_072(self):
        """
        Attempt to add a directory which has several UTF-8 filenames in it.

        This test data was taken from Rick Lowe's problems around the release
        of v1.10.  I don't run the test for Darwin (Mac OS X) and Windows
        because the tarball isn't valid on those platforms.
        """
        if not platformMacOsX() and not platformWindows():
            self.extractTar("tree13")
            path = self.buildPath(["tree13", ])
            fsList = FilesystemList()
            count = fsList.addDirContents(path)
            self.failUnlessEqual(11, count)
            self.failUnlessEqual(11, len(fsList))
            self.failUnless(self.buildPath([ "tree13", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "Les mouvements de r\x82forme.doc", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "l'\x82nonc\x82.sxw", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "l\x82onard - renvois et bibliographie.sxw", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "l\x82onard copie finale.sxw", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci - page titre.sxw", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci.sxw", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "Rammstein - B\x81ck Dich.mp3", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "megaherz - Glas Und Tr\x84nen.mp3", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "Megaherz - Mistst\x81ck.MP3", ]) in fsList)
            self.failUnless(self.buildPath([ "tree13", "Rammstein - Mutter - B\x94se.mp3", ]) in fsList)

    def testAddDirContents_073(self):
        """
        Attempt to add a large tree with recursive=False.
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, recursive=False) if not platformSupportsLinks(): self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_074(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ INVALID_FILE ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_075(self): """ Attempt to add a file; with excludeBasenamePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_076(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_077(self): """ Attempt to add an empty directory containing ignore file; with excludeBasenamePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_078(self): """ Attempt to add an empty directory; with excludeBasenamePatterns matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_079(self): """ Attempt to add an non-empty directory containing ignore file; with excludeBasenamePatterns matching the path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ "dir008", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_080(self): """ Attempt to add an non-empty directory; with excludeBasenamePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "dir001", ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_081(self): """ Attempt to add a directory that doesn't exist; with excludeBasenamePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_082(self): """ Attempt to add a file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) def testAddDirContents_083(self): """ Attempt to add a soft link; with excludeBasenamePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, fsList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_084(self): """ Attempt to add an empty directory containing ignore file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_085(self): """ Attempt to add an empty directory; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_086(self): """ Attempt to add an non-empty directory containing ignore file; with excludeBasenamePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) fsList = FilesystemList() fsList.ignoreFile = "ignore" fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_087(self): """ Attempt to add an non-empty directory; with excludeBasenamePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ NOMATCH_BASENAME ] count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree5", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in fsList) def testAddDirContents_088(self): """ Attempt to add a large tree, with excludeBasenamePatterns set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "file001", "dir001" ] count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(55, count) self.failUnlessEqual(55, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(64, count) self.failUnlessEqual(64, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", 
"dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList) def testAddDirContents_089(self): """ Attempt to add a large tree with no exclusions, addSelf=True. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, addSelf=True) if not platformSupportsLinks(): self.failUnlessEqual(122, count) self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) 
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(136, count)
         self.failUnlessEqual(136, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_090(self):
      """
      Attempt to add a large tree with no exclusions, addSelf=False.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, addSelf=False)
      if not platformSupportsLinks():
         self.failUnlessEqual(121, count)
         self.failUnlessEqual(121, len(fsList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(135, count)
         self.failUnlessEqual(135, len(fsList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in fsList)

   def testAddDirContents_091(self):
      """
      Attempt to add a directory with linkDepth=1.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path, linkDepth=1)
      if not platformSupportsLinks():
         self.failUnlessEqual(122, count)
         self.failUnlessEqual(122, len(fsList))
         self.failUnless(self.buildPath([ "tree6", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList)
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(165, count) self.failUnlessEqual(165, len(fsList)) self.failUnless(self.buildPath([ "tree6", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", 
"file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "link002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) def testAddDirContents_092(self): """ Attempt to add a directory with linkDepth=2. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=2) if not platformSupportsLinks(): self.failUnlessEqual(122, count) self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath([ "tree6" ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) 
in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) else: self.failUnlessEqual(241, count) self.failUnlessEqual(241, len(fsList)) self.failUnless(self.buildPath([ "tree6" ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", 
]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file004", 
]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", 
"link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"link001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ 
"tree6", "link002", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "ignore", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in fsList) def testAddDirContents_093(self): """ Attempt to add a directory with linkDepth=0, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=0, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(12, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) def testAddDirContents_094(self): """ Attempt to add a directory with linkDepth=1, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=1, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", ]) in fsList) def testAddDirContents_095(self): """ Attempt to add a directory with linkDepth=2, dereference=False. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=2, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(20, count) self.failUnlessEqual(20, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", 
"link003", "link002", "link002", ]) in fsList) def testAddDirContents_096(self): """ Attempt to add a directory with linkDepth=3, dereference=False. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=3, dereference=False) if not platformSupportsLinks(): pass else: self.failUnlessEqual(20, count) self.failUnlessEqual(20, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in fsList) 
self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in fsList) def testAddDirContents_097(self): """ Attempt to add a directory with linkDepth=0, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=0, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(12, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) def testAddDirContents_098(self): """ Attempt to add a directory with linkDepth=1, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=1, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(20, count) self.failUnlessEqual(20, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005" ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) def testAddDirContents_099(self): """ Attempt to add a directory with linkDepth=2, dereference=True. 
""" self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=2, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(32, count) self.failUnlessEqual(32, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", 
"file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in fsList) def testAddDirContents_100(self): """ Attempt to add a directory with linkDepth=3, dereference=True. """ self.extractTar("tree22") path = self.buildPath(["tree22", "dir003", ]) fsList = FilesystemList() count = fsList.addDirContents(path, linkDepth=3, dereference=True) if not platformSupportsLinks(): pass else: self.failUnlessEqual(35, count) self.failUnlessEqual(35, len(fsList)) self.failUnless(self.buildPath(["tree22", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in fsList) 
self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir004", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir007", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir007", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree22", "dir008", "file001", ]) in fsList) def testAddDirContents_101(self): """ Attempt to add a soft link; excludeFiles and dereference set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeFiles = True self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeFiles = True count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(1, count) self.failUnlessEqual([path], fsList) def testAddDirContents_102(self): """ Attempt to add a soft link; excludeDirs and dereference set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeDirs = True self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeDirs = True count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_103(self): """ Attempt to add a soft link; excludeLinks and dereference set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeLinks = True self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludeLinks = True count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_104(self): """ Attempt to add a soft link; with excludePaths including the path, with dereference=True. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePaths = [ path ] self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludePaths = [ path ] count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_105(self): """ Attempt to add a soft link; with excludePatterns matching the path, with dereference=True. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to dir003 fsList = FilesystemList() fsList.excludePatterns = [ self.pathPattern(path) ] count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) def testAddDirContents_106(self): """ Attempt to add a link to a file, with dereference=True. """ if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9", "dir002", "link003", ]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) def testAddDirContents_107(self): """ Attempt to add a link to a directory (which should add its contents), with dereference=True. 
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9", "link002" ]) fsList = FilesystemList() count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "dir001", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "dir002", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "file001", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "file002", ]) in fsList) # duplicated self.failUnless(self.buildPath([ "tree9", "link002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", "link004", ]) in fsList) def testAddDirContents_108(self): """ Attempt to add an invalid link (i.e. a link that points to something that doesn't exist), and dereference=True. """ if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10", "link001"]) fsList = FilesystemList() self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) def testAddDirContents_109(self): """ Attempt to add directory containing an invalid link (i.e. a link that points to something that doesn't exist), and dereference=True. 
""" if platformSupportsLinks(): self.extractTar("tree10") path = self.buildPath(["tree10"]) fsList = FilesystemList() count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath([ "tree10", ]) in fsList) self.failUnless(self.buildPath([ "tree10", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree10", "dir002", ]) in fsList) def testAddDirContents_110(self): """ Attempt to add a soft link; with excludeBasenamePatterns matching the path, and dereference=True. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] self.failUnlessRaises(ValueError, fsList.addDirContents, path, True, True, 1, True) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir fsList = FilesystemList() fsList.excludeBasenamePatterns = [ "link001", ] count = fsList.addDirContents(path, True, True, 1, True) self.failUnlessEqual(0, count) self.failUnlessEqual([], fsList) ##################### # Test removeFiles() ##################### def testRemoveFiles_001(self): """ Test with an empty list and a pattern of None. """ fsList = FilesystemList() count = fsList.removeFiles(pattern=None) self.failUnlessEqual(0, count) def testRemoveFiles_002(self): """ Test with an empty list and a non-empty pattern. """ fsList = FilesystemList() count = fsList.removeFiles(pattern="pattern") self.failUnlessEqual(0, count) self.failUnlessRaises(ValueError, fsList.removeFiles, pattern="*.jpg") def testRemoveFiles_003(self): """ Test with a non-empty list (files only) and a pattern of None. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(7, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) def testRemoveFiles_004(self): """ Test with a non-empty list (directories only) and a pattern of None. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(11, 
len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_005(self): """ Test with a non-empty list (files and directories) and a pattern of None. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(44, count) self.failUnlessEqual(37, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def 
testRemoveFiles_006(self): """ Test with a non-empty list (files, directories and links) and a pattern of None. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in 
fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(10, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a pattern of None. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", 
]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=None) self.failUnlessEqual(44, count) self.failUnlessEqual(38, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) def testRemoveFiles_008(self): """ Test with a non-empty list (spaces in path names) and a pattern of None. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", 
]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testRemoveFiles_009(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveFiles_010(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_011(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveFiles_012(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of the files. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches none of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveFiles_014(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of the files.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeFiles(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveFiles_015(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches some of the files.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeFiles(pattern=".*tree1.*file00[67]")
      self.failUnlessEqual(2, count)
      self.failUnlessEqual(6, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)

   def testRemoveFiles_016(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches some of the files.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeFiles(pattern=".*tree2.*")
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveFiles_017(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches some of the files.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeFiles(pattern=".*tree4.*dir006.*")
      self.failUnlessEqual(10, count)
      self.failUnlessEqual(71, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveFiles_018(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches some of the files.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeFiles(pattern=".*tree9.*dir002.*")
         self.failUnlessEqual(4, count)
         self.failUnlessEqual(13, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001",
]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeFiles(pattern=".*tree9.*dir002.*") self.failUnlessEqual(4, count) self.failUnlessEqual(18, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveFiles_019(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of the files. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*dir001.*file002.*") self.failUnlessEqual(1, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveFiles_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of the files. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with 
spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeFiles(pattern=".*with spaces.*") self.failUnlessEqual(6, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeFiles(pattern=".*with spaces.*") self.failUnlessEqual(6, count) self.failUnlessEqual(10, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) def testRemoveFiles_021(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches anything. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(7, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) def testRemoveFiles_022(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches anything. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveFiles_023(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches anything. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeFiles(pattern=".*") self.failUnlessEqual(44, count) self.failUnlessEqual(37, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) 
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)

   def testRemoveFiles_024(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches all of the files.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeFiles(pattern=".*")
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(7, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeFiles(pattern=".*")
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(12, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveFiles_025(self):
      """
      Test with a non-empty list (files and directories, some nonexistent) and
      a non-empty pattern that matches all of the files.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeFiles(pattern=".*")
      self.failUnlessEqual(44, count)
      self.failUnlessEqual(38, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)

   def testRemoveFiles_026(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches all of the files.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeFiles(pattern=".*")
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(3, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeFiles(pattern=".*")
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(5, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)

   ####################
   # Test removeDirs()
   ####################

   def testRemoveDirs_001(self):
      """
      Test with an empty list and a pattern of None.
      """
      fsList = FilesystemList()
      count = fsList.removeDirs(pattern=None)
      self.failUnlessEqual(0, count)

   def testRemoveDirs_002(self):
      """
      Test with an empty list and a non-empty pattern.
      """
      fsList = FilesystemList()
      count = fsList.removeDirs(pattern="pattern")
      self.failUnlessEqual(0, count)
      self.failUnlessRaises(ValueError, fsList.removeDirs, pattern="*.jpg")

   def testRemoveDirs_003(self):
      """
      Test with a non-empty list (files only) and a pattern of None.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=None)
      self.failUnlessEqual(1, count)
      self.failUnlessEqual(7, len(fsList))
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_004(self):
      """
      Test with a non-empty list (directories only) and a pattern of None.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=None)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(0, len(fsList))

   def testRemoveDirs_005(self):
      """
      Test with a non-empty list (files and directories) and a pattern of None.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=None)
      self.failUnlessEqual(37, count)
      self.failUnlessEqual(44, len(fsList))
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_006(self):
      """
      Test with a non-empty list (files, directories and links) and a pattern
      of None.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeDirs(pattern=None)
         self.failUnlessEqual(7, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeDirs(pattern=None)
         self.failUnlessEqual(12, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)

   def testRemoveDirs_007(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a pattern of None.
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=None)
      self.failUnlessEqual(37, count)
      self.failUnlessEqual(45, len(fsList))
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_008(self):
      """
      Test with a non-empty list (spaces in path names) and a pattern of None.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeDirs(pattern=None)
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeDirs(pattern=None)
         self.failUnlessEqual(5, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_009(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that matches none of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_010(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern that matches none of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveDirs_011(self):
      """
      Test with a non-empty list (files and directories) and a non-empty pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_012(self):
      """
      Test with a non-empty list (files, directories and links) and a non-empty pattern that matches none of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveDirs_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches none of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches none of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) 
in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeDirs(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with 
spaces", ]) in fsList) def testRemoveDirs_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches some of them. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeDirs(pattern=".*tree1.file00[67]") self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveDirs_016(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeDirs(pattern=".*dir0[012]0") self.failUnlessEqual(1, count) self.failUnlessEqual(10, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) def testRemoveDirs_017(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=".*dir001") self.failUnlessEqual(9, count) self.failUnlessEqual(72, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveDirs_018(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeDirs(pattern=".*tree9.*dir002.*") self.failUnlessEqual(4, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeDirs(pattern=".*tree9.*dir002.*") self.failUnlessEqual(6, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveDirs_019(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeDirs(pattern=".*dir001") self.failUnlessEqual(9, count) self.failUnlessEqual(73, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def 
testRemoveDirs_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeDirs(pattern=".*with spaces.*") self.failUnlessEqual(1, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeDirs(pattern=".*with spaces.*") self.failUnlessEqual(1, count) 
         self.failUnlessEqual(15, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

   def testRemoveDirs_021(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches all of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(1, count)
      self.failUnlessEqual(7, len(fsList))
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveDirs_022(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches all of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(0, len(fsList))

   def testRemoveDirs_023(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(37, count)
      self.failUnlessEqual(44, len(fsList))
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_024(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches all of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(7, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(12, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)

   def testRemoveDirs_025(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeDirs(pattern=".*")
      self.failUnlessEqual(37, count)
      self.failUnlessEqual(45, len(fsList))
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveDirs_026(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches all of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeDirs(pattern=".*")
         self.failUnlessEqual(5, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)


   #####################
   # Test removeLinks()
   #####################

   def testRemoveLinks_001(self):
      """
      Test with an empty list and a pattern of None.
      """
      if platformSupportsLinks():
         fsList = FilesystemList()
         count = fsList.removeLinks(pattern=None)
         self.failUnlessEqual(0, count)

   def testRemoveLinks_002(self):
      """
      Test with an empty list and a non-empty pattern.
      """
      if platformSupportsLinks():
         fsList = FilesystemList()
         count = fsList.removeLinks(pattern="pattern")
         self.failUnlessEqual(0, count)
         self.failUnlessRaises(ValueError, fsList.removeLinks, pattern="*.jpg")

   def testRemoveLinks_003(self):
      """
      Test with a non-empty list (files only) and a pattern of None.
      """
      if platformSupportsLinks():
         self.extractTar("tree1")
         path = self.buildPath(["tree1"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(8, count)
         self.failUnlessEqual(8, len(fsList))
         self.failUnless(self.buildPath([ "tree1", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
         count = fsList.removeLinks(pattern=None)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(8, len(fsList))
         self.failUnless(self.buildPath([ "tree1", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveLinks_004(self):
      """
      Test with a non-empty list (directories only) and a pattern of None.
      """
      if platformSupportsLinks():
         self.extractTar("tree2")
         path = self.buildPath(["tree2"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree2", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
         count = fsList.removeLinks(pattern=None)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree2", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveLinks_005(self):
      """
      Test with a non-empty list (files and directories) and a pattern of None.
      """
      if platformSupportsLinks():
         self.extractTar("tree4")
         path = self.buildPath(["tree4"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(81, count)
         self.failUnlessEqual(81, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in
fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_006(self): """ Test with a non-empty list (files, directories and links) and a pattern of None. """ if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", 
"link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(9, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) def testRemoveLinks_007(self): """ Test with a non-empty list (files and directories, some nonexistent) and a pattern of None. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) 
in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", 
]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_008(self): """ Test with a non-empty list (spaces in path names) and a pattern of None. """ if platformSupportsLinks(): self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=None) self.failUnlessEqual(6, count) self.failUnlessEqual(10, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) def testRemoveLinks_009(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches none of them. 
      """
      if platformSupportsLinks():
         self.extractTar("tree1")
         path = self.buildPath(["tree1"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(8, count)
         self.failUnlessEqual(8, len(fsList))
         self.failUnless(self.buildPath([ "tree1", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
         count = fsList.removeLinks(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(8, len(fsList))
         self.failUnless(self.buildPath([ "tree1", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveLinks_010(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches none of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree2")
         path = self.buildPath(["tree2"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree2", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
         count = fsList.removeLinks(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(11, len(fsList))
         self.failUnless(self.buildPath([ "tree2", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveLinks_011(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches none of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree4")
         path = self.buildPath(["tree4"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(81, count)
         self.failUnlessEqual(81, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in
         fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
         self.failUnless(self.buildPath([
         "tree4", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
         count = fsList.removeLinks(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(81, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4",
         "dir005", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006",
         "file010", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveLinks_012(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches none of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeLinks(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([
         "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveLinks_013(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches none of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree4")
         path = self.buildPath(["tree4"])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(81, count)
         fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk
         self.failUnlessEqual(82, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4",
         "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
         count = fsList.removeLinks(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(82, len(fsList))
         self.failUnless(self.buildPath([ "tree4", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
         self.failUnless(self.buildPath([
         "tree4", "dir006", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
         self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveLinks_014(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches none of them.
""" if platformSupportsLinks(): self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=NOMATCH_PATTERN) self.failUnlessEqual(0, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) def testRemoveLinks_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree1.*file007") self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_016(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=".*tree2.*") self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) 
self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_017(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree4.*dir006.*") self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_018(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches some of them. 
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=".*tree9.*dir002.*") self.failUnlessEqual(4, count) self.failUnlessEqual(18, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) 
in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveLinks_019(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of them. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) 
in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*tree4.*dir006.*") self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_020(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ if platformSupportsLinks(): self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with 
spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeLinks(pattern=".*with spaces.*") self.failUnlessEqual(3, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) def testRemoveLinks_021(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testRemoveLinks_022(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveLinks_023(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", 
"file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveLinks_024(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches all of them. """ if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(9, count) self.failUnlessEqual(13, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) def testRemoveLinks_025(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches all of them. 
""" if platformSupportsLinks(): self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) 
in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeLinks(pattern=".*") self.failUnlessEqual(0, count) self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) 
      for entry in [ "file006", "file007", "file008", ]:
         self.failUnless(self.buildPath([ "tree4", "dir002", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005", "dir006", "dir007", "dir008",
                     "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir003", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", ]:
         self.failUnless(self.buildPath([ "tree4", "dir004", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4",
                                       "dir005", "dir002", ]) in fsList)
      for entry in [ "dir003", "dir004", "dir005", "dir006", "dir007", "dir008",
                     "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir005", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005",
                     "file001", "file002", "file003", "file004", "file005", "file006", ]:
         self.failUnless(self.buildPath([ "tree4", "dir006", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007",
                                       ]) in fsList)
      for entry in [ "file008", "file009", "file010", ]:
         self.failUnless(self.buildPath([ "tree4", "dir006", entry, ]) in fsList)
      for entry in [ "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", entry, ]) in fsList)

   def testRemoveLinks_026(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches all of them.
      """
      if platformSupportsLinks():
         self.extractTar("tree11")
         path = self.buildPath(["tree11", ])
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces",
                                          "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeLinks(pattern=".*")
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(10, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)


   #####################
   # Test removeMatch()
   #####################

   def testRemoveMatch_001(self):
      """
      Test with an empty list and a pattern of None.
      """
      fsList = FilesystemList()
      self.failUnlessRaises(TypeError, fsList.removeMatch, pattern=None)

   def testRemoveMatch_002(self):
      """
      Test with an empty list and a non-empty pattern.
      """
      fsList = FilesystemList()
      count = fsList.removeMatch(pattern="pattern")
      self.failUnlessEqual(0, count)
      self.failUnlessRaises(ValueError, fsList.removeMatch, pattern="*.jpg")

   def testRemoveMatch_003(self):
      """
      Test with a non-empty list (files only) and a non-empty pattern that
      matches none of them.
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      for entry in [ "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree1", entry, ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      for entry in [ "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree1", entry, ]) in fsList)

   def testRemoveMatch_004(self):
      """
      Test with a non-empty list (directories only) and a non-empty pattern
      that matches none of them.
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005",
                     "dir006", "dir007", "dir008", "dir009", "dir010", ]:
         self.failUnless(self.buildPath([ "tree2", entry, ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005",
                     "dir006", "dir007", "dir008", "dir009", "dir010", ]:
         self.failUnless(self.buildPath([ "tree2", entry, ]) in fsList)

   def testRemoveMatch_005(self):
      """
      Test with a non-empty list (files and directories) and a non-empty
      pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", "file002", "file003", "file004", ]:
         self.failUnless(self.buildPath([ "tree4", "dir001", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", "file002", "file003",
                     "file004", "file005", "file006", "file007", "file008", ]:
         self.failUnless(self.buildPath([ "tree4", "dir002", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      for entry in [ "dir003", "dir004", "dir005", "dir006", "dir007", "dir008",
                     "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir003", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", ]:
         self.failUnless(self.buildPath([ "tree4", "dir004", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005", "dir006", "dir007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir005", entry, ]) in fsList)
      self.failUnless(self.buildPath([
                      "tree4", "dir005", "dir008", ]) in fsList)
      for entry in [ "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir005", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005",
                     "file001", "file002", "file003", "file004", "file005",
                     "file006", "file007", "file008", "file009", "file010", ]:
         self.failUnless(self.buildPath([ "tree4", "dir006", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      for entry in [ "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", entry, ]) in fsList)
      count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", "file002", "file003", "file004", ]:
         self.failUnless(self.buildPath([ "tree4", "dir001", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", "file002", "file003",
                     "file004", "file005", "file006", "file007", "file008", ]:
         self.failUnless(self.buildPath([ "tree4", "dir002", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4",
                                       "dir003", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005", "dir006", "dir007", "dir008",
                     "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir003", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", ]:
         self.failUnless(self.buildPath([ "tree4", "dir004", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005", ]:
         self.failUnless(self.buildPath([ "tree4", "dir005", entry, ]) in fsList)
      for entry in [ "dir006", "dir007", "dir008", "file001", "file002", "file003",
                     "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", "dir005", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "dir004", "dir005",
                     "file001", "file002", "file003", "file004", "file005",
                     "file006", "file007", "file008", "file009", "file010", ]:
         self.failUnless(self.buildPath([ "tree4", "dir006", entry, ]) in fsList)
      for entry in [ "file001", "file002", "file003", "file004", "file005", "file006", "file007", ]:
         self.failUnless(self.buildPath([ "tree4", entry, ]) in fsList)

   def testRemoveMatch_006(self):
      """
      Test with a non-empty list (files, directories and links) and a
      non-empty pattern that matches none of them.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002", "link001", "link002", ]:
            self.failUnless(self.buildPath([ "tree9", "dir001", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002", "link003", "link004", ]:
            self.failUnless(self.buildPath([ "tree9", "dir002", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001",
                                          ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002", "link001", "link002", ]:
            self.failUnless(self.buildPath([ "tree9", "dir001", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002", "link003", "link004", ]:
            self.failUnless(self.buildPath([ "tree9", "dir002", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001",
                                          "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002",
                        "link001", "link002", "link003", "link004", ]:
            self.failUnless(self.buildPath([ "tree9", "dir002", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002", "link001", "link002", "link003", ]:
            self.failUnless(self.buildPath([ "tree9", "dir001", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         for entry in [ "dir001", "dir002", "file001", "file002",
                        "link001", "link002", "link003", "link004", ]:
            self.failUnless(self.buildPath([ "tree9", "dir002", entry, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveMatch_007(self):
      """
      Test with a non-empty list (files and directories, some nonexistent)
      and a non-empty pattern that matches none of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", "file002", "file003", "file004", ]:
         self.failUnless(self.buildPath([ "tree4", "dir001", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      for entry in [ "dir001", "dir002", "dir003", "file001", "file002", "file003",
                     "file004", "file005", "file006", "file007", "file008", ]:
         self.failUnless(self.buildPath([ "tree4", "dir002", entry, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
        count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(82, len(fsList))
        self.failUnless(self.buildPath([ "tree4", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

    def testRemoveMatch_008(self):
        """
        Test with a non-empty list (spaces in path names) and a non-empty
        pattern that matches none of them.
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        if not platformSupportsLinks():
            self.failUnlessEqual(14, count)
            self.failUnlessEqual(14, len(fsList))
            self.failUnless(self.buildPath([ "tree11", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
            count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
            self.failUnlessEqual(0, count)
            self.failUnlessEqual(14, len(fsList))
            self.failUnless(self.buildPath([ "tree11", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
        else:
            self.failUnlessEqual(16, count)
            self.failUnlessEqual(16, len(fsList))
            self.failUnless(self.buildPath([ "tree11", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
            count = fsList.removeMatch(pattern=NOMATCH_PATTERN)
            self.failUnlessEqual(0, count)
            self.failUnlessEqual(16, len(fsList))
            self.failUnless(self.buildPath([ "tree11", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
            self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)

    def testRemoveMatch_009(self):
        """
        Test with a non-empty list (files only) and a non-empty pattern that
        matches some of them.
        """
        self.extractTar("tree1")
        path = self.buildPath(["tree1"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.failUnlessEqual(8, count)
        self.failUnlessEqual(8, len(fsList))
        self.failUnless(self.buildPath([ "tree1", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
        count = fsList.removeMatch(pattern=".*file00[135].*")
        self.failUnlessEqual(3, count)
        self.failUnlessEqual(5, len(fsList))
        self.failUnless(self.buildPath([ "tree1", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

    def testRemoveMatch_010(self):
        """
        Test with a non-empty list (directories only) and a non-empty pattern
        that matches some of them.
""" self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeMatch(pattern=".*dir00[2468].*") self.failUnlessEqual(4, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) def testRemoveMatch_011(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) 
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
        count = fsList.removeMatch(pattern=".*tree4.*dir006")
        self.failUnlessEqual(18, count)
        self.failUnlessEqual(63, len(fsList))
        self.failUnless(self.buildPath([ "tree4", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
        self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

    def testRemoveMatch_012(self):
        """
        Test with a non-empty list (files, directories and links) and a
        non-empty pattern that matches some of them.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeMatch(pattern=".*file001.*") self.failUnlessEqual(3, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", 
"link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeMatch(pattern=".*file001.*") self.failUnlessEqual(3, count) self.failUnlessEqual(19, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveMatch_013(self): """ Test with a non-empty list (files and directories, some nonexistent) and a non-empty pattern that matches some of them. 
""" self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) fsList.append(self.buildPath([ "tree4", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(82, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir005", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*dir00[46].*") self.failUnlessEqual(25, count) self.failUnlessEqual(57, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", 
"dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) def testRemoveMatch_014(self): """ Test with a non-empty list (spaces in path names) and a non-empty pattern that matches some of them. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file 
with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeMatch(pattern=".*with spaces.*") self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeMatch(pattern=".*with spaces.*") self.failUnlessEqual(7, count) self.failUnlessEqual(9, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) def testRemoveMatch_015(self): """ Test with a non-empty list (files only) and a non-empty pattern that matches all of them. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(8, count) self.failUnlessEqual(0, len(fsList)) def testRemoveMatch_016(self): """ Test with a non-empty list (directories only) and a non-empty pattern that matches all of them. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(11, count) self.failUnlessEqual(0, len(fsList)) def 
testRemoveMatch_017(self): """ Test with a non-empty list (files and directories) and a non-empty pattern that matches all of them. """ self.extractTar("tree4") path = self.buildPath(["tree4"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(81, count) self.failUnlessEqual(81, len(fsList)) self.failUnless(self.buildPath([ "tree4", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList) self.failUnless(self.buildPath([ 
"tree4", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in 
fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(81, count) self.failUnlessEqual(0, len(fsList)) def testRemoveMatch_019(self): """ Test with a non-empty list (files, directories and links) and a non-empty pattern that matches all of them. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(17, count) self.failUnlessEqual(0, len(fsList)) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeMatch(pattern=".*") self.failUnlessEqual(22, count) 
         self.failUnlessEqual(0, len(fsList))

   def testRemoveMatch_020(self):
      """
      Test with a non-empty list (files and directories, some nonexistent) and
      a non-empty pattern that matches all of them.
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      fsList.append(self.buildPath([ "tree4", INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(82, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeMatch(pattern=".*")
      self.failUnlessEqual(82, count)
      self.failUnlessEqual(0, len(fsList))

   def testRemoveMatch_021(self):
      """
      Test with a non-empty list (spaces in path names) and a non-empty
      pattern that matches all of them.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(14, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         count = fsList.removeMatch(pattern=".*")
         self.failUnlessEqual(14, count)
         self.failUnlessEqual(0, len(fsList))
      else:
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(16, len(fsList))
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)
         count = fsList.removeMatch(pattern=".*")
         self.failUnlessEqual(16, count)
         self.failUnlessEqual(0, len(fsList))


   #######################
   # Test removeInvalid()
   #######################

   def testRemoveInvalid_001(self):
      """
      Test with an empty list.
      """
      fsList = FilesystemList()
      count = fsList.removeInvalid()
      self.failUnlessEqual(0, count)

   def testRemoveInvalid_002(self):
      """
      Test with a non-empty list containing only invalid entries (some with
      spaces).
      """
      self.extractTar("tree9")
      fsList = FilesystemList()
      fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]))    # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]))    # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]))    # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]))    # file won't exist on disk
      fsList.append(self.buildPath([ "tree9", " %s 5 " % INVALID_FILE, ]))  # file won't exist on disk
      self.failUnlessEqual(5, len(fsList))
      self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList)
      self.failUnless(self.buildPath([ "tree9", " %s 5 " % INVALID_FILE, ]) in fsList)
      count = fsList.removeInvalid()
      self.failUnlessEqual(5, count)
      self.failUnlessEqual(0, len(fsList))

   def testRemoveInvalid_003(self):
      """
      Test with a non-empty list containing only valid entries (files only).
      """
      self.extractTar("tree1")
      path = self.buildPath(["tree1"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(8, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)
      count = fsList.removeInvalid()
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(8, len(fsList))
      self.failUnless(self.buildPath([ "tree1", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList)

   def testRemoveInvalid_004(self):
      """
      Test with a non-empty list containing only valid entries (directories
      only).
      """
      self.extractTar("tree2")
      path = self.buildPath(["tree2"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(11, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)
      count = fsList.removeInvalid()
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(11, len(fsList))
      self.failUnless(self.buildPath([ "tree2", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList)

   def testRemoveInvalid_005(self):
      """
      Test with a non-empty list containing only valid entries (files and
      directories).
      """
      self.extractTar("tree4")
      path = self.buildPath(["tree4"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      self.failUnlessEqual(81, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)
      count = fsList.removeInvalid()
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(81, len(fsList))
      self.failUnless(self.buildPath([ "tree4", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir001", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir002", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir003", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir004", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "dir008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir005", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "dir005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file007", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file008", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file009", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "dir006", "file010", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file001", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file002", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file003", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file004", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file005", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file006", ]) in fsList)
      self.failUnless(self.buildPath([ "tree4", "file007", ]) in fsList)

   def testRemoveInvalid_006(self):
      """
      Test with a non-empty list containing only valid entries (files,
      directories and links).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
      else:
         self.failUnlessEqual(22, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(22, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList)

   def testRemoveInvalid_007(self):
      """
      Test with a non-empty list containing valid and invalid entries (files,
      directories and links).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      fsList = FilesystemList()
      count = fsList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(17, count)
         self.failUnlessEqual(17, len(fsList))
         fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]))  # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]))  # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]))  # file won't exist on disk
         fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]))  # file won't exist on disk
         self.failUnlessEqual(21, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList)
         count = fsList.removeInvalid()
         self.failUnlessEqual(4, count)
         self.failUnlessEqual(17, len(fsList))
         self.failUnless(self.buildPath([ "tree9", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList)
self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(26, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(4, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in 
fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testRemoveInvalid_008(self): """ Test with a non-empty list containing only valid entries (files, directories and links, some with spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(0, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) count = fsList.removeInvalid() self.failUnlessEqual(0, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) ################### # Test 
normalize() ################### def testNormalize_001(self): """ Test with an empty list. """ fsList = FilesystemList() self.failUnlessEqual(0, len(fsList)) fsList.normalize() self.failUnlessEqual(0, len(fsList)) def testNormalize_002(self): """ Test with a list containing one entry. """ fsList = FilesystemList() fsList.append("one") self.failUnlessEqual(1, len(fsList)) fsList.normalize() self.failUnlessEqual(1, len(fsList)) self.failUnless("one" in fsList) def testNormalize_003(self): """ Test with a list containing two entries, no duplicates. """ fsList = FilesystemList() fsList.append("one") fsList.append("two") self.failUnlessEqual(2, len(fsList)) fsList.normalize() self.failUnlessEqual(2, len(fsList)) self.failUnless("one" in fsList) self.failUnless("two" in fsList) def testNormalize_004(self): """ Test with a list containing two entries, with duplicates. """ fsList = FilesystemList() fsList.append("one") fsList.append("one") self.failUnlessEqual(2, len(fsList)) fsList.normalize() self.failUnlessEqual(1, len(fsList)) self.failUnless("one" in fsList) def testNormalize_005(self): """ Test with a list containing many entries, no duplicates. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) fsList.normalize() self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) fsList.normalize() self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testNormalize_006(self): """ Test with a list containing many entries, with duplicates. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) count = fsList.addDirContents(path) self.failUnlessEqual(17, count) self.failUnlessEqual(34, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) fsList.normalize() self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(44, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) fsList.normalize() self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in 
fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) ################ # Test verify() ################ def testVerify_001(self): """ Test with an empty list. """ fsList = FilesystemList() ok = fsList.verify() self.failUnlessEqual(True, ok) def testVerify_002(self): """ Test with a non-empty list containing only invalid entries. """ self.extractTar("tree9") fsList = FilesystemList() fsList.append(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ])) # file won't exist on disk fsList.append(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(4, len(fsList)) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) ok = fsList.verify() self.failUnlessEqual(False, ok) self.failUnlessEqual(4, len(fsList)) self.failUnless(self.buildPath([ "tree9", "%s-1" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-2" % INVALID_FILE, ]) in fsList) self.failUnless(self.buildPath([ "tree9", "%s-3" % INVALID_FILE, ]) in fsList) 
self.failUnless(self.buildPath([ "tree9", "%s-4" % INVALID_FILE, ]) in fsList) def testVerify_003(self): """ Test with a non-empty list containing only valid entries (files only). """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) ok = fsList.verify() self.failUnlessEqual(True, ok) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testVerify_004(self): """ Test with a non-empty list containing only valid entries (directories only). 
        """
        self.extractTar("tree2")
        path = self.buildPath(["tree2"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.failUnlessEqual(11, count)
        self.failUnlessEqual(11, len(fsList))
        expected = [ [ "tree2", ], ]
        expected += [ [ "tree2", "dir%03d" % i, ] for i in range(1, 11) ]
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)  # copy, since buildPath modifies its argument
        ok = fsList.verify()
        self.failUnlessEqual(True, ok)
        self.failUnlessEqual(11, len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)

    def testVerify_005(self):
        """
        Test with a non-empty list containing only valid entries (files and
        directories).
        """
        self.extractTar("tree4")
        path = self.buildPath(["tree4"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        self.failUnlessEqual(81, count)
        self.failUnlessEqual(81, len(fsList))
        expected = [ [ "tree4", ], ]
        expected += [ [ "tree4", "file%03d" % i, ] for i in range(1, 8) ]
        for (subdir, dirs, files) in [ ("dir001", 3, 4), ("dir002", 3, 8), ("dir003", 8, 7),
                                       ("dir004", 3, 1), ("dir005", 8, 7), ("dir006", 5, 10), ]:
            expected.append([ "tree4", subdir, ])
            expected += [ [ "tree4", subdir, "dir%03d" % i, ] for i in range(1, dirs + 1) ]
            expected += [ [ "tree4", subdir, "file%03d" % i, ] for i in range(1, files + 1) ]
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)  # copy, since buildPath modifies its argument
        ok = fsList.verify()
        self.failUnlessEqual(True, ok)
        self.failUnlessEqual(81, len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)

    def testVerify_006(self):
        """
        Test with a non-empty list containing only valid entries (files,
        directories and links).
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        expected = [ [ "tree9", ], [ "tree9", "file001", ], [ "tree9", "file002", ],
                     [ "tree9", "dir001", ],
                     [ "tree9", "dir001", "dir001", ], [ "tree9", "dir001", "dir002", ],
                     [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                     [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                     [ "tree9", "dir002", ],
                     [ "tree9", "dir002", "dir001", ], [ "tree9", "dir002", "dir002", ],
                     [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                     [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ], ]
        if not platformSupportsLinks():
            self.failUnlessEqual(17, count)
        else:
            expected += [ [ "tree9", "dir001", "link003", ],
                          [ "tree9", "dir002", "link001", ], [ "tree9", "dir002", "link002", ],
                          [ "tree9", "link001", ], [ "tree9", "link002", ], ]
            self.failUnlessEqual(22, count)
        self.failUnlessEqual(len(expected), len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)  # copy, since buildPath modifies its argument
        ok = fsList.verify()
        self.failUnlessEqual(True, ok)
        self.failUnlessEqual(len(expected), len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)

    def testVerify_007(self):
        """
        Test with a non-empty list containing valid and invalid entries (files,
        directories and links).
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        expected = [ [ "tree9", ], [ "tree9", "file001", ], [ "tree9", "file002", ],
                     [ "tree9", "dir001", ],
                     [ "tree9", "dir001", "dir001", ], [ "tree9", "dir001", "dir002", ],
                     [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                     [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                     [ "tree9", "dir002", ],
                     [ "tree9", "dir002", "dir001", ], [ "tree9", "dir002", "dir002", ],
                     [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                     [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ], ]
        if not platformSupportsLinks():
            self.failUnlessEqual(17, count)
        else:
            expected += [ [ "tree9", "dir001", "link003", ],
                          [ "tree9", "dir002", "link001", ], [ "tree9", "dir002", "link002", ],
                          [ "tree9", "link001", ], [ "tree9", "link002", ], ]
            self.failUnlessEqual(22, count)
        self.failUnlessEqual(len(expected), len(fsList))
        for i in range(1, 5):
            invalid = [ "tree9", "%s-%d" % (INVALID_FILE, i), ]  # file won't exist on disk
            fsList.append(self.buildPath(invalid[:]))
            expected.append(invalid)
        self.failUnlessEqual(len(expected), len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)  # copy, since buildPath modifies its argument
        ok = fsList.verify()
        self.failUnlessEqual(False, ok)
        self.failUnlessEqual(len(expected), len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)

    def testVerify_008(self):
        """
        Test with a non-empty list containing valid and invalid entries (some
        containing spaces).
        """
        self.extractTar("tree11")
        path = self.buildPath(["tree11", ])
        fsList = FilesystemList()
        count = fsList.addDirContents(path)
        expected = [ [ "tree11", ],
                     [ "tree11", "file001", ], [ "tree11", "file with spaces", ],
                     [ "tree11", "link001", ], [ "tree11", "link with spaces", ],
                     [ "tree11", "dir002", ], [ "tree11", "dir002", "file001", ],
                     [ "tree11", "dir002", "file002", ], [ "tree11", "dir002", "file003", ],
                     [ "tree11", "dir with spaces", ],
                     [ "tree11", "dir with spaces", "file001", ],
                     [ "tree11", "dir with spaces", "file with spaces", ],
                     [ "tree11", "dir with spaces", "link002", ],
                     [ "tree11", "dir with spaces", "link with spaces", ], ]
        if not platformSupportsLinks():
            self.failUnlessEqual(14, count)
        else:
            expected += [ [ "tree11", "link002", ], [ "tree11", "link003", ], ]
            self.failUnlessEqual(16, count)
        self.failUnlessEqual(len(expected), len(fsList))
        for i in range(1, 5):
            invalid = [ "tree11", "dir with spaces", "%s-%d" % (INVALID_FILE, i), ]  # file won't exist on disk
            fsList.append(self.buildPath(invalid[:]))
            expected.append(invalid)
        self.failUnlessEqual(len(expected), len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)  # copy, since buildPath modifies its argument
        ok = fsList.verify()
        self.failUnlessEqual(False, ok)
        self.failUnlessEqual(len(expected), len(fsList))
        for components in expected:
            self.failUnless(self.buildPath(components[:]) in fsList)


###########################
# TestBackupFileList class
###########################

class TestBackupFileList(unittest.TestCase):

    """Tests for the BackupFileList class."""

    ################
    # Setup methods
    ################

    def setUp(self):
        try:
            self.tmpdir = tempfile.mkdtemp()
            self.resources = findResources(RESOURCES, DATA_DIRS)
        except Exception, e:
            self.fail(e)

    def tearDown(self):
        try:
            removedir(self.tmpdir)
        except Exception:
            pass

    ##################
    # Utility methods
    ##################

    def extractTar(self, tarname):
        """Extracts a tarfile with a particular name."""
        extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

    def buildPath(self, components):
        """Builds a complete search path from a list of components."""
        components.insert(0, self.tmpdir)
        return buildPath(components)

    def tarPath(self, components):
        """Builds a complete search path from a list of components, compatible with Python tar output."""
        if platformWindows():
            return self.buildPath(components)[3:].replace("\\", "/")
        else:
            result = self.buildPath(components)
            if result[0:1] == os.path.sep:
                return result[1:]
            return result

    def buildRandomPath(self, maxlength, extension):
        """Builds a complete, randomly-named search path."""
        maxlength -= len(self.tmpdir)
        maxlength -= len(extension)
        components = [ self.tmpdir, randomFilename(maxlength, suffix=extension), ]
        return buildPath(components)

    ################
    # Test addDir()
    ################

    def testAddDir_001(self):
        """
        Test that function is overridden, no exclusions.
        Since this function calls the superclass by definition, we can skimp a
        bit on validation and only ensure that it seems to be overridden
        properly.
        """
        self.extractTar("tree5")
        backupList = BackupFileList()
        dirPath = self.buildPath(["tree5", "dir001"])
        count = backupList.addDir(dirPath)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(0, len(backupList))
        if platformSupportsLinks():
            dirPath = self.buildPath(["tree5", "dir002", "link001", ])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(1, count)
            self.failUnlessEqual([dirPath], backupList)

    def testAddDir_002(self):
        """
        Test that function is overridden, excludeFiles set.
        Since this function calls the superclass by definition, we can skimp a
        bit on validation and only ensure that it seems to be overridden
        properly.
        """
        self.extractTar("tree5")
        backupList = BackupFileList()
        backupList.excludeFiles = True
        dirPath = self.buildPath(["tree5", "dir001"])
        count = backupList.addDir(dirPath)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(0, len(backupList))
        if platformSupportsLinks():
            dirPath = self.buildPath(["tree5", "dir002", "link001", ])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(1, count)
            self.failUnlessEqual([dirPath], backupList)

    def testAddDir_003(self):
        """
        Test that function is overridden, excludeDirs set.
        Since this function calls the superclass by definition, we can skimp a
        bit on validation and only ensure that it seems to be overridden
        properly.
        """
        self.extractTar("tree5")
        backupList = BackupFileList()
        backupList.excludeDirs = True
        dirPath = self.buildPath(["tree5", "dir001"])
        count = backupList.addDir(dirPath)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(0, len(backupList))
        if platformSupportsLinks():
            dirPath = self.buildPath(["tree5", "dir002", "link001", ])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(0, count)
            self.failUnlessEqual(0, len(backupList))

    def testAddDir_004(self):
        """
        Test that function is overridden, excludeLinks set.
        Since this function calls the superclass by definition, we can skimp a
        bit on validation and only ensure that it seems to be overridden
        properly.
        """
        if platformSupportsLinks():
            self.extractTar("tree5")
            backupList = BackupFileList()
            backupList.excludeLinks = True
            dirPath = self.buildPath(["tree5", "dir001"])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(0, count)
            self.failUnlessEqual(0, len(backupList))
            dirPath = self.buildPath(["tree5", "dir002", "link001", ])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(0, count)
            self.failUnlessEqual(0, len(backupList))

    def testAddDir_005(self):
        """
        Test that function is overridden, excludePaths set.
        Since this function calls the superclass by definition, we can skimp a
        bit on validation and only ensure that it seems to be overridden
        properly.
        """
        self.extractTar("tree5")
        backupList = BackupFileList()
        backupList.excludePaths = [ NOMATCH_PATH ]
        dirPath = self.buildPath(["tree5", "dir001"])
        count = backupList.addDir(dirPath)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(0, len(backupList))
        if platformSupportsLinks():
            dirPath = self.buildPath(["tree5", "dir002", "link001", ])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(1, count)
            self.failUnlessEqual([dirPath], backupList)

    def testAddDir_006(self):
        """
        Test that function is overridden, excludePatterns set.
        Since this function calls the superclass by definition, we can skimp a
        bit on validation and only ensure that it seems to be overridden
        properly.
        """
        self.extractTar("tree5")
        backupList = BackupFileList()
        backupList.excludePatterns = [ NOMATCH_PATH ]
        dirPath = self.buildPath(["tree5", "dir001"])
        count = backupList.addDir(dirPath)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(0, len(backupList))
        if platformSupportsLinks():
            dirPath = self.buildPath(["tree5", "dir002", "link001", ])
            count = backupList.addDir(dirPath)
            self.failUnlessEqual(1, count)
            self.failUnlessEqual([dirPath], backupList)

    ###################
    # Test totalSize()
    ###################

    def testTotalSize_001(self):
        """
        Test on an empty list.
        """
        backupList = BackupFileList()
        size = backupList.totalSize()
        self.failUnlessEqual(0, size)

    def testTotalSize_002(self):
        """
        Test on a non-empty list containing only valid entries.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1835, size) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1116, size) def testTotalSize_004(self): """ Test on a non-empty list (some containing spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1705, size) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) 
self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1085, size) def testTotalSize_005(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1835, size) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1116, size) def testTotalSize_006(self): """ Test on a non-empty list containing a non-existent file. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) 
in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1835, size) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) size = backupList.totalSize() self.failUnlessEqual(1116, size) ######################### # Test generateSizeMap() ######################### def testGenerateSizeMap_001(self): """ Test on an empty list. 
""" backupList = BackupFileList() sizeMap = backupList.generateSizeMap() self.failUnlessEqual(0, len(sizeMap)) def testGenerateSizeMap_002(self): """ Test on a non-empty list containing only valid entries. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(10, len(sizeMap)) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "link004", 
]) ]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(15, len(sizeMap)) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", 
"file001", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ]) def testGenerateSizeMap_004(self): """ Test on a non-empty list (some containing spaces). """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) 
in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(11, len(sizeMap)) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link with spaces", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link002", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file001", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file002", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file003", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file with spaces", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file001", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "link with spaces", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "link001", ])]) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir 
with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(13, len(sizeMap)) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file001", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "file with spaces", ])]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link001", ])]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link002", ])]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link003", ])]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "link with spaces", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file001", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file002", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir002", "file003", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link002", ])]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree11", "dir with spaces", "link with spaces", ])]) def testGenerateSizeMap_005(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(10, len(sizeMap)) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) 
self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(15, len(sizeMap)) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ 
"tree9", "dir001", "link002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ]) def testGenerateSizeMap_006(self): """ Test on a non-empty list containing a non-existent file. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(10, len(sizeMap)) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "link004", 
]) ]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) sizeMap = backupList.generateSizeMap() self.failUnlessEqual(15, len(sizeMap)) self.failUnlessEqual(243, sizeMap[self.buildPath([ "tree9", "dir001", "file001", ]) ]) self.failUnlessEqual(268, sizeMap[self.buildPath([ "tree9", "dir001", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link001", ]) ]) self.failUnlessEqual(0, 
sizeMap[self.buildPath([ "tree9", "dir001", "link002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir001", "link003", ]) ]) self.failUnlessEqual(134, sizeMap[self.buildPath([ "tree9", "dir002", "file001", ]) ]) self.failUnlessEqual(74, sizeMap[self.buildPath([ "tree9", "dir002", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link003", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "dir002", "link004", ]) ]) self.failUnlessEqual(155, sizeMap[self.buildPath([ "tree9", "file001", ]) ]) self.failUnlessEqual(242, sizeMap[self.buildPath([ "tree9", "file002", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link001", ]) ]) self.failUnlessEqual(0, sizeMap[self.buildPath([ "tree9", "link002", ]) ]) ########################### # Test generateDigestMap() ########################### def testGenerateDigestMap_001(self): """ Test on an empty list. """ backupList = BackupFileList() digestMap = backupList.generateDigestMap() self.failUnlessEqual(0, len(digestMap)) def testGenerateDigestMap_002(self): """ Test on a non-empty list containing only valid entries. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) digestMap = backupList.generateDigestMap() self.failUnlessEqual(10, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "link003", 
])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) digestMap = backupList.generateDigestMap() self.failUnlessEqual(6, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])]) 
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])])

   def testGenerateDigestMap_003(self):
      """
      Test on a non-empty list containing only valid entries (some containing spaces).
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(11, len(digestMap))
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "link with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "link002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file003", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "file with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "link with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "link001", ])])
      else:
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(7, len(digestMap))
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "file with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir002", "file003", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])])

   def testGenerateDigestMap_004(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be possible).
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(10, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ]))  # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(6, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])])

   def testGenerateDigestMap_005(self):
      """
      Test on a non-empty list containing a non-existent file.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(10, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", INVALID_FILE, ]))  # file won't exist on disk
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         digestMap = backupList.generateDigestMap()
         self.failUnlessEqual(6, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[self.buildPath([ "tree9", "file002", ])])

   def testGenerateDigestMap_006(self):
      """
      Test on an empty list, passing stripPrefix not None.
      """
      backupList = BackupFileList()
      prefix = "whatever"
      digestMap = backupList.generateDigestMap(stripPrefix=prefix)
      self.failUnlessEqual(0, len(digestMap))

   def testGenerateDigestMap_007(self):
      """
      Test on a non-empty list containing only valid entries, passing stripPrefix not None.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         prefix = normalizeDir(self.buildPath(["tree9", ]))
         digestMap = backupList.generateDigestMap(stripPrefix=prefix)
         self.failUnlessEqual(10, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "\\", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         prefix = normalizeDir(self.buildPath(["tree9", ]))
         digestMap = backupList.generateDigestMap(stripPrefix=prefix)
         self.failUnlessEqual(6, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])])

   def testGenerateDigestMap_008(self):
      """
      Test on a non-empty list containing only valid entries (some containing spaces),
      passing stripPrefix not None.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         prefix = normalizeDir(self.buildPath(["tree11", ]))
         digestMap = backupList.generateDigestMap(stripPrefix=prefix)
         self.failUnlessEqual(11, len(digestMap))
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "file with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "link with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir with spaces", "link002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir002", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "dir002", "file003", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "link with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "link001", ])])
      else:
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(backupList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList)
         prefix = normalizeDir(self.buildPath(["tree11", ]))
         digestMap = backupList.generateDigestMap(stripPrefix=prefix)
         self.failUnlessEqual(7, len(digestMap))
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file with spaces", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir002", "file003", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir with spaces", "file001", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "dir with spaces", "file with spaces", ])])

   def testGenerateDigestMap_009(self):
      """
      Test on a non-empty list containing a directory (which shouldn't be possible),
      passing stripPrefix not None.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ]))  # back-door around addDir()
         self.failUnlessEqual(11, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         prefix = normalizeDir(self.buildPath(["tree9", ]))
         digestMap = backupList.generateDigestMap(stripPrefix=prefix)
         self.failUnlessEqual(10, len(digestMap))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "\\", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         backupList.append(self.buildPath([ "tree9", "dir001", ]))  # back-door around addDir()
         self.failUnlessEqual(16, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9",
"link002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(6, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])]) def testGenerateDigestMap_010(self): """ Test on a non-empty list containing a non-existent file, passing stripPrefix not None. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(10, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "\\", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "\\", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "\\", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "\\", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "\\", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "\\", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) prefix = normalizeDir(self.buildPath(["tree9", ])) digestMap = backupList.generateDigestMap(stripPrefix=prefix) self.failUnlessEqual(6, len(digestMap)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", digestMap[buildPath([ "/", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", digestMap[buildPath([ "/", "dir001", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", digestMap[buildPath([ "/", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", digestMap[buildPath([ "/", "dir002", "file002", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", digestMap[buildPath([ "/", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", digestMap[buildPath([ "/", "file002", ])]) ######################## # Test generateFitted() ######################## def testGenerateFitted_001(self): """ Test on an empty list. 
""" backupList = BackupFileList() fittedList = backupList.generateFitted(2000) self.failUnlessEqual(0, len(fittedList)) def testGenerateFitted_002(self): """ Test on a non-empty list containing only valid entries, all of which fit. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(10, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) 
self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(15, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList) def testGenerateFitted_003(self): """ Test on a non-empty list containing only valid entries (some containing spaces), all of which fit. """ self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in 
backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(11, len(fittedList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fittedList) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) 
self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(13, len(fittedList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fittedList) def testGenerateFitted_004(self): """ Test on a non-empty list containing only valid entries, some of which fit. We can get some strange behavior on Windows, which hits the "links not supported" case. The file tree9/dir002/file002 is 74 bytes, and is supposed to be the only file included because links are not recognized. However, link004 points at file002, and apparently Windows (sometimes?) sees link004 as a real file with a size of 74 bytes. Since only one of the two fits in the fitted list, we just check for one or the other. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) fittedList = backupList.generateFitted(80) self.failUnlessEqual(1, len(fittedList)) self.failUnless((self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) or (self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) 
self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) fittedList = backupList.generateFitted(80) self.failUnlessEqual(10, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList) def testGenerateFitted_005(self): """ Test on a non-empty list containing only valid entries, none of which fit. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) fittedList = backupList.generateFitted(0) self.failUnlessEqual(0, len(fittedList)) fittedList = backupList.generateFitted(50) self.failUnlessEqual(0, len(fittedList)) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) fittedList = backupList.generateFitted(0) self.failUnlessEqual(0, len(fittedList)) fittedList = backupList.generateFitted(50) self.failUnlessEqual(9, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList) def testGenerateFitted_006(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(10, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ 
"tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(15, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList) def testGenerateFitted_007(self): """ Test on a non-empty list containing a non-existent file. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) fittedList = backupList.generateFitted(2000) self.failUnlessEqual(10, len(fittedList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) 
            self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
            fittedList = backupList.generateFitted(2000)
            self.failUnlessEqual(15, len(fittedList))
            self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "file001", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in fittedList)
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in fittedList)

    ######################
    # Test generateSpan()
    ######################

    def testGenerateSpan_001(self):
        """
        Test on an empty list.
        """
        backupList = BackupFileList()
        spanSet = backupList.generateSpan(2000)
        self.failUnlessEqual(0, len(spanSet))

    def testGenerateSpan_002(self):
        """
        Test a set of files that all fit in one span item.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        backupList = BackupFileList()
        count = backupList.addDirContents(path)
        if platformSupportsLinks():
            self.failUnlessEqual(15, count)
            self.failUnlessEqual(15, len(backupList))
            self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
            spanSet = backupList.generateSpan(2000)
            self.failUnlessEqual(1, len(spanSet))
            spanItem = spanSet[0]
            self.failUnlessEqual(15, len(spanItem.fileList))
            self.failUnlessEqual(1116, spanItem.size)
            self.failUnlessEqual(2000, spanItem.capacity)
            self.failUnlessEqual((1116.0/2000.0)*100.0, spanItem.utilization)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList)
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList)

    def testGenerateSpan_003(self):
        """
        Test a set of files that all fit in two span items.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if platformSupportsLinks(): self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) spanSet = backupList.generateSpan(760, "best_fit") self.failUnlessEqual(2, len(spanSet)) spanItem = spanSet[0] self.failUnlessEqual(12, len(spanItem.fileList)) self.failUnlessEqual(753, spanItem.size) self.failUnlessEqual(760, spanItem.capacity) self.failUnlessEqual((753.0/760.0)*100.0, spanItem.utilization) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList) spanItem = spanSet[1] self.failUnlessEqual(3, len(spanItem.fileList)) self.failUnlessEqual(363, spanItem.size) self.failUnlessEqual(760, spanItem.capacity) self.failUnlessEqual((363.0/760.0)*100.0, spanItem.utilization) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList) def testGenerateSpan_004(self): """ Test a set of files that all fit in three span items. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if platformSupportsLinks(): self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) spanSet = backupList.generateSpan(515, "best_fit") self.failUnlessEqual(3, len(spanSet)) spanItem = spanSet[0] self.failUnlessEqual(11, len(spanItem.fileList)) self.failUnlessEqual(511, spanItem.size) self.failUnlessEqual(515, spanItem.capacity) self.failUnlessEqual((511.0/515.0)*100.0, spanItem.utilization) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in spanItem.fileList) 
self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in spanItem.fileList) spanItem = spanSet[1] self.failUnlessEqual(3, len(spanItem.fileList)) self.failUnlessEqual(471, spanItem.size) self.failUnlessEqual(515, spanItem.capacity) self.failUnlessEqual((471.0/515.0)*100.0, spanItem.utilization) self.failUnless(self.buildPath([ "tree9", "file002", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in spanItem.fileList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in spanItem.fileList) spanItem = spanSet[2] self.failUnlessEqual(1, len(spanItem.fileList)) self.failUnlessEqual(134, spanItem.size) self.failUnlessEqual(515, spanItem.capacity) self.failUnlessEqual((134.0/515.0)*100.0, spanItem.utilization) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in spanItem.fileList) def testGenerateSpan_005(self): """ Test a set of files where one of the files does not fit in the capacity. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if platformSupportsLinks(): self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) self.failUnlessRaises(ValueError, backupList.generateSpan, 250, "best_fit") ######################### # Test generateTarfile() ######################### def testGenerateTarfile_001(self): """ Test on an empty list. """ backupList = BackupFileList() tarPath = self.buildPath(["file.tar", ]) self.failUnlessRaises(ValueError, backupList.generateTarfile, tarPath) self.failUnless(not os.path.exists(tarPath)) def testGenerateTarfile_002(self): """ Test on a non-empty list containing a directory (which shouldn't be possible). 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(11, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001/" ]) in tarList or self.tarPath([ "tree9", "dir001//" ]) in tarList # Grr... Python 2.5 behavior differs or self.tarPath([ "tree9", "dir001", ]) in tarList) # Grr... 
Python 2.6 behavior differs self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", "dir001", ])) # back-door around addDir() self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001" ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in 
backupList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
            tarPath = self.buildPath(["file.tar", ])
            backupList.generateTarfile(tarPath)
            self.failUnless(tarfile.is_tarfile(tarPath))
            tarFile = tarfile.open(tarPath)
            tarList = tarFile.getnames()
            tarFile.close()
            self.failUnlessEqual(16, len(tarList))
            self.failUnless(self.tarPath([ "tree9", "dir001/" ]) in tarList
                            or self.tarPath([ "tree9", "dir001//" ]) in tarList  # Grr... Python 2.5 behavior differs
                            or self.tarPath([ "tree9", "dir001", ]) in tarList)  # Grr... Python 2.6 behavior differs
            self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList)
            self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList)

    def testGenerateTarfile_003(self):
        """
        Test on a non-empty list containing a non-existent file, ignore=False.
        """
        self.extractTar("tree9")
        path = self.buildPath(["tree9"])
        backupList = BackupFileList()
        count = backupList.addDirContents(path)
        if not platformSupportsLinks():
            self.failUnlessEqual(10, count)
            self.failUnlessEqual(10, len(backupList))
            backupList.append(self.buildPath([ "tree9", INVALID_FILE, ]))  # file won't exist on disk
            self.failUnlessEqual(11, len(backupList))
            self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
            tarPath = self.buildPath(["file.tar", ])
            self.failUnlessRaises(tarfile.TarError, backupList.generateTarfile, tarPath, ignore=False)
            self.failUnless(not os.path.exists(tarPath))
        else:
            self.failUnlessEqual(15, count)
            self.failUnlessEqual(15, len(backupList))
            backupList.append(self.buildPath([ "tree9", INVALID_FILE, ]))  # file won't exist on disk
            self.failUnlessEqual(16, len(backupList))
            self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
            self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
            tarPath = self.buildPath(["file.tar", ])
            self.failUnlessRaises(tarfile.TarError, backupList.generateTarfile, tarPath, ignore=False)
            self.failUnless(not os.path.exists(tarPath))

    def testGenerateTarfile_004(self):
        """
        Test on a non-empty list containing a non-existent file, ignore=True.
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath, ignore=True) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", 
]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) backupList.append(self.buildPath([ "tree9", INVALID_FILE, ])) # file won't exist on disk self.failUnlessEqual(16, len(backupList)) self.failUnless(self.buildPath([ "tree9", INVALID_FILE ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath, ignore=True) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in 
tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_005(self): """ Test on a non-empty list containing only valid entries, with an invalid mode. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) self.failUnlessRaises(ValueError, backupList.generateTarfile, tarPath, mode="bogus") self.failUnless(not os.path.exists(tarPath)) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) self.failUnlessRaises(ValueError, backupList.generateTarfile, tarPath, mode="bogus") self.failUnless(not os.path.exists(tarPath)) def testGenerateTarfile_006(self): """ Test on a non-empty list containing only valid entries, default mode. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, 
len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in 
backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_007(self): """ Test on a non-empty list (some containing spaces), default mode. 
""" self.extractTar("tree11") path = self.buildPath(["tree11", ]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(backupList)) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(11, len(tarList)) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file003", ]) in 
tarList) self.failUnless(self.tarPath([ "tree11", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link001", ]) in tarList) else: self.failUnlessEqual(13, count) self.failUnlessEqual(13, len(backupList)) self.failUnless(self.buildPath([ "tree11", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(13, len(tarList)) self.failUnless(self.tarPath([ "tree11", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link002", ]) in tarList) self.failUnless(self.tarPath([ 
"tree11", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "link with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir002", "file003", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "file with spaces", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree11", "dir with spaces", "link with spaces", ]) in tarList) def testGenerateTarfile_008(self): """ Test on a non-empty list containing only valid entries, 'tar' mode. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) 
tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in 
backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar", ]) backupList.generateTarfile(tarPath) self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_009(self): """ Test on a non-empty list containing only valid entries, 'targz' mode. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar.gz", ]) backupList.generateTarfile(tarPath, mode="targz") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", 
"file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar.gz", ]) backupList.generateTarfile(tarPath, mode="targz") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) 
self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_010(self): """ Test on a non-empty list containing only valid entries, 'tarbz2' mode. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildPath(["file.tar.bz2", ]) backupList.generateTarfile(tarPath, mode="tarbz2") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() 
self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ 
"tree9", "link002", ]) in backupList) tarPath = self.buildPath(["file.tar.bz2", ]) backupList.generateTarfile(tarPath, mode="tarbz2") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_011(self): """ Test on a non-empty list containing only valid entries, 'tar' mode, long target name. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildRandomPath(255, ".tar") backupList.generateTarfile(tarPath, mode="tar") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", 
"file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) tarPath = self.buildRandomPath(255, ".tar") backupList.generateTarfile(tarPath, mode="tar") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = tarFile.getnames() tarFile.close() self.failUnlessEqual(15, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) 
self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "link002", ]) in tarList) def testGenerateTarfile_012(self): """ Test on a non-empty list containing only valid entries, 'targz' mode, long target name. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) tarPath = self.buildRandomPath(255, ".tar.gz") backupList.generateTarfile(tarPath, mode="targz") self.failUnless(tarfile.is_tarfile(tarPath)) tarFile = tarfile.open(tarPath) tarList = 
tarFile.getnames() tarFile.close() self.failUnlessEqual(10, len(tarList)) self.failUnless(self.tarPath([ "tree9", "dir001", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir001", "link002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "file002", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link003", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "dir002", "link004", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file001", ]) in tarList) self.failUnless(self.tarPath([ "tree9", "file002", ]) in tarList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) 
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         tarPath = self.buildRandomPath(255, ".tar")
         backupList.generateTarfile(tarPath, mode="targz")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(15, len(tarList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.tarPath(entry) in tarList)

   def testGenerateTarfile_013(self):
      """
      Test on a non-empty list containing only valid entries, 'tarbz2' mode,
      long target name.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         tarPath = self.buildRandomPath(255, ".tar.bz2")
         backupList.generateTarfile(tarPath, mode="tarbz2")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(10, len(tarList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.tarPath(entry) in tarList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         tarPath = self.buildRandomPath(255, ".tar")
         backupList.generateTarfile(tarPath, mode="tarbz2")
         self.failUnless(tarfile.is_tarfile(tarPath))
         tarFile = tarfile.open(tarPath)
         tarList = tarFile.getnames()
         tarFile.close()
         self.failUnlessEqual(15, len(tarList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.tarPath(entry) in tarList)

   def testGenerateTarfile_014(self):
      """
      Test behavior of the flat flag.
      """
      self.extractTar("tree11")
      backupList = BackupFileList()
      path = self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])
      backupList.addFile(path)
      path = self.buildPath([ "tree11", "dir with spaces", "file001", ])
      backupList.addFile(path)
      path = self.buildPath([ "tree11", "dir002", "file002", ])
      backupList.addFile(path)
      path = self.buildPath([ "tree11", "dir002", "file003", ])
      backupList.addFile(path)
      self.failUnlessEqual(4, len(backupList))
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in backupList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in backupList)
      self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in backupList)
      self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in backupList)
      tarPath = self.buildPath(["file.tar", ])
      backupList.generateTarfile(tarPath, flat=True)
      self.failUnless(tarfile.is_tarfile(tarPath))
      tarFile = tarfile.open(tarPath)
      tarList = tarFile.getnames()
      tarFile.close()
      self.failUnlessEqual(4, len(tarList))
      self.failUnless("file with spaces" in tarList)
      self.failUnless("file001" in tarList)
      self.failUnless("file002" in tarList)
      self.failUnless("file003" in tarList)


   #########################
   # Test removeUnchanged()
   #########################

   def testRemoveUnchanged_001(self):
      """
      Test on an empty list with an empty digest map.
      """
      digestMap = {}
      backupList = BackupFileList()
      self.failUnlessEqual(0, len(backupList))
      count = backupList.removeUnchanged(digestMap)
      self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))

   def testRemoveUnchanged_002(self):
      """
      Test on an empty list with a non-empty digest map.
      """
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ])          :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file002", ])          :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      self.failUnlessEqual(0, len(backupList))
      count = backupList.removeUnchanged(digestMap)
      self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))

   def testRemoveUnchanged_003(self):
      """
      Test on a non-empty list with an empty digest map.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)

   def testRemoveUnchanged_004(self):
      """
      Test with a digest map containing only entries that are not in the list.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir003", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir004", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir004", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file003", ])          :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file004", ])          :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         for entry in [ [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)

   def testRemoveUnchanged_005(self):
      """
      Test with a digest map containing only entries that are in the list,
      with non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e8AAAAAAAAAAAAAAAAAA7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecAAAAAAAAAAAAAAAAAA95d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b64AAAAAAAAAAAAAAAAAA5b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1cAAAAAAAAAAAAAAAAAA5d72d26cb",
                    self.buildPath([ "tree9", "file001", ])          :"3ef0b16a6237aAAAAAAAAAAAAAAAAAA555973847",
                    self.buildPath([ "tree9", "file002", ])          :"fae89085ee97bAAAAAAAAAAAAAAAAAAbb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)

   def testRemoveUnchanged_006(self):
      """
      Test with a digest map containing only entries that are in the list,
      with matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ])          :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file002", ])          :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], [ "tree9", "file002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(4, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir001", "link003", ], [ "tree9", "dir002", "file001", ],
                        [ "tree9", "dir002", "file002", ], [ "tree9", "dir002", "link001", ],
                        [ "tree9", "dir002", "link002", ], [ "tree9", "dir002", "link003", ],
                        [ "tree9", "dir002", "link004", ], [ "tree9", "file001", ],
                        [ "tree9", "file002", ], [ "tree9", "link001", ], [ "tree9", "link002", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(9, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)

   def testRemoveUnchanged_007(self):
      """
      Test with a digest map containing both entries that are and are not in
      the list, with non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531cCCCCCCCCCCCCCCCCCCCCCCCCCe77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a2CCCCCCCCCCCCCCCCCCCCCCCCCd6cba",
                    self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26CCCCCCCCCCCCCCCCCCCCCCCCC86c4b",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014CCCCCCCCCCCCCCCCCCCCCCCCCd26cb",
                    self.buildPath([ "tree9", "file001", ])          :"3ef0b16a62CCCCCCCCCCCCCCCCCCCCCCCCC73847",
                    self.buildPath([ "tree9", "file003", ])          :"fae89085eeCCCCCCCCCCCCCCCCCCCCCCCCC769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         for entry in [ [ "tree9", "dir001", "file001", ], [ "tree9", "dir001", "file002", ],
                        [ "tree9", "dir001", "link001", ], [ "tree9", "dir001", "link002", ],
                        [ "tree9", "dir002", "file001", ], [ "tree9", "dir002", "file002", ],
                        [ "tree9", "dir002", "link003", ], [ "tree9", "dir002", "link004", ],
                        [ "tree9", "file001", ], ]:
            self.failUnless(self.buildPath(entry) in backupList)
self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", 
"dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) count = backupList.removeUnchanged(digestMap) self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it self.failUnlessEqual(0, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList) def testRemoveUnchanged_008(self): """ Test with a digest map containing both entries that are and are not in the list, with matching digests. 
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(7, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9",
                                          "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(12, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)

   def testRemoveUnchanged_009(self):
      """
      Test with a digest map containing both entries that are and are not in
      the list, with matching and non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531AAAAAAAAAAAAAAAAAAAAAAAe21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(2, count)
         self.failUnlessEqual(8, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9",
                                          "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         count = backupList.removeUnchanged(digestMap)
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(2, count)
         self.failUnlessEqual(13, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)

   def testRemoveUnchanged_010(self):
      """
      Test on an empty list with an empty digest map.
      """
      digestMap = {}
      backupList = BackupFileList()
      self.failUnlessEqual(0, len(backupList))
      (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
      self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      self.failUnlessEqual(0, len(newDigest))

   def testRemoveUnchanged_011(self):
      """
      Test on an empty list with a non-empty digest map.
      """
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      self.failUnlessEqual(0, len(backupList))
      (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
      self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(0, len(backupList))
      self.failUnlessEqual(0, len(newDigest))

   def testRemoveUnchanged_012(self):
      """
      Test on a non-empty list with an empty digest map.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9",
                                          "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnlessEqual(10, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9",
                                          "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ])
                         in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   def testRemoveUnchanged_013(self):
      """
      Test with a digest map containing only entries that are not in the list.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir003", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir004", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir004", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file003", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file004", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001",
                                          "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnlessEqual(10, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee",
                              newDigest[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   def testRemoveUnchanged_014(self):
      """
      Test with a digest map containing only entries that are in the list,
      with non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e8AAAAAAAAAAAAAAAAAA7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecAAAAAAAAAAAAAAAAAA95d1d6cba",
                    self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b64AAAAAAAAAAAAAAAAAA5b8786c4b",
                    self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1cAAAAAAAAAAAAAAAAAA5d72d26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237aAAAAAAAAAAAAAAAAAA555973847",
                    self.buildPath([ "tree9", "file002", ]) :"fae89085ee97bAAAAAAAAAAAAAAAAAAbb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnlessEqual(10, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001",
])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) def testRemoveUnchanged_015(self): """ Test with a digest map containing only entries that are in the list, with matching digests. """ self.extractTar("tree9") path = self.buildPath(["tree9"]) digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee", self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba", self.buildPath([ "tree9", "dir002", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b", self.buildPath([ "tree9", "dir002", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb", self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847", self.buildPath([ "tree9", "file002", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", } backupList = BackupFileList() count = backupList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(10, count) self.failUnlessEqual(10, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList) (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633 self.failUnless(isinstance(backupList, 
BackupFileList)) # make sure we just replaced it self.failUnlessEqual(6, count) self.failUnlessEqual(4, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList) self.failUnlessEqual(10, len(newDigest)) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])]) self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])]) self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])]) self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])]) self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])]) self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])]) self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])]) else: self.failUnlessEqual(15, count) self.failUnlessEqual(15, len(backupList)) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList) 
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(6, count)
         self.failUnlessEqual(9, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   def testRemoveUnchanged_016(self):
      """
      Test with a digest map containing both entries that are and are not in
      the list, with non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531cCCCCCCCCCCCCCCCCCCCCCCCCCe77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a2CCCCCCCCCCCCCCCCCCCCCCCCCd6cba",
                    self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26CCCCCCCCCCCCCCCCCCCCCCCCC86c4b",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014CCCCCCCCCCCCCCCCCCCCCCCCCd26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a62CCCCCCCCCCCCCCCCCCCCCCCCC73847",
                    self.buildPath([ "tree9", "file003", ]) :"fae89085eeCCCCCCCCCCCCCCCCCCCCCCCCC769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnlessEqual(10, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   def testRemoveUnchanged_017(self):
      """
      Test with a digest map containing both entries that are and are not in
      the list, with matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531c7e897cd3df90ed76355de7e21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(7, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnlessEqual(10, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(3, count)
         self.failUnlessEqual(12, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   def testRemoveUnchanged_018(self):
      """
      Test with a digest map containing both entries that are and are not in
      the list, with matching and non-matching digests.
      """
      self.extractTar("tree9")
      path = self.buildPath(["tree9"])
      digestMap = { self.buildPath([ "tree9", "dir001", "file001", ]):"4ff529531AAAAAAAAAAAAAAAAAAAAAAAe21e77ee",
                    self.buildPath([ "tree9", "dir001", "file002", ]):"9d473094a22ecf2ae299c25932c941795d1d6cba",
                    self.buildPath([ "tree9", "dir003", "file001", ]):"2f68cdda26b643ca0e53be6348ae1255b8786c4b",
                    self.buildPath([ "tree9", "dir003", "file002", ]):"0cc03b3014d1ca7188264677cf01f015d72d26cb",
                    self.buildPath([ "tree9", "file001", ]) :"3ef0b16a6237af9200b7a46c1987d6a555973847",
                    self.buildPath([ "tree9", "file003", ]) :"fae89085ee97b57ccefa7e30346c573bb0a769db", }
      backupList = BackupFileList()
      count = backupList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(2, count)
         self.failUnlessEqual(8, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnlessEqual(10, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "link001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "link002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "link003", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "link004", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         (count, newDigest) = backupList.removeUnchanged(digestMap, captureDigest=True) # pylint: disable=W0633
         self.failUnless(isinstance(backupList, BackupFileList)) # make sure we just replaced it
         self.failUnlessEqual(2, count)
         self.failUnlessEqual(13, len(backupList))
         self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "file002", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in backupList)
         self.failUnless(self.buildPath([ "tree9", "link002", ]) in backupList)
         self.failUnlessEqual(6, len(newDigest))
         self.failUnlessEqual("4ff529531c7e897cd3df90ed76355de7e21e77ee", newDigest[self.buildPath([ "tree9", "dir001", "file001", ])])
         self.failUnlessEqual("9d473094a22ecf2ae299c25932c941795d1d6cba", newDigest[self.buildPath([ "tree9", "dir001", "file002", ])])
         self.failUnlessEqual("2f68cdda26b643ca0e53be6348ae1255b8786c4b", newDigest[self.buildPath([ "tree9", "dir002", "file001", ])])
         self.failUnlessEqual("0cc03b3014d1ca7188264677cf01f015d72d26cb", newDigest[self.buildPath([ "tree9", "dir002", "file002", ])])
         self.failUnlessEqual("3ef0b16a6237af9200b7a46c1987d6a555973847", newDigest[self.buildPath([ "tree9", "file001", ])])
         self.failUnlessEqual("fae89085ee97b57ccefa7e30346c573bb0a769db", newDigest[self.buildPath([ "tree9", "file002", ])])

   #########################
   # Test _generateDigest()
   #########################

   # pylint: disable=E1101
   def testGenerateDigest_001(self):
      """
      Test that _generateDigest gives back the same result as the slower
      simplistic implementation for a set of files (just using all of the
      resource files).
      """
      for key in self.resources.keys():
         path = self.resources[key]
         if platformRequiresBinaryRead():
            try:
               import hashlib
               digest1 = hashlib.sha1(open(path, mode="rb").read()).hexdigest()
            except ImportError:
               import sha
               digest1 = sha.new(open(path, mode="rb").read()).hexdigest()
         else:
            try:
               import hashlib
               digest1 = hashlib.sha1(open(path).read()).hexdigest()
            except ImportError:
               import sha
               digest1 = sha.new(open(path).read()).hexdigest()
         digest2 = BackupFileList._generateDigest(path)
         self.failUnlessEqual(digest1, digest2, "Digest for %s varies: [%s] vs [%s]." % (path, digest1, digest2))


##########################
# TestPurgeItemList class
##########################

class TestPurgeItemList(unittest.TestCase):

   """Tests for the PurgeItemList class."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except: pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, tarname):
      """Extracts a tarfile with a particular name."""
      extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   def pathPattern(self, path):
      """Returns a properly-escaped regular expression pattern matching the indicated path."""
      return ".*%s.*" % path.replace("\\", "\\\\")

   ########################
   # Test addDirContents()
   ########################

   def testAddDirContents_001(self):
      """
      Attempt to add a directory that doesn't exist; no exclusions.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_002(self):
      """
      Attempt to add a file; no exclusions.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_003(self):
      """
      Attempt to add a soft link; no exclusions.
      """
      if platformSupportsLinks():
         self.extractTar("tree5")
         path = self.buildPath(["tree5", "link001"])            # link to a file
         purgeList = PurgeItemList()
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)
         path = self.buildPath(["tree5", "dir002", "link001"])  # link to a dir
         purgeList = PurgeItemList()
         count = purgeList.addDir(path)
         self.failUnlessEqual(1, count)
         self.failUnlessEqual([path], purgeList)

   def testAddDirContents_004(self):
      """
      Attempt to add an empty directory containing ignore file; no exclusions.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_005(self):
      """
      Attempt to add an empty directory; no exclusions.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_006(self):
      """
      Attempt to add a non-empty directory containing ignore file; no exclusions.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_007(self):
      """
      Attempt to add a non-empty directory; no exclusions.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(7, count)
      self.failUnlessEqual(7, len(purgeList))
      self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList)

   def testAddDirContents_008(self):
      """
      Attempt to add a directory that doesn't exist; excludeFiles set.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeFiles = True
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_009(self):
      """
      Attempt to add a file; excludeFiles set.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      purgeList.excludeFiles = True
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_010(self):
      """
      Attempt to add a soft link; excludeFiles set.
      """
      if platformSupportsLinks():
         self.extractTar("tree5")
         path = self.buildPath(["tree5", "link001"])            # link to a file
         purgeList = PurgeItemList()
         purgeList.excludeFiles = True
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)
         path = self.buildPath(["tree5", "dir002", "link001"])  # link to a dir
         purgeList = PurgeItemList()
         purgeList.excludeFiles = True
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual([], purgeList)

   def testAddDirContents_011(self):
      """
      Attempt to add an empty directory containing ignore file; excludeFiles set.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeFiles = True
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_012(self):
      """
      Attempt to add an empty directory; excludeFiles set.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeFiles = True
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_013(self):
      """
      Attempt to add a non-empty directory containing ignore file; excludeFiles set.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeFiles = True
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_014(self):
      """
      Attempt to add a non-empty directory; excludeFiles set.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeFiles = True
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(4, count)
      self.failUnlessEqual(4, len(purgeList))
      self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList)

   def testAddDirContents_015(self):
      """
      Attempt to add a directory that doesn't exist; excludeDirs set.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeDirs = True
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_016(self):
      """
      Attempt to add a file; excludeDirs set.
""" self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_017(self): """ Attempt to add a soft link; excludeDirs set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeDirs = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_018(self): """ Attempt to add an empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_019(self): """ Attempt to add an empty directory; excludeDirs set. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_020(self): """ Attempt to add an non-empty directory containing ignore file; excludeDirs set. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_021(self): """ Attempt to add an non-empty directory; excludeDirs set. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeDirs = True count = purgeList.addDirContents(path) self.failUnlessEqual(3, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_023(self): """ Attempt to add a directory that doesn't exist; excludeLinks set. """ if platformSupportsLinks(): path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludeLinks = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_024(self): """ Attempt to add a file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_025(self): """ Attempt to add a soft link; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludeLinks = True self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_026(self): """ Attempt to add an empty directory containing ignore file; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_027(self): """ Attempt to add an empty directory; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_028(self): """ Attempt to add an non-empty directory containing ignore file; excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_029(self): """ Attempt to add an non-empty directory; excludeLinks set. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(6, count) self.failUnlessEqual(6, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) def testAddDirContents_030(self): """ Attempt to add a directory that doesn't exist; with excludePaths including the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_031(self): """ Attempt to add a file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_032(self): """ Attempt to add a soft link; with excludePaths including the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePaths = [ path ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_033(self): """ Attempt to add an empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_034(self): """ Attempt to add an empty directory; with excludePaths including the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_035(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_036(self): """ Attempt to add an non-empty directory; with excludePaths including the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ path ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_037(self): """ Attempt to add a directory that doesn't exist; with excludePaths not including the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_038(self): """ Attempt to add a file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_039(self): """ Attempt to add a soft link; with excludePaths not including the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_040(self): """ Attempt to add an empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_041(self): """ Attempt to add an empty directory; with excludePaths not including the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_042(self): """ Attempt to add an non-empty directory containing ignore file; with excludePaths not including the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_043(self): """ Attempt to add an non-empty directory; with excludePaths not including the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePaths = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_044(self): """ Attempt to add a directory that doesn't exist; with excludePatterns matching the path. 
""" path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_045(self): """ Attempt to add a file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_046(self): """ Attempt to add a soft link; with excludePatterns matching the path. """ if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_047(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_048(self): """ Attempt to add an empty directory; with excludePatterns matching the path. 
""" self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_049(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_050(self): """ Attempt to add an non-empty directory; with excludePatterns matching the main directory path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ self.pathPattern(path) ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_051(self): """ Attempt to add a directory that doesn't exist; with excludePatterns not matching the path. """ path = self.buildPath([INVALID_FILE]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_052(self): """ Attempt to add a file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "file001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) def testAddDirContents_053(self): """ Attempt to add a soft link; with excludePatterns not matching the path. 
""" if platformSupportsLinks(): self.extractTar("tree5") path = self.buildPath(["tree5", "link001"]) # link to a file purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] self.failUnlessRaises(ValueError, purgeList.addDirContents, path) path = self.buildPath(["tree5", "dir002", "link001"]) # link to a dir purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_054(self): """ Attempt to add an empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree7") path = self.buildPath(["tree7", "dir001"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_055(self): """ Attempt to add an empty directory; with excludePatterns not matching the path. """ self.extractTar("tree8") path = self.buildPath(["tree8", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_056(self): """ Attempt to add an non-empty directory containing ignore file; with excludePatterns not matching the path. """ self.extractTar("tree5") path = self.buildPath(["tree5", "dir008"]) purgeList = PurgeItemList() purgeList.ignoreFile = "ignore" purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testAddDirContents_057(self): """ Attempt to add an non-empty directory; with excludePatterns not matching the main directory path. 
""" self.extractTar("tree5") path = self.buildPath(["tree5", "dir001"]) purgeList = PurgeItemList() purgeList.excludePatterns = [ NOMATCH_PATH ] count = purgeList.addDirContents(path) self.failUnlessEqual(7, count) self.failUnlessEqual(7, len(purgeList)) self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList) def testAddDirContents_058(self): """ Attempt to add a large tree with no exclusions. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", 
"file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir002", "file005", ]) in purgeList)
         for parts in [ [ "tree6", "dir003", "dir002", "link001", ],
                        [ "tree6", "dir003", "dir002", "link002", ],
                        [ "tree6", "dir003", "dir002", "link004", ],
                        [ "tree6", "dir003", "file001", ],
                        [ "tree6", "dir003", "file002", ],
                        [ "tree6", "dir003", "file003", ],
                        [ "tree6", "dir003", "file004", ],
                        [ "tree6", "dir003", "file005", ],
                        [ "tree6", "dir003", "file006", ],
                        [ "tree6", "dir003", "file007", ],
                        [ "tree6", "dir003", "file008", ],
                        [ "tree6", "dir003", "file009", ],
                        [ "tree6", "dir003", "ignore", ],
                        [ "tree6", "dir003", "link002", ],
                        [ "tree6", "dir003", "link003", ],
                        [ "tree6", "dir003", "link005", ],
                        [ "tree6", "file001", ],
                        [ "tree6", "file002", ],
                        [ "tree6", "link001", ], ]:
            self.failUnless(self.buildPath(parts) in purgeList)
      else:
         self.failUnlessEqual(135, count)
         self.failUnlessEqual(135, len(purgeList))
         for parts in [ [ "tree6", "dir001", "dir001", "dir001", ],
                        [ "tree6", "dir001", "dir001", "dir002", ],
                        [ "tree6", "dir001", "dir001", "dir003", ],
                        [ "tree6", "dir001", "dir001", "file001", ],
                        [ "tree6", "dir001", "dir001", "file002", ],
                        [ "tree6", "dir001", "dir001", "file003", ],
                        [ "tree6", "dir001", "dir001", "file004", ],
                        [ "tree6", "dir001", "dir001", "file005", ],
                        [ "tree6", "dir001", "dir001", "file006", ],
                        [ "tree6", "dir001", "dir001", "file007", ],
                        [ "tree6", "dir001", "dir001", "ignore", ],
                        [ "tree6", "dir001", "dir001", ],
                        [ "tree6", "dir001", "dir001", "link001", ],
                        [ "tree6", "dir001", "dir001", "link002", ],
                        [ "tree6", "dir001", "dir001", "link003", ],
                        [ "tree6", "dir001", "dir002", "dir001", ],
                        [ "tree6", "dir001", "dir002", "dir002", ],
                        [ "tree6", "dir001", "dir002", "file001", ],
                        [ "tree6", "dir001", "dir002", "file002", ],
                        [ "tree6", "dir001", "dir002", "file003", ],
                        [ "tree6", "dir001", "dir002", ],
                        [ "tree6", "dir001", "dir002", "link001", ],
                        [ "tree6", "dir001", "dir002", "link002", ],
                        [ "tree6", "dir001", "file001", ],
                        [ "tree6", "dir001", "file002", ],
                        [ "tree6", "dir001", "file003", ],
                        [ "tree6", "dir001", "file004", ],
                        [ "tree6", "dir001", ],
                        [ "tree6", "dir001", "link001", ],
                        [ "tree6", "dir001", "link002", ],
                        [ "tree6", "dir001", "link003", ],
                        [ "tree6", "dir002", "dir001", "dir001", ],
                        [ "tree6", "dir002", "dir001", "dir002", ],
                        [ "tree6", "dir002", "dir001", "dir003", ],
                        [ "tree6", "dir002", "dir001", "file001", ],
                        [ "tree6", "dir002", "dir001", "file002", ],
                        [ "tree6", "dir002", "dir001", "file003", ],
                        [ "tree6", "dir002", "dir001", "file004", ],
                        [ "tree6", "dir002", "dir001", "file005", ],
                        [ "tree6", "dir002", "dir001", "file006", ],
                        [ "tree6", "dir002", "dir001", "file007", ],
                        [ "tree6", "dir002", "dir001", "file008", ],
                        [ "tree6", "dir002", "dir001", "file009", ],
                        [ "tree6", "dir002", "dir001", ],
                        [ "tree6", "dir002", "dir001", "link001", ],
                        [ "tree6", "dir002", "dir001", "link002", ],
                        [ "tree6", "dir002", "dir001", "link003", ],
                        [ "tree6", "dir002", "dir001", "link004", ],
                        [ "tree6", "dir002", "dir001", "link005", ],
                        [ "tree6", "dir002", "dir002", "dir001", ],
                        [ "tree6", "dir002", "dir002", "dir002", ],
                        [ "tree6", "dir002", "dir002", "dir003", ],
                        [ "tree6", "dir002", "dir002", "file001", ],
                        [ "tree6", "dir002", "dir002", "file002", ],
                        [ "tree6", "dir002", "dir002", "file003", ],
                        [ "tree6", "dir002", "dir002", "file004", ],
                        [ "tree6", "dir002", "dir002", "file005", ],
                        [ "tree6", "dir002", "dir002", "file006", ],
                        [ "tree6", "dir002", "dir002", "file007", ],
                        [ "tree6", "dir002", "dir002", "file008", ],
                        [ "tree6", "dir002", "dir002", ],
                        [ "tree6", "dir002", "dir002", "link001", ],
                        [ "tree6", "dir002", "dir002", "link002", ],
                        [ "tree6", "dir002", "dir002", "link003", ],
                        [ "tree6", "dir002", "dir002", "link004", ],
                        [ "tree6", "dir002", "dir002", "link005", ],
                        [ "tree6", "dir002", "dir003", "dir001", ],
                        [ "tree6", "dir002", "dir003", "dir002", ],
                        [ "tree6", "dir002", "dir003", "file001", ],
                        [ "tree6", "dir002", "dir003", "file002", ],
                        [ "tree6", "dir002", "dir003", "file003", ],
                        [ "tree6", "dir002", "dir003", "file004", ],
                        [ "tree6", "dir002", "dir003", "file005", ],
                        [ "tree6", "dir002", "dir003", "file006", ],
                        [ "tree6", "dir002", "dir003", "file007", ],
                        [ "tree6", "dir002", "dir003", ],
                        [ "tree6", "dir002", "dir003", "link001", ],
                        [ "tree6", "dir002", "dir003", "link002", ],
                        [ "tree6", "dir002", "dir003", "link003", ],
                        [ "tree6", "dir002", "dir003", "link004", ],
                        [ "tree6", "dir002", "file001", ],
                        [ "tree6", "dir002", "file002", ],
                        [ "tree6", "dir002", "file003", ],
                        [ "tree6", "dir002", ],
                        [ "tree6", "dir002", "link001", ],
                        [ "tree6", "dir002", "link002", ],
                        [ "tree6", "dir002", "link003", ],
                        [ "tree6", "dir002", "link004", ],
                        [ "tree6", "dir002", "link005", ],
                        [ "tree6", "dir003", "dir001", "dir001", ],
                        [ "tree6", "dir003", "dir001", "dir002", ],
                        [ "tree6", "dir003", "dir001", "file001", ],
                        [ "tree6", "dir003", "dir001", "file002", ],
                        [ "tree6", "dir003", "dir001", "file003", ],
                        [ "tree6", "dir003", "dir001", "file004", ],
                        [ "tree6", "dir003", "dir001", "file005", ],
                        [ "tree6", "dir003", "dir001", "file006", ],
                        [ "tree6", "dir003", "dir001", "file007", ],
                        [ "tree6", "dir003", "dir001", "file008", ],
                        [ "tree6", "dir003", "dir001", "file009", ],
                        [ "tree6", "dir003", "dir001", ],
                        [ "tree6", "dir003", "dir001", "link001", ],
                        [ "tree6", "dir003", "dir001", "link002", ],
                        [ "tree6", "dir003", "dir002", "dir001", ],
                        [ "tree6", "dir003", "dir002", "dir002", ],
                        [ "tree6", "dir003", "dir002", "file001", ],
                        [ "tree6", "dir003", "dir002", "file002", ],
                        [ "tree6", "dir003", "dir002", "file003", ],
                        [ "tree6", "dir003", "dir002", "file004", ],
                        [ "tree6", "dir003", "dir002", "file005", ],
                        [ "tree6", "dir003", "dir002", ],
                        [ "tree6", "dir003", "dir002", "link001", ],
                        [ "tree6", "dir003", "dir002", "link002", ],
                        [ "tree6", "dir003", "dir002", "link003", ],
                        [ "tree6", "dir003", "dir002", "link004", ],
                        [ "tree6", "dir003", "file001", ],
                        [ "tree6", "dir003", "file002", ],
                        [ "tree6", "dir003", "file003", ],
                        [ "tree6", "dir003", "file004", ],
                        [ "tree6", "dir003", "file005", ],
                        [ "tree6", "dir003", "file006", ],
                        [ "tree6", "dir003", "file007", ],
                        [ "tree6", "dir003", "file008", ],
                        [ "tree6", "dir003", "file009", ],
                        [ "tree6", "dir003", "ignore", ],
                        [ "tree6", "dir003", ],
                        [ "tree6", "dir003", "link001", ],
                        [ "tree6", "dir003", "link002", ],
                        [ "tree6", "dir003", "link003", ],
                        [ "tree6", "dir003", "link004", ],
                        [ "tree6", "dir003", "link005", ],
                        [ "tree6", "file001", ],
                        [ "tree6", "file002", ],
                        [ "tree6", "link001", ],
                        [ "tree6", "link002", ], ]:
            self.failUnless(self.buildPath(parts) in purgeList)

   def testAddDirContents_059(self):
      """
      Attempt to add a large tree, with excludeFiles set.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludeFiles = True
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(27, count)
         self.failUnlessEqual(27, len(purgeList))
         for parts in [ [ "tree6", "dir001", ],
                        [ "tree6", "dir001", "dir001", ],
                        [ "tree6", "dir001", "dir001", "dir001", ],
                        [ "tree6", "dir001", "dir001", "dir002", ],
                        [ "tree6", "dir001", "dir001", "dir003", ],
                        [ "tree6", "dir001", "dir002", ],
                        [ "tree6", "dir001", "dir002", "dir001", ],
                        [ "tree6", "dir001", "dir002", "dir002", ],
                        [ "tree6", "dir002", ],
                        [ "tree6", "dir002", "dir001", ],
                        [ "tree6", "dir002", "dir001", "dir001", ],
                        [ "tree6", "dir002", "dir001", "dir002", ],
                        [ "tree6", "dir002", "dir001", "dir003", ],
                        [ "tree6", "dir002", "dir002", ],
                        [ "tree6", "dir002", "dir002", "dir001", ],
                        [ "tree6", "dir002", "dir002", "dir002", ],
                        [ "tree6", "dir002", "dir002", "dir003", ],
                        [ "tree6", "dir002", "dir003", ],
                        [ "tree6", "dir002", "dir003", "dir001", ],
                        [ "tree6", "dir002", "dir003", "dir002", ],
                        [ "tree6", "dir003", ],
                        [ "tree6", "dir003", "dir001", ],
                        [ "tree6", "dir003", "dir001", "dir001", ],
                        [ "tree6", "dir003", "dir001", "dir002", ],
                        [ "tree6", "dir003", "dir002", ],
                        [ "tree6", "dir003", "dir002", "dir001", ],
                        [ "tree6", "dir003", "dir002", "dir002", ], ]:
            self.failUnless(self.buildPath(parts) in purgeList)
      else:
         self.failUnlessEqual(41, count)
         self.failUnlessEqual(41, len(purgeList))
         for parts in [ [ "tree6", "dir001", "dir001", "dir001", ],
                        [ "tree6", "dir001", "dir001", "dir002", ],
                        [ "tree6", "dir001", "dir001", "dir003", ],
                        [ "tree6", "dir001", "dir001", ],
                        [ "tree6", "dir001", "dir001", "link001", ],
                        [ "tree6", "dir001", "dir002", "dir001", ],
                        [ "tree6", "dir001", "dir002", "dir002", ],
                        [ "tree6", "dir001", "dir002", ],
                        [ "tree6", "dir001", "dir002", "link002", ],
                        [ "tree6", "dir001", ],
                        [ "tree6", "dir001", "link001", ],
                        [ "tree6", "dir002", "dir001", "dir001", ],
                        [ "tree6", "dir002", "dir001", "dir002", ],
                        [ "tree6", "dir002", "dir001", "dir003", ],
                        [ "tree6", "dir002", "dir001", ],
                        [ "tree6", "dir002", "dir001", "link005", ],
                        [ "tree6", "dir002", "dir002", "dir001", ],
                        [ "tree6", "dir002", "dir002", "dir002", ],
                        [ "tree6", "dir002", "dir002", "dir003", ],
                        [ "tree6", "dir002", "dir002", ],
                        [ "tree6", "dir002", "dir002", "link003", ],
                        [ "tree6", "dir002", "dir002", "link004", ],
                        [ "tree6", "dir002", "dir003", "dir001", ],
                        [ "tree6", "dir002", "dir003", "dir002", ],
                        [ "tree6", "dir002", "dir003", ],
                        [ "tree6", "dir002", "dir003", "link003", ],
                        [ "tree6", "dir002", ],
                        [ "tree6", "dir002", "link002", ],
                        [ "tree6", "dir002", "link005", ],
                        [ "tree6", "dir003", "dir001", "dir001", ],
                        [ "tree6", "dir003", "dir001", "dir002", ],
                        [ "tree6", "dir003", "dir001", ],
                        [ "tree6", "dir003", "dir001", "link001", ],
                        [ "tree6", "dir003", "dir002", "dir001", ],
                        [ "tree6", "dir003", "dir002", "dir002", ],
                        [ "tree6", "dir003", "dir002", ],
                        [ "tree6", "dir003", "dir002", "link003", ],
                        [ "tree6", "dir003", "link001", ],
                        [ "tree6", "dir003", "link004", ],
                        [ "tree6", "dir003", ],
                        [ "tree6", "link002", ], ]:
            self.failUnless(self.buildPath(parts) in purgeList)

   def testAddDirContents_060(self):
      """
      Attempt to add a large tree, with excludeDirs set.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludeDirs = True
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(94, count)
      self.failUnlessEqual(94, len(purgeList))
      for parts in [ [ "tree6", "dir001", "dir001", "file001", ],
                     [ "tree6", "dir001", "dir001", "file002", ],
                     [ "tree6", "dir001", "dir001", "file003", ],
                     [ "tree6", "dir001", "dir001", "file004", ],
                     [ "tree6", "dir001", "dir001", "file005", ],
                     [ "tree6", "dir001", "dir001", "file006", ],
                     [ "tree6", "dir001", "dir001", "file007", ],
                     [ "tree6", "dir001", "dir001", "ignore", ],
                     [ "tree6", "dir001", "dir001", "link002", ],
                     [ "tree6", "dir001", "dir001", "link003", ],
                     [ "tree6", "dir001", "dir002", "file001", ],
                     [ "tree6", "dir001", "dir002", "file002", ],
                     [ "tree6", "dir001", "dir002", "file003", ],
                     [ "tree6", "dir001", "dir002", "link001", ],
                     [ "tree6", "dir001", "file001", ],
                     [ "tree6", "dir001", "file002", ],
                     [ "tree6", "dir001", "file003", ],
                     [ "tree6", "dir001", "file004", ], ]:
         self.failUnless(self.buildPath(parts) in purgeList)
self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) def testAddDirContents_061(self): """ Attempt to add a large tree, with excludeLinks set. """ if platformSupportsLinks(): self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeLinks = True count = purgeList.addDirContents(path) self.failUnlessEqual(95, count) self.failUnlessEqual(95, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", 
"file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) 
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)

   def testAddDirContents_062(self):
      """
      Attempt to add a large tree, with excludePaths set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludePaths = [ self.buildPath([ "tree6", "dir001", "dir002", ]),
                                 self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]),
                                 self.buildPath([ "tree6", "dir003", "dir002", "file001", ]),
                                 self.buildPath([ "tree6", "dir003", "dir002", "file002", ]), ]
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(111, count)
         self.failUnlessEqual(111, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(124, count)
         self.failUnlessEqual(124, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_063(self):
      """
      Attempt to add a large tree, with excludePatterns set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      if platformWindows():
         purgeList.excludePatterns = [ ".*file001.*", r".*tree6\\dir002\\dir001.*" ]
      else:
         purgeList.excludePatterns = [ ".*file001.*", r".*tree6\/dir002\/dir001.*" ]
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(94, count)
         self.failUnlessEqual(94, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(107, count)
         self.failUnlessEqual(107, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "dir001",
"file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) 
         self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_064(self):
      """
      Attempt to add a large tree, with ignoreFile set to exclude some directories.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(69, count)
         self.failUnlessEqual(69, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(78, count)
         self.failUnlessEqual(78, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList)

   def testAddDirContents_065(self):
      """
      Attempt to add a link to a
      file.
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9", "dir002", "link003", ])
         purgeList = PurgeItemList()
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_066(self):
      """
      Attempt to add a link to a directory (which should add its contents).
      """
      if platformSupportsLinks():
         self.extractTar("tree9")
         path = self.buildPath(["tree9", "link002" ])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(8, count)
         self.failUnlessEqual(8, len(purgeList))
         self.failUnless(self.buildPath([ "tree9", "link002", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "link002", "link004", ]) in purgeList)

   def testAddDirContents_067(self):
      """
      Attempt to add an invalid link (i.e. a link that points to something that doesn't exist).
      """
      if platformSupportsLinks():
         self.extractTar("tree10")
         path = self.buildPath(["tree10", "link001"])
         purgeList = PurgeItemList()
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_068(self):
      """
      Attempt to add a directory containing an invalid link (i.e. a link that points to
      something that doesn't exist).
      """
      if platformSupportsLinks():
         self.extractTar("tree10")
         path = self.buildPath(["tree10"])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(2, count)
         self.failUnlessEqual(2, len(purgeList))
         self.failUnless(self.buildPath([ "tree10", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree10", "dir002", ]) in purgeList)

   def testAddDirContents_069(self):
      """
      Attempt to add a directory containing items with spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(13, count)
         self.failUnlessEqual(13, len(purgeList))
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in purgeList)
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(purgeList))
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)

   def testAddDirContents_070(self):
      """
      Attempt to add a directory which has a name containing spaces.
      """
      self.extractTar("tree11")
      path = self.buildPath(["tree11", "dir with spaces", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(4, count)
      self.failUnlessEqual(4, len(purgeList))
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in purgeList)

   def testAddDirContents_071(self):
      """
      Attempt to add a directory which has a UTF-8 filename in it.
      """
      self.extractTar("tree12")
      path = self.buildPath(["tree12", "unicode", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(5, count)
      self.failUnlessEqual(5, len(purgeList))
      self.failUnless(self.buildPath([ "tree12", "unicode", "README.strange-name", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.long.gz", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.cp437.gz", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "utflist.short.gz", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree12", "unicode", "\xe2\x99\xaa\xe2\x99\xac", ]) in purgeList)

   def testAddDirContents_072(self):
      """
      Attempt to add a directory which has several UTF-8 filenames in it.

      This test data was taken from Rick Lowe's problems around the release of v1.10.
      I don't run the test on Darwin (Mac OS X) with a UTF-8 filesystem encoding,
      because the tarball isn't valid there.
      """
      if not (platformMacOsX() and sys.getfilesystemencoding() == "utf-8"):
         self.extractTar("tree13")
         path = self.buildPath(["tree13", ])
         purgeList = PurgeItemList()
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(10, count)
         self.failUnlessEqual(10, len(purgeList))
         self.failUnless(self.buildPath([ "tree13", "Les mouvements de r\x82forme.doc", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l'\x82nonc\x82.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard - renvois et bibliographie.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard copie finale.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci - page titre.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "l\x82onard de vinci.sxw", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "Rammstein - B\x81ck Dich.mp3", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "megaherz - Glas Und Tr\x84nen.mp3", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "Megaherz - Mistst\x81ck.MP3", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree13", "Rammstein - Mutter - B\x94se.mp3", ]) in purgeList)

   def testAddDirContents_073(self):
      """
      Attempt to add a directory that doesn't exist; with excludeBasenamePatterns matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ INVALID_FILE ]
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_074(self):
      """
      Attempt to add a file; with excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "file001", ]
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_075(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns matching the path.
      """
      if platformSupportsLinks():
         self.extractTar("tree5")
         path = self.buildPath(["tree5", "link001"])     # link to a file
         purgeList = PurgeItemList()
         purgeList.excludeBasenamePatterns = [ "link001", ]
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)
         path = self.buildPath(["tree5", "dir002", "link001"])     # link to a dir
         purgeList = PurgeItemList()
         purgeList.excludeBasenamePatterns = [ "link001", ]
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual([], purgeList)

   def testAddDirContents_076(self):
      """
      Attempt to add an empty directory containing an ignore file; with
      excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ "dir001", ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_077(self):
      """
      Attempt to add an empty directory; with excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "dir001", ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_078(self):
      """
      Attempt to add a non-empty directory containing an ignore file; with
      excludeBasenamePatterns matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ "dir008", ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_079(self):
      """
      Attempt to add a non-empty directory; with excludeBasenamePatterns matching
      the main directory path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "dir001", ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_080(self):
      """
      Attempt to add a directory that doesn't exist; with excludeBasenamePatterns
      not matching the path.
      """
      path = self.buildPath([INVALID_FILE])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_081(self):
      """
      Attempt to add a file; with excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "file001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      self.failUnlessRaises(ValueError, purgeList.addDirContents, path)

   def testAddDirContents_082(self):
      """
      Attempt to add a soft link; with excludeBasenamePatterns not matching the path.
      """
      if platformSupportsLinks():
         self.extractTar("tree5")
         path = self.buildPath(["tree5", "link001"])     # link to a file
         purgeList = PurgeItemList()
         purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
         self.failUnlessRaises(ValueError, purgeList.addDirContents, path)
         path = self.buildPath(["tree5", "dir002", "link001"])     # link to a dir
         purgeList = PurgeItemList()
         purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
         count = purgeList.addDirContents(path)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual([], purgeList)

   def testAddDirContents_083(self):
      """
      Attempt to add an empty directory containing an ignore file; with
      excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree7")
      path = self.buildPath(["tree7", "dir001"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_084(self):
      """
      Attempt to add an empty directory; with excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree8")
      path = self.buildPath(["tree8", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_085(self):
      """
      Attempt to add a non-empty directory containing an ignore file; with
      excludeBasenamePatterns not matching the path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir008"])
      purgeList = PurgeItemList()
      purgeList.ignoreFile = "ignore"
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testAddDirContents_086(self):
      """
      Attempt to add a non-empty directory; with excludeBasenamePatterns not matching
      the main directory path.
      """
      self.extractTar("tree5")
      path = self.buildPath(["tree5", "dir001"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ NOMATCH_BASENAME ]
      count = purgeList.addDirContents(path)
      self.failUnlessEqual(7, count)
      self.failUnlessEqual(7, len(purgeList))
      self.failUnless(self.buildPath(["tree5", "dir001", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir002", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir003", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "dir004", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "file001", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "file002", ]) in purgeList)
      self.failUnless(self.buildPath(["tree5", "dir001", "link001", ]) in purgeList)

   def testAddDirContents_087(self):
      """
      Attempt to add a large tree, with excludeBasenamePatterns set to exclude some entries.
      """
      self.extractTar("tree6")
      path = self.buildPath(["tree6"])
      purgeList = PurgeItemList()
      purgeList.excludeBasenamePatterns = [ "file001", "dir001", ]
      count = purgeList.addDirContents(path)
      if not platformSupportsLinks():
         self.failUnlessEqual(54, count)
         self.failUnlessEqual(54, len(purgeList))
         self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList)
         self.failUnless(self.buildPath([
"tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) 
in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(63, count) self.failUnlessEqual(63, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", 
"link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_088(self): """ Attempt to add a large tree, with excludeBasenamePatterns set to exclude some entries. """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() purgeList.excludeBasenamePatterns = [ "file001", "dir001" ] count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(54, count) self.failUnlessEqual(54, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) 
in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(63, count) self.failUnlessEqual(63, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", 
"ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_089(self): """ Attempt to add a large tree with no exclusions """ self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", 
"file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(135, count) self.failUnlessEqual(135, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) 
in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", 
"dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", ]) in purgeList) def testAddDirContents_090(self): """ Attempt to add a directory with linkDepth=1. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=1) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", 
"file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", 
]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(164, count) self.failUnlessEqual(164, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "link002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) def testAddDirContents_091(self): """ Attempt to add a directory with linkDepth=2. 
""" self.extractTar("tree6") path = self.buildPath(["tree6"]) purgeList = PurgeItemList() count = purgeList.addDirContents(path, linkDepth=2) if not platformSupportsLinks(): self.failUnlessEqual(121, count) self.failUnlessEqual(121, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", 
"dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", 
"file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", 
]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList) else: self.failUnlessEqual(240, count) self.failUnlessEqual(240, len(purgeList)) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", 
"dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", 
"file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir001", "link001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link002", ]) in purgeList) 
self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir001", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ 
"tree6", "dir002", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "dir003", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"link002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link002", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", 
"link005", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir002", "link005", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file003", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "dir002", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file001", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link001", "link004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file008", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "file009", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "dir003", "link004", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir001", ]) in 
purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "dir003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file004", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file005", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file006", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "file007", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "ignore", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir001", "link003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "file003", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree6", "link002", "dir002", "link002", ]) in purgeList) 
      self.failUnless(self.buildPath([ "tree6", "link002", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "file004", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "dir003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file004", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file005", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file006", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "file007", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "ignore", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link002", "link001", "link003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree6", "link001", ]) in purgeList)

   def testAddDirContents_092(self):
      """
      Attempt to add a directory with linkDepth=0, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=0, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList)

   def testAddDirContents_093(self):
      """
      Attempt to add a directory with linkDepth=1, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=1, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(15, count)
         self.failUnlessEqual(15, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", ]) in purgeList)

   def testAddDirContents_094(self):
      """
      Attempt to add a directory with linkDepth=2, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=2, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(19, count)
         self.failUnlessEqual(19, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in purgeList)

   def testAddDirContents_095(self):
      """
      Attempt to add a directory with linkDepth=3, dereference=False.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=3, dereference=False)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(19, count)
         self.failUnlessEqual(19, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", "link002", "link002", ]) in purgeList)

   def testAddDirContents_096(self):
      """
      Attempt to add a directory with linkDepth=0, dereference=True.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=0, dereference=True)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(11, count)
         self.failUnlessEqual(11, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList)

   def testAddDirContents_097(self):
      """
      Attempt to add a directory with linkDepth=1, dereference=True.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=1, dereference=True)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(19, count)
         self.failUnlessEqual(19, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005" ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList)

   def testAddDirContents_098(self):
      """
      Attempt to add a directory with linkDepth=2, dereference=True.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=2, dereference=True)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(31, count)
         self.failUnlessEqual(31, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in purgeList)

   def testAddDirContents_099(self):
      """
      Attempt to add a directory with linkDepth=3, dereference=True.
      """
      self.extractTar("tree22")
      path = self.buildPath(["tree22", "dir003", ])
      purgeList = PurgeItemList()
      count = purgeList.addDirContents(path, linkDepth=3, dereference=True)
      if not platformSupportsLinks():
         pass
      else:
         self.failUnlessEqual(34, count)
         self.failUnlessEqual(34, len(purgeList))
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "dir001", "link004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir003", "link003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir001", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir002", "file004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir002", "file005", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir002", "file009", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir004", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "file003", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir005", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", "link001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir006", "link002", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir007", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir007", "file001", ]) in purgeList)
         self.failUnless(self.buildPath(["tree22", "dir008", "file001", ]) in purgeList)

   ############################
   # Test removeYoungFiles()
   ############################

   def testRemoveYoungFiles_001(self):
      """
      Test on an empty list, daysOld < 0.
      """
      daysOld = -1
      purgeList = PurgeItemList()
      self.failUnlessRaises(ValueError, purgeList.removeYoungFiles, daysOld)

   def testRemoveYoungFiles_002(self):
      """
      Test on a non-empty list, daysOld < 0.
      """
      daysOld = -1
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addDir(self.buildPath([ "tree1", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      self.failUnlessRaises(ValueError, purgeList.removeYoungFiles, daysOld)

   def testRemoveYoungFiles_003(self):
      """
      Test on an empty list, daysOld = 0.
      """
      daysOld = 0
      purgeList = PurgeItemList()
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testRemoveYoungFiles_004(self):
      """
      Test on a non-empty list containing only directories, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree2")
      purgeList = PurgeItemList()
      purgeList.addDir(self.buildPath([ "tree2", ]))
      purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
      purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(3, len(purgeList))
      self.failUnless(self.buildPath([ "tree2", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList)

   def testRemoveYoungFiles_005(self):
      """
      Test on a non-empty list containing only links, daysOld = 0.
      """
      if platformSupportsLinks():
         daysOld = 0
         self.extractTar("tree9")
         purgeList = PurgeItemList()
         purgeList.addDir(self.buildPath([ "tree9", "link001", ]))
         purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ]))
         purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ]))
         count = purgeList.removeYoungFiles(daysOld)
         self.failUnlessEqual(0, count)
         self.failUnlessEqual(3, len(purgeList))
         self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList)
         self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList)

   def testRemoveYoungFiles_006(self):
      """
      Test on a non-empty list containing only non-existent files, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.append(self.buildPath([ "tree1", "stuff001", ]))  # append, since it doesn't exist on disk
      purgeList.append(self.buildPath([ "tree1", "stuff002", ]))  # append, since it doesn't exist on disk
      purgeList.append(self.buildPath([ "tree1", "stuff003", ]))  # append, since it doesn't exist on disk
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(3, len(purgeList))
      self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList)

   def testRemoveYoungFiles_007(self):
      """
      Test on a non-empty list containing existing files "touched" to current
      time, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]))
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]))
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_008(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      1 hour old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_009(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      2 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_010(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      12 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_011(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      23 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_012(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      24 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_013(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      25 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_014(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      47 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_015(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      48 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_016(self):
      """
      Test on a non-empty list containing existing files "touched" to being
      49 hours old, daysOld = 0.
      """
      daysOld = 0
      self.extractTar("tree1")
      purgeList = PurgeItemList()
      purgeList.addFile(self.buildPath([ "tree1", "file001", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file002", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file003", ]))
      purgeList.addFile(self.buildPath([ "tree1", "file004", ]))
      changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS)
      changeFileAge(self.buildPath([ "tree1", "file002", ]))
      changeFileAge(self.buildPath([ "tree1", "file003", ]))
      changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS)
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file002", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file003", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList)

   def testRemoveYoungFiles_017(self):
      """
      Test on an empty list, daysOld = 1.
      """
      daysOld = 1
      purgeList = PurgeItemList()
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual([], purgeList)

   def testRemoveYoungFiles_018(self):
      """
      Test on a non-empty list containing only directories, daysOld = 1.
      """
      daysOld = 1
      self.extractTar("tree2")
      purgeList = PurgeItemList()
      purgeList.addDir(self.buildPath([ "tree2", ]))
      purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
      purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
      count = purgeList.removeYoungFiles(daysOld)
      self.failUnlessEqual(0, count)
      self.failUnlessEqual(3, len(purgeList))
      self.failUnless(self.buildPath([ "tree2", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList)
      self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList)

   def testRemoveYoungFiles_019(self):
      """
      Test on a non-empty list containing only links, daysOld = 1.
""" if platformSupportsLinks(): daysOld = 1 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_020(self): """ Test on a non-empty list containing only non-existent files, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_021(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_022(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_023(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_024(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_025(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_026(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnlessEqual(2, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_027(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_028(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_029(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 1. 
""" daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_030(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 1. """ daysOld = 1 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_031(self): """ Test on an empty list, daysOld = 2 """ daysOld = 2 purgeList = PurgeItemList() count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_032(self): """ Test on a non-empty list containing only directories, 
        daysOld = 2.
        """
        daysOld = 2
        self.extractTar("tree2")
        purgeList = PurgeItemList()
        purgeList.addDir(self.buildPath([ "tree2", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir001", ]))
        purgeList.addDir(self.buildPath([ "tree2", "dir002", ]))
        count = purgeList.removeYoungFiles(daysOld)
        self.failUnlessEqual(0, count)
        self.failUnlessEqual(3, len(purgeList))
        self.failUnless(self.buildPath([ "tree2", ]) in purgeList)
        self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList)
        self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList)

    def testRemoveYoungFiles_033(self):
        """
        Test on a non-empty list containing only links, daysOld = 2.
        """
        if platformSupportsLinks():
            daysOld = 2
            self.extractTar("tree9")
            purgeList = PurgeItemList()
            purgeList.addDir(self.buildPath([ "tree9", "link001", ]))
            purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ]))
            purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ]))
            count = purgeList.removeYoungFiles(daysOld)
            self.failUnlessEqual(0, count)
            self.failUnlessEqual(3, len(purgeList))
            self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList)
            self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList)

    def testRemoveYoungFiles_034(self):
        """
        Test on a non-empty list containing only non-existent files,
        daysOld = 2.
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_035(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_036(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_037(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_038(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_039(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_040(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_041(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_042(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_043(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 2. """ daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_044(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 2. 
""" daysOld = 2 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(2, count) self.failUnless(self.buildPath([ "tree1", "file001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in purgeList) def testRemoveYoungFiles_045(self): """ Test on an empty list, daysOld = 3 """ daysOld = 3 purgeList = PurgeItemList() count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_046(self): """ Test on a non-empty list containing only directories, daysOld = 3. """ daysOld = 3 self.extractTar("tree2") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", ])) purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree2", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in purgeList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in purgeList) def testRemoveYoungFiles_047(self): """ Test on a non-empty list containing only links, daysOld = 3. 
""" if platformSupportsLinks(): daysOld = 3 self.extractTar("tree9") purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "link001", ])) purgeList.addFile(self.buildPath([ "tree9", "dir002", "link004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree9", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in purgeList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in purgeList) def testRemoveYoungFiles_048(self): """ Test on a non-empty list containing only non-existent files, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.append(self.buildPath([ "tree1", "stuff001", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff002", ])) # append, since it doesn't exist on disk purgeList.append(self.buildPath([ "tree1", "stuff003", ])) # append, since it doesn't exist on disk count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(0, count) self.failUnlessEqual(3, len(purgeList)) self.failUnless(self.buildPath([ "tree1", "stuff001", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff002", ]) in purgeList) self.failUnless(self.buildPath([ "tree1", "stuff003", ]) in purgeList) def testRemoveYoungFiles_049(self): """ Test on a non-empty list containing existing files "touched" to current time, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ])) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ])) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_050(self): """ Test on a non-empty list containing existing files "touched" to being 1 hour old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_1_HOUR) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_1_HOUR) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_051(self): """ Test on a non-empty list containing existing files "touched" to being 2 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_2_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_2_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_052(self): """ Test on a non-empty list containing existing files "touched" to being 12 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_12_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_12_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_053(self): """ Test on a non-empty list containing existing files "touched" to being 23 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_23_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_23_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_054(self): """ Test on a non-empty list containing existing files "touched" to being 24 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_24_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_24_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_055(self): """ Test on a non-empty list containing existing files "touched" to being 25 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_25_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_25_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_056(self): """ Test on a non-empty list containing existing files "touched" to being 47 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_47_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_47_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_057(self): """ Test on a non-empty list containing existing files "touched" to being 48 hours old, daysOld = 3. 
""" daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_48_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_48_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) def testRemoveYoungFiles_058(self): """ Test on a non-empty list containing existing files "touched" to being 49 hours old, daysOld = 3. """ daysOld = 3 self.extractTar("tree1") purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) changeFileAge(self.buildPath([ "tree1", "file001", ]), AGE_49_HOURS) changeFileAge(self.buildPath([ "tree1", "file002", ])) changeFileAge(self.buildPath([ "tree1", "file003", ])) changeFileAge(self.buildPath([ "tree1", "file004", ]), AGE_49_HOURS) count = purgeList.removeYoungFiles(daysOld) self.failUnlessEqual(4, count) self.failUnlessEqual([], purgeList) #################### # Test purgeItems() #################### def testPurgeItems_001(self): """ Test with an empty list. """ purgeList = PurgeItemList() (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(0, dirs) def testPurgeItems_002(self): """ Test with a list containing only non-empty directories. 
""" self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", ])) purgeList.addDir(self.buildPath([ "tree9", "dir001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(17, count) self.failUnlessEqual(17, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) 
in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) else: self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in 
fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree9", ])) purgeList.addDir(self.buildPath([ "tree9", "dir001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ 
"tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testPurgeItems_003(self): """ Test with a list containing only empty directories. """ self.extractTar("tree2") path = self.buildPath(["tree2"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(11, count) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath([ "tree2", "dir010", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree2", "dir001", ])) purgeList.addDir(self.buildPath([ "tree2", "dir002", ])) 
purgeList.addDir(self.buildPath([ "tree2", "dir003", ])) purgeList.addDir(self.buildPath([ "tree2", "dir004", ])) purgeList.addDir(self.buildPath([ "tree2", "dir005", ])) purgeList.addDir(self.buildPath([ "tree2", "dir006", ])) purgeList.addDir(self.buildPath([ "tree2", "dir007", ])) purgeList.addDir(self.buildPath([ "tree2", "dir008", ])) purgeList.addDir(self.buildPath([ "tree2", "dir009", ])) purgeList.addDir(self.buildPath([ "tree2", "dir010", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(0, files) self.failUnlessEqual(10, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree2", ]) in fsList) def testPurgeItems_004(self): """ Test with a list containing only files. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.addFile(self.buildPath([ "tree1", "file005", ])) purgeList.addFile(self.buildPath([ "tree1", "file006", ])) purgeList.addFile(self.buildPath([ "tree1", "file007", ])) (files, dirs) = 
purgeList.purgeItems() self.failUnlessEqual(7, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(1, count) self.failUnlessEqual(1, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) def testPurgeItems_005(self): """ Test with a list containing a directory and some of the files in that directory. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(4, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(4, count) self.failUnlessEqual(4, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testPurgeItems_006(self): """ Test with a list containing a directory and all of the files in 
that directory. """ self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.addFile(self.buildPath([ "tree1", "file005", ])) purgeList.addFile(self.buildPath([ "tree1", "file006", ])) purgeList.addFile(self.buildPath([ "tree1", "file007", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(7, files) self.failUnlessEqual(1, dirs) self.failUnlessRaises(ValueError, fsList.addDirContents, path) self.failUnless(not os.path.exists(path)) def testPurgeItems_007(self): """ Test with a list containing various kinds of entries, including links, files and directories. Make sure that removing a link doesn't remove the file the link points toward. 
""" if platformSupportsLinks(): self.extractTar("tree9") path = self.buildPath(["tree9"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(22, count) self.failUnlessEqual(22, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree9", "dir001", "link001", ])) purgeList.addDir(self.buildPath([ "tree9", "dir002", "dir001", ])) 
purgeList.addFile(self.buildPath([ "tree9", "file001", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(2, files) self.failUnlessEqual(1, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(18, count) self.failUnlessEqual(18, len(fsList)) self.failUnless(self.buildPath([ "tree9", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir001", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "file002", ]) in fsList) self.failUnless(os.path.islink(self.buildPath([ "tree9", "dir002", "link001", ]))) # won't be included in list, though self.failUnless(self.buildPath([ "tree9", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree9", "link002", ]) in fsList) def testPurgeItems_008(self): """ Test with a list containing non-existent entries. 
""" self.extractTar("tree1") path = self.buildPath(["tree1"]) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(8, count) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file004", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) purgeList = PurgeItemList() purgeList.addDir(self.buildPath([ "tree1", ])) purgeList.addFile(self.buildPath([ "tree1", "file001", ])) purgeList.addFile(self.buildPath([ "tree1", "file002", ])) purgeList.addFile(self.buildPath([ "tree1", "file003", ])) purgeList.addFile(self.buildPath([ "tree1", "file004", ])) purgeList.append(self.buildPath([ "tree1", INVALID_FILE, ])) # bypass validations (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(4, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(4, count) self.failUnlessEqual(4, len(fsList)) self.failUnless(self.buildPath([ "tree1", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file005", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file006", ]) in fsList) self.failUnless(self.buildPath([ "tree1", "file007", ]) in fsList) def testPurgeItems_009(self): """ Test with a list containing entries containing spaces. 
""" self.extractTar("tree11") path = self.buildPath(["tree11"]) fsList = FilesystemList() count = fsList.addDirContents(path) if not platformSupportsLinks(): self.failUnlessEqual(14, count) self.failUnlessEqual(14, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) purgeList = PurgeItemList() purgeList.addFile(self.buildPath([ "tree11", "file with spaces", ])) purgeList.addFile(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ])) (files, dirs) = purgeList.purgeItems() self.failUnlessEqual(2, files) self.failUnlessEqual(0, dirs) fsList = FilesystemList() count = fsList.addDirContents(path) self.failUnlessEqual(12, count) self.failUnlessEqual(12, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", 
"dir with spaces", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) else: self.failUnlessEqual(16, count) self.failUnlessEqual(16, len(fsList)) self.failUnless(self.buildPath([ "tree11", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) in fsList) self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList) purgeList = PurgeItemList() 
         purgeList.addFile(self.buildPath([ "tree11", "file with spaces", ]))
         purgeList.addFile(self.buildPath([ "tree11", "dir with spaces", "file with spaces", ]))
         (files, dirs) = purgeList.purgeItems()
         self.failUnlessEqual(2, files)
         self.failUnlessEqual(0, dirs)
         fsList = FilesystemList()
         count = fsList.addDirContents(path)
         self.failUnlessEqual(12, count)
         self.failUnlessEqual(12, len(fsList))
         self.failUnless(self.buildPath([ "tree11", "link with spaces", ]) not in fsList)            # file it points to was removed
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link002", ]) not in fsList)  # file it points to was removed
         self.failUnless(self.buildPath([ "tree11", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "link003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file002", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir002", "file003", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "file001", ]) in fsList)
         self.failUnless(self.buildPath([ "tree11", "dir with spaces", "link with spaces", ]) in fsList)


######################
# TestFunctions class
######################

class TestFunctions(unittest.TestCase):

   """Tests for the various public functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      try:
         self.tmpdir = tempfile.mkdtemp()
         self.resources = findResources(RESOURCES, DATA_DIRS)
      except Exception, e:
         self.fail(e)

   def tearDown(self):
      try:
         removedir(self.tmpdir)
      except: pass

   ##################
   # Utility methods
   ##################

   def extractTar(self, 
tarname, within=None):
      """Extracts a tarfile with a particular name."""
      if within is None:
         extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname])
      else:
         path = os.path.join(self.tmpdir, within)
         os.mkdir(path)
         extractTar(path, self.resources['%s.tar.gz' % tarname])

   def buildPath(self, components):
      """Builds a complete search path from a list of components."""
      components.insert(0, self.tmpdir)
      return buildPath(components)

   #########################
   # Test compareContents()
   #########################

   def testCompareContents_001(self):
      """
      Compare two empty directories.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree2", within="path2")
      path1 = self.buildPath(["path1", "tree2", "dir001", ])
      path2 = self.buildPath(["path2", "tree2", "dir002", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_002(self):
      """
      Compare one empty and one non-empty directory containing only directories.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree2", within="path2")
      path1 = self.buildPath(["path1", "tree2", "dir001", ])
      path2 = self.buildPath(["path2", "tree2", ])
      compareContents(path1, path2)
      compareContents(path1, path2, verbose=True)

   def testCompareContents_003(self):
      """
      Compare one empty and one non-empty directory containing only files.
      """
      self.extractTar("tree2", within="path1")
      self.extractTar("tree1", within="path2")
      path1 = self.buildPath(["path1", "tree2", "dir001", ])
      path2 = self.buildPath(["path2", "tree1", ])
      self.failUnlessRaises(ValueError, compareContents, path1, path2)
      self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True)

   def testCompareContents_004(self):
      """
      Compare two directories containing only directories, same.
""" self.extractTar("tree2", within="path1") self.extractTar("tree2", within="path2") path1 = self.buildPath(["path1", "tree2", ]) path2 = self.buildPath(["path2", "tree2", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_005(self): """ Compare two directories containing only directories, different set. """ self.extractTar("tree2", within="path1") self.extractTar("tree3", within="path2") path1 = self.buildPath(["path1", "tree2", ]) path2 = self.buildPath(["path2", "tree3", ]) compareContents(path1, path2) # no error, since directories don't count compareContents(path1, path2, verbose=True) # no error, since directories don't count def testCompareContents_006(self): """ Compare two directories containing only files, same. """ self.extractTar("tree1", within="path1") self.extractTar("tree1", within="path2") path1 = self.buildPath(["path1", "tree1", ]) path2 = self.buildPath(["path2", "tree1", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_007(self): """ Compare two directories containing only files, different contents. """ self.extractTar("tree1", within="path1") self.extractTar("tree1", within="path2") path1 = self.buildPath(["path1", "tree1", ]) path2 = self.buildPath(["path2", "tree1", ]) open(self.buildPath(["path1", "tree1", "file004", ]), "a").write("BOGUS") # change content self.failUnlessRaises(ValueError, compareContents, path1, path2) self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_008(self): """ Compare two directories containing only files, different set. 
""" self.extractTar("tree1", within="path1") self.extractTar("tree7", within="path2") path1 = self.buildPath(["path1", "tree1", ]) path2 = self.buildPath(["path2", "tree7", "dir001", ]) self.failUnlessRaises(ValueError, compareContents, path1, path2) self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_009(self): """ Compare two directories containing files and directories, same. """ self.extractTar("tree9", within="path1") self.extractTar("tree9", within="path2") path1 = self.buildPath(["path1", "tree9", ]) path2 = self.buildPath(["path2", "tree9", ]) compareContents(path1, path2) compareContents(path1, path2, verbose=True) def testCompareContents_010(self): """ Compare two directories containing files and directories, different contents. """ self.extractTar("tree9", within="path1") self.extractTar("tree9", within="path2") path1 = self.buildPath(["path1", "tree9", ]) path2 = self.buildPath(["path2", "tree9", ]) open(self.buildPath(["path2", "tree9", "dir001", "file002", ]), "a").write("whoops") # change content self.failUnlessRaises(ValueError, compareContents, path1, path2) self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True) def testCompareContents_011(self): """ Compare two directories containing files and directories, different set. 
""" self.extractTar("tree9", within="path1") self.extractTar("tree6", within="path2") path1 = self.buildPath(["path1", "tree9", ]) path2 = self.buildPath(["path2", "tree6", ]) self.failUnlessRaises(ValueError, compareContents, path1, path2) self.failUnlessRaises(ValueError, compareContents, path1, path2, verbose=True) ####################################################################### # Suite definition ####################################################################### # pylint: disable=C0330 def suite(): """Returns a suite containing all the test cases in this module.""" return unittest.TestSuite(( unittest.makeSuite(TestFilesystemList, 'test'), unittest.makeSuite(TestBackupFileList, 'test'), unittest.makeSuite(TestPurgeItemList, 'test'), unittest.makeSuite(TestFunctions, 'test'), )) ######################################################################## # Module entry point ######################################################################## # When this module is executed from the command-line, run its tests if __name__ == '__main__': unittest.main() CedarBackup2-2.26.5/testcase/knapsacktests.py0000664000175000017500000027752212560016766022651 0ustar pronovicpronovic00000000000000#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # vim: set ft=python ts=3 sw=3 expandtab: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # C E D A R # S O L U T I O N S "Software done right." # S O F T W A R E # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Copyright (c) 2004-2005,2010 Kenneth J. Pronovici. # All rights reserved. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License, # Version 2, as published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests knapsack functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/knapsack.py.

Code Coverage
=============

   This module contains individual tests for each of the public functions
   implemented in knapsack.py: C{firstFit()}, C{bestFit()}, C{worstFit()}
   and C{alternateFit()}.

   Note that the tests for each function are pretty much identical, so there
   is a lot of code duplication.  In production code, I would argue that this
   implies some refactoring is needed.  In here, however, I prefer having
   lots of individual test cases even if there is duplication, because I
   think this makes it easier to judge the extent of a problem when one
   exists.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Full vs. Reduced Tests
======================

   All of the tests in this module are considered safe to be run in an
   average build environment.
   There is no need to use a KNAPSACKTESTS_FULL environment variable to
   provide a "reduced feature set" test suite, as for some of the other test
   modules.

@author Kenneth J. Pronovici
"""


########################################################################
# Import modules and do runtime validations
########################################################################

# Import standard modules
import unittest

from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit


#######################################################################
# Module-wide configuration and constants
#######################################################################

# These all have random letters for keys because the original data had a,b,c,d,
# etc. in ascending order, which actually masked a sorting bug in the
# implementation.

ITEMS_01 = { }
ITEMS_02 = { "z" : 0, "^" : 0, "3" : 0, "(" : 0, "[" : 0, "/" : 0, "a" : 0, "r" : 0, }
ITEMS_03 = { "k" : 0, "*" : 1, "u" : 10, "$" : 100, "h" : 1000, "?" : 10000, "b" : 100000, "s" : 1000000, }
ITEMS_04 = { "l" : 1000000, "G" : 100000, "h" : 10000, "#" : 1000, "a" : 100, "'" : 10, "c" : 1, "t" : 0, }
ITEMS_05 = { "n" : 1, "N" : 1, "z" : 1, "@" : 1, "c" : 1, "h" : 1, "d" : 1, "u" : 1, }
ITEMS_06 = { "o" : 10, "b" : 10, "G" : 10, "+" : 10, "B" : 10, "O" : 10, "e" : 10, "v" : 10, }
ITEMS_07 = { "$" : 100, "K" : 100, "f" : 100, "=" : 100, "n" : 100, "I" : 100, "F" : 100, "w" : 100, }
ITEMS_08 = { "y" : 1000, "C" : 1000, "s" : 1000, "f" : 1000, "a" : 1000, "U" : 1000, "g" : 1000, "x" : 1000, }
ITEMS_09 = { "7" : 10000, "d" : 10000, "f" : 10000, "g" : 10000, "t" : 10000, "l" : 10000, "h" : 10000, "y" : 10000, }
ITEMS_10 = { "5" : 100000, "#" : 100000, "l" : 100000, "t" : 100000, "6" : 100000, "T" : 100000, "i" : 100000, "z" : 100000, }
ITEMS_11 = { "t" : 1, "d" : 1, "k" : 100000, "l" : 100000, "7" : 100000, "G" : 100000, "j" : 1, "1" : 1, }
ITEMS_12 = { "a" : 10, "e" : 10, "M" : 100000, "u" : 100000, "y" : 100000, "f" : 100000, "k" : 10, "2" : 10, }
ITEMS_13 = { "n" : 100, "p" : 100, "b" : 100000, "i" : 100000, "$" : 100000, "/" : 100000, "l" : 100, "3" : 100, }
ITEMS_14 = { "b" : 1000, ":" : 1000, "e" : 100000, "O" : 100000, "o" : 100000, "#" : 100000, "m" : 1000, "4" : 1000, }
ITEMS_15 = { "c" : 1, "j" : 1, "e" : 1, "H" : 100000, "n" : 100000, "h" : 1, "N" : 1, "5" : 1, }
ITEMS_16 = { "a" : 10, "M" : 10, "%" : 10, "'" : 100000, "l" : 100000, "?" : 10, "o" : 10, "6" : 10, }
ITEMS_17 = { "h" : 100, "z" : 100, "(" : 100, "?" : 100000, "k" : 100000, "|" : 100, "p" : 100, "7" : 100, }
ITEMS_18 = { "[" : 1000, "l" : 1000, "*" : 1000, "/" : 100000, "z" : 100000, "|" : 1000, "q" : 1000, "h" : 1000, }

# This is a more realistic example, taken from tree9.tar.gz
ITEMS_19 = { 'dir001/file001': 243,
             'dir001/file002': 268,
             'dir002/file001': 134,
             'dir002/file002': 74,
             'file001'       : 155,
             'file002'       : 242,
             'link001'       : 0,
             'link002'       : 0, }


#######################################################################
# Utility functions
#######################################################################

def buildItemDict(origDict):
   """
   Creates an item dictionary suitable for passing to a knapsack function.

   The knapsack functions take a dictionary, keyed on item, of (item, size)
   tuples.  This function converts a simple item/size dictionary to a
   knapsack dictionary.  It exists for convenience.

   @param origDict: Dictionary to convert
   @type origDict: Simple dictionary mapping item to size, like C{ITEMS_02}

   @return: Dictionary suitable for passing to a knapsack function.
   """
   itemDict = { }
   for key in origDict.keys():
      itemDict[key] = (key, origDict[key])
   return itemDict


#######################################################################
# Test Case Classes
#######################################################################

#####################
# TestKnapsack class
#####################

class TestKnapsack(unittest.TestCase):

   """Tests for the various knapsack functions."""

   ################
   # Setup methods
   ################

   def setUp(self):
      pass

   def tearDown(self):
      pass

   ################################
   # Tests for firstFit() function
   ################################

   def testFirstFit_001(self):
      """
      Test firstFit() behavior for an empty items dictionary, zero capacity.
      """
      items = buildItemDict(ITEMS_01)
      capacity = 0
      result = firstFit(items, capacity)
      self.failUnlessEqual(([], 0), result)

   def testFirstFit_002(self):
      """
      Test firstFit() behavior for an empty items dictionary, non-zero capacity.
""" items = buildItemDict(ITEMS_01) capacity = 10000 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_003(self): """ Test firstFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_004(self): """ Test firstFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_005(self): """ Test firstFit() behavior for items dictionary where only one item fits. """ items = buildItemDict(ITEMS_05) capacity = 1 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = firstFit(items, capacity) self.failUnless(result[1] <= 
capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testFirstFit_006(self): """ Test firstFit() behavior for items dictionary where only 25% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 2 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = firstFit(items, capacity) self.failUnless(result[1] <= 
capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testFirstFit_007(self): """ Test firstFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" 
% (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testFirstFit_008(self): """ Test firstFit() behavior for items dictionary where only 75% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 6 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, 
"%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testFirstFit_009(self): """ Test firstFit() behavior for items dictionary where all items individually exceed the capacity. """ items = buildItemDict(ITEMS_06) capacity = 9 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = firstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testFirstFit_010(self): """ Test firstFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testFirstFit_011(self): """ Test firstFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testFirstFit_012(self): """ Test firstFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testFirstFit_013(self): """ Test firstFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_014(self): """ Test firstFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_015(self): """ Test firstFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testFirstFit_016(self): """ Test firstFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = firstFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(206000, result[1]) def testFirstFit_017(self): """ Test firstFit() behavior for a more realistic set of items """ items = buildItemDict(ITEMS_19) capacity = 760 result = firstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) # Unfortunately, can't test any more than this, since dict keys come out in random order ############################### # Tests for bestFit() function ############################### def testBestFit_001(self): """ Test bestFit() behavior for an empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_002(self): """ Test bestFit() behavior for an empty items dictionary, non-zero capacity. """ items = buildItemDict(ITEMS_01) capacity = 10000 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_003(self): """ Test bestFit() behavior for an non-empty items dictionary, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_04) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_13) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_004(self): """ Test bestFit() behavior for non-empty items dictionary with zero-sized items, zero capacity. """ items = buildItemDict(ITEMS_03) capacity = 0 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_005(self): """ Test bestFit() behavior for items dictionary where only one item fits. 
""" items = buildItemDict(ITEMS_05) capacity = 1 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testBestFit_006(self): """ Test bestFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" 
% (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testBestFit_007(self): """ Test bestFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], 
capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testBestFit_008(self): """ Test bestFit() behavior for items dictionary where only 75% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 6 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) 
self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testBestFit_009(self): """ Test bestFit() behavior for items dictionary where all items individually exceed the capacity. 
""" items = buildItemDict(ITEMS_06) capacity = 9 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = bestFit(items, capacity) self.failUnlessEqual(([], 0), result) def testBestFit_010(self): """ Test bestFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testBestFit_011(self): """ Test bestFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testBestFit_012(self): """ Test bestFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testBestFit_013(self): """ Test bestFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testBestFit_014(self): """ Test bestFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testBestFit_015(self): """ Test bestFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testBestFit_016(self): """ Test bestFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = bestFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = bestFit(items, capacity) 
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(800000, result[1])

        items = buildItemDict(ITEMS_11)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(400004, result[1])

        items = buildItemDict(ITEMS_12)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(400040, result[1])

        items = buildItemDict(ITEMS_13)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(400400, result[1])

        items = buildItemDict(ITEMS_14)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(404000, result[1])

        items = buildItemDict(ITEMS_15)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(200006, result[1])

        items = buildItemDict(ITEMS_16)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(200060, result[1])

        items = buildItemDict(ITEMS_17)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(200600, result[1])

        items = buildItemDict(ITEMS_18)
        capacity = 1000000
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(206000, result[1])

    def testBestFit_017(self):
        """
        Test bestFit() behavior for a more realistic set of items.
        """
        items = buildItemDict(ITEMS_19)
        capacity = 760
        result = bestFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(5, len(result[0]))
        self.failUnlessEqual(753, result[1])
        self.failUnless('dir001/file001' in result[0])
        self.failUnless('dir001/file002' in result[0])
        self.failUnless('file002' in result[0])
        self.failUnless('link001' in result[0])
        self.failUnless('link002' in result[0])


    ################################
    # Tests for worstFit() function
    ################################

    def testWorstFit_001(self):
        """
        Test worstFit() behavior for an empty items dictionary, zero capacity.
        """
        items = buildItemDict(ITEMS_01)
        capacity = 0
        result = worstFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testWorstFit_002(self):
        """
        Test worstFit() behavior for an empty items dictionary, non-zero capacity.
        """
        items = buildItemDict(ITEMS_01)
        capacity = 10000
        result = worstFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testWorstFit_003(self):
        """
        Test worstFit() behavior for a non-empty items dictionary, zero capacity.
        """
        items = buildItemDict(ITEMS_03)
        capacity = 0
        result = worstFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

        items = buildItemDict(ITEMS_04)
        capacity = 0
        result = worstFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

        items = buildItemDict(ITEMS_13)
        capacity = 0
        result = worstFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testWorstFit_004(self):
        """
        Test worstFit() behavior for a non-empty items dictionary with
        zero-sized items, zero capacity.
        """
        items = buildItemDict(ITEMS_03)
        capacity = 0
        result = worstFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testWorstFit_005(self):
        """
        Test worstFit() behavior for items dictionary where only one item fits.
""" items = buildItemDict(ITEMS_05) capacity = 1 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testWorstFit_006(self): """ Test worstFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, 
"%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testWorstFit_007(self): """ Test worstFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" 
                        % (result[1], capacity))
        self.failUnlessEqual(4, len(result[0]))
        self.failUnlessEqual(4, result[1])

        items = buildItemDict(ITEMS_12)
        capacity = 45
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(4, len(result[0]))
        self.failUnlessEqual(40, result[1])

        items = buildItemDict(ITEMS_13)
        capacity = 450
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(4, len(result[0]))
        self.failUnlessEqual(400, result[1])

        items = buildItemDict(ITEMS_14)
        capacity = 4500
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(4, len(result[0]))
        self.failUnlessEqual(4000, result[1])

    def testWorstFit_008(self):
        """
        Test worstFit() behavior for items dictionary where only 75% of items fit.
        """
        items = buildItemDict(ITEMS_05)
        capacity = 6
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(6, result[1])

        items = buildItemDict(ITEMS_06)
        capacity = 65
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(60, result[1])

        items = buildItemDict(ITEMS_07)
        capacity = 650
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(600, result[1])

        items = buildItemDict(ITEMS_08)
        capacity = 6500
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(6000, result[1])

        items = buildItemDict(ITEMS_09)
        capacity = 65000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1],
                                              capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(60000, result[1])

        items = buildItemDict(ITEMS_10)
        capacity = 650000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(600000, result[1])

        items = buildItemDict(ITEMS_15)
        capacity = 7
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(6, result[1])

        items = buildItemDict(ITEMS_16)
        capacity = 65
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(60, result[1])

        items = buildItemDict(ITEMS_17)
        capacity = 650
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(600, result[1])

        items = buildItemDict(ITEMS_18)
        capacity = 6500
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(6000, result[1])

    def testWorstFit_009(self):
        """
        Test worstFit() behavior for items dictionary where all items
        individually exceed the capacity.
""" items = buildItemDict(ITEMS_06) capacity = 9 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = worstFit(items, capacity) self.failUnlessEqual(([], 0), result) def testWorstFit_010(self): """ Test worstFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testWorstFit_011(self): """ Test worstFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testWorstFit_012(self): """ Test worstFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testWorstFit_013(self): """ Test worstFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testWorstFit_014(self): """ Test worstFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testWorstFit_015(self): """ Test worstFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testWorstFit_016(self): """ Test worstFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = worstFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 result = worstFit(items, capacity) 
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(800000, result[1])

        items = buildItemDict(ITEMS_11)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(400004, result[1])

        items = buildItemDict(ITEMS_12)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(400040, result[1])

        items = buildItemDict(ITEMS_13)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(400400, result[1])

        items = buildItemDict(ITEMS_14)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(404000, result[1])

        items = buildItemDict(ITEMS_15)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(200006, result[1])

        items = buildItemDict(ITEMS_16)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(200060, result[1])

        items = buildItemDict(ITEMS_17)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(200600, result[1])

        items = buildItemDict(ITEMS_18)
        capacity = 1000000
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1],
                                              capacity))
        self.failUnlessEqual(8, len(result[0]))
        self.failUnlessEqual(206000, result[1])

    def testWorstFit_017(self):
        """
        Test worstFit() behavior for a more realistic set of items.
        """
        items = buildItemDict(ITEMS_19)
        capacity = 760
        result = worstFit(items, capacity)
        self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
        self.failUnlessEqual(6, len(result[0]))
        self.failUnlessEqual(605, result[1])
        self.failUnless('dir002/file001' in result[0])
        self.failUnless('dir002/file002' in result[0])
        self.failUnless('file001' in result[0])
        self.failUnless('file002' in result[0])
        self.failUnless('link001' in result[0])
        self.failUnless('link002' in result[0])


    ####################################
    # Tests for alternateFit() function
    ####################################

    def testAlternateFit_001(self):
        """
        Test alternateFit() behavior for an empty items dictionary, zero capacity.
        """
        items = buildItemDict(ITEMS_01)
        capacity = 0
        result = alternateFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testAlternateFit_002(self):
        """
        Test alternateFit() behavior for an empty items dictionary, non-zero capacity.
        """
        items = buildItemDict(ITEMS_01)
        capacity = 10000
        result = alternateFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testAlternateFit_003(self):
        """
        Test alternateFit() behavior for a non-empty items dictionary, zero capacity.
        """
        items = buildItemDict(ITEMS_03)
        capacity = 0
        result = alternateFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

        items = buildItemDict(ITEMS_04)
        capacity = 0
        result = alternateFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

        items = buildItemDict(ITEMS_13)
        capacity = 0
        result = alternateFit(items, capacity)
        self.failUnlessEqual(([], 0), result)

    def testAlternateFit_004(self):
        """
        Test alternateFit() behavior for a non-empty items dictionary with
        zero-sized items, zero capacity.
""" items = buildItemDict(ITEMS_03) capacity = 0 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_005(self): """ Test alternateFit() behavior for items dictionary where only one item fits. """ items = buildItemDict(ITEMS_05) capacity = 1 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1, result[1]) items = buildItemDict(ITEMS_06) capacity = 10 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10, result[1]) items = buildItemDict(ITEMS_07) capacity = 100 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(1000, result[1]) items = buildItemDict(ITEMS_09) capacity = 10000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(10000, result[1]) items = buildItemDict(ITEMS_10) capacity = 100000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(1, len(result[0])) self.failUnlessEqual(100000, result[1]) def testAlternateFit_006(self): """ Test alternateFit() behavior for items dictionary where only 25% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 2 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_06) capacity = 25 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_07) capacity = 250 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_08) capacity = 2500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) items = buildItemDict(ITEMS_09) capacity = 25000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20000, result[1]) items = buildItemDict(ITEMS_10) capacity = 250000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200000, result[1]) items = buildItemDict(ITEMS_11) capacity = 2 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2, result[1]) items = buildItemDict(ITEMS_12) capacity = 25 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(20, result[1]) items = buildItemDict(ITEMS_13) capacity = 250 result = alternateFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(200, result[1]) items = buildItemDict(ITEMS_14) capacity = 2500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(2, len(result[0])) self.failUnlessEqual(2000, result[1]) def testAlternateFit_007(self): """ Test alternateFit() behavior for items dictionary where only 50% of items fit. """ items = buildItemDict(ITEMS_05) capacity = 4 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_06) capacity = 45 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_07) capacity = 450 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_08) capacity = 4500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) items = buildItemDict(ITEMS_09) capacity = 45000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40000, result[1]) items = buildItemDict(ITEMS_10) capacity = 450000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400000, result[1]) items = buildItemDict(ITEMS_11) capacity = 4 result = 
alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 45 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 450 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 4500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testAlternateFit_008(self): """ Test alternateFit() behavior for items dictionary where only 75% of items fit. 
""" items = buildItemDict(ITEMS_05) capacity = 6 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_06) capacity = 65 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_07) capacity = 650 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_08) capacity = 6500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) items = buildItemDict(ITEMS_09) capacity = 65000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60000, result[1]) items = buildItemDict(ITEMS_10) capacity = 650000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600000, result[1]) items = buildItemDict(ITEMS_15) capacity = 7 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6, result[1]) items = buildItemDict(ITEMS_16) capacity = 65 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(60, result[1]) items = buildItemDict(ITEMS_17) capacity = 650 result = alternateFit(items, capacity) 
self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(600, result[1]) items = buildItemDict(ITEMS_18) capacity = 6500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(6, len(result[0])) self.failUnlessEqual(6000, result[1]) def testAlternateFit_009(self): """ Test alternateFit() behavior for items dictionary where all items individually exceed the capacity. """ items = buildItemDict(ITEMS_06) capacity = 9 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_07) capacity = 99 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_08) capacity = 999 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_09) capacity = 9999 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) items = buildItemDict(ITEMS_10) capacity = 99999 result = alternateFit(items, capacity) self.failUnlessEqual(([], 0), result) def testAlternateFit_010(self): """ Test alternateFit() behavior for items dictionary where first half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 200 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testAlternateFit_011(self): """ Test alternateFit() behavior for items dictionary where middle half of items individually exceed capacity and remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 5 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4, result[1]) items = buildItemDict(ITEMS_12) capacity = 50 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(40, result[1]) items = buildItemDict(ITEMS_13) capacity = 500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(400, result[1]) items = buildItemDict(ITEMS_14) capacity = 5000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(4000, result[1]) def testAlternateFit_012(self): """ Test alternateFit() behavior for items dictionary where second half of items individually exceed capacity and remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 200 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(4, len(result[0])) self.failUnlessEqual(111, result[1]) def testAlternateFit_013(self): """ Test alternateFit() behavior for items dictionary where first half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_04) capacity = 50 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_014(self): """ Test alternateFit() behavior for items dictionary where middle half of items individually exceed capacity and only some of remainder fit. 
""" items = buildItemDict(ITEMS_11) capacity = 3 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_12) capacity = 35 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_13) capacity = 350 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) items = buildItemDict(ITEMS_14) capacity = 3500 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_015(self): """ Test alternateFit() behavior for items dictionary where second half of items individually exceed capacity and only some of remainder fit. """ items = buildItemDict(ITEMS_03) capacity = 50 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnless(len(result[0]) < 4, "%s < 4" % len(result[0])) def testAlternateFit_016(self): """ Test alternateFit() behavior for items dictionary where all items fit. 
""" items = buildItemDict(ITEMS_02) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(0, result[1]) items = buildItemDict(ITEMS_03) capacity = 2000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_04) capacity = 2000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(1111111, result[1]) items = buildItemDict(ITEMS_05) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8, result[1]) items = buildItemDict(ITEMS_06) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80, result[1]) items = buildItemDict(ITEMS_07) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800, result[1]) items = buildItemDict(ITEMS_08) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(8000, result[1]) items = buildItemDict(ITEMS_09) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(80000, result[1]) items = buildItemDict(ITEMS_10) capacity = 1000000 
result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(800000, result[1]) items = buildItemDict(ITEMS_11) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400004, result[1]) items = buildItemDict(ITEMS_12) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400040, result[1]) items = buildItemDict(ITEMS_13) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(400400, result[1]) items = buildItemDict(ITEMS_14) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(404000, result[1]) items = buildItemDict(ITEMS_15) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200006, result[1]) items = buildItemDict(ITEMS_16) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200060, result[1]) items = buildItemDict(ITEMS_17) capacity = 1000000 result = alternateFit(items, capacity) self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity)) self.failUnlessEqual(8, len(result[0])) self.failUnlessEqual(200600, result[1]) items = buildItemDict(ITEMS_18) capacity = 1000000 result = alternateFit(items, capacity) 
      self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.failUnlessEqual(8, len(result[0]))
      self.failUnlessEqual(206000, result[1])

   def testAlternateFit_017(self):
      """
      Test alternateFit() behavior for a more realistic set of items
      """
      items = buildItemDict(ITEMS_19)
      capacity = 760
      result = alternateFit(items, capacity)
      self.failUnless(result[1] <= capacity, "%s <= %s" % (result[1], capacity))
      self.failUnlessEqual(6, len(result[0]))
      self.failUnlessEqual(719, result[1])
      self.failUnless('link001' in result[0])
      self.failUnless('dir001/file002' in result[0])
      self.failUnless('link002' in result[0])
      self.failUnless('dir001/file001' in result[0])
      self.failUnless('dir002/file002' in result[0])
      self.failUnless('dir002/file001' in result[0])


#######################################################################
# Suite definition
#######################################################################

# pylint: disable=C0330
def suite():
   """Returns a suite containing all the test cases in this module."""
   return unittest.TestSuite((
      unittest.makeSuite(TestKnapsack, 'test'),
   ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.26.5/testcase/encrypttests.py

#!/usr/bin/env python
# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Tests encrypt extension functionality.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Unit tests for CedarBackup2/extend/encrypt.py.

Code Coverage
=============

   This module contains individual tests for the public classes implemented
   in extend/encrypt.py.  There are also tests for some of the private
   functions.

Naming Conventions
==================

   I prefer to avoid large unit tests which validate more than one piece of
   functionality, and I prefer to avoid using overly descriptive (read: long)
   test names, as well.  Instead, I use lots of very small tests that each
   validate one specific thing.  These small tests are then named with an
   index number, yielding something like C{testAddDir_001} or
   C{testValidate_010}.  Each method has a docstring describing what it's
   supposed to accomplish.  I feel that this makes it easier to judge how
   important a given failure is, and also makes it somewhat easier to
   diagnose and fix individual problems.

Testing XML Extraction
======================

   It's difficult to validate that generated XML is exactly "right",
   especially when dealing with pretty-printed XML.
   We can't just provide a constant string and say "the result must match
   this".  Instead, what we do is extract a node, build some XML from it, and
   then feed that XML back into another object's constructor.  If that parse
   process succeeds and the old object is equal to the new object, we assume
   that the extract was successful.

   It would arguably be better if we could do a completely independent check -
   but implementing that check would be equivalent to re-implementing all of
   the existing functionality that we're validating here!  After all, the
   most important thing is that data can move seamlessly from object to XML
   document and back to object.

Full vs. Reduced Tests
======================

   Some Cedar Backup regression tests require a specialized environment in
   order to run successfully.  This environment won't necessarily be
   available on every build system out there (for instance, on a Debian
   autobuilder).  Because of this, the default behavior is to run a "reduced
   feature set" test suite that has no surprising system, kernel or network
   requirements.  If you want to run all of the tests, set ENCRYPTTESTS_FULL
   to "Y" in the environment.

   In this module, the primary dependency is that for some tests, GPG must
   have access to the public key EFD75934.  There is also an assumption that
   GPG does I{not} have access to a public key for anyone named "Bogus J.
   User" (so we can test failure scenarios).

@author Kenneth J. Pronovici
"""

########################################################################
# Import modules and do runtime validations
########################################################################

# System modules
import unittest
import os
import tempfile

# Cedar Backup modules
from CedarBackup2.filesystem import FilesystemList
from CedarBackup2.testutil import findResources, buildPath, removedir, extractTar, failUnlessAssignRaises, platformSupportsLinks
from CedarBackup2.xmlutil import createOutputDom, serializeDom
from CedarBackup2.extend.encrypt import LocalConfig, EncryptConfig
from CedarBackup2.extend.encrypt import _encryptFileWithGpg, _encryptFile, _encryptDailyDir


#######################################################################
# Module-wide configuration and constants
#######################################################################

DATA_DIRS = [ "./data", "./testcase/data", ]
RESOURCES = [ "encrypt.conf.1", "encrypt.conf.2", "tree1.tar.gz", "tree2.tar.gz", "tree8.tar.gz",
              "tree15.tar.gz", "tree16.tar.gz", "tree17.tar.gz", "tree18.tar.gz", "tree19.tar.gz",
              "tree20.tar.gz", ]

VALID_GPG_RECIPIENT = "EFD75934"
INVALID_GPG_RECIPIENT = "Bogus J. User"
INVALID_PATH = "bogus"   # This path name should never exist


#######################################################################
# Utility functions
#######################################################################

def runAllTests():
   """Returns true/false depending on whether the full test suite should be run."""
   if "ENCRYPTTESTS_FULL" in os.environ:
      return os.environ["ENCRYPTTESTS_FULL"] == "Y"
   else:
      return False


#######################################################################
# Test Case Classes
#######################################################################

##########################
# TestEncryptConfig class
##########################

class TestEncryptConfig(unittest.TestCase):

   """Tests for the EncryptConfig class."""

   ##################
   # Utility methods
   ##################

   def failUnlessAssignRaises(self, exception, obj, prop, value):
      """Equivalent of L{failUnlessRaises}, but used for property assignments instead."""
      failUnlessAssignRaises(self, exception, obj, prop, value)

   ############################
   # Test __repr__ and __str__
   ############################

   def testStringFuncs_001(self):
      """
      Just make sure that the string functions don't have errors (i.e. bad variable names).
      """
      obj = EncryptConfig()
      obj.__repr__()
      obj.__str__()

   ##################################
   # Test constructor and attributes
   ##################################

   def testConstructor_001(self):
      """
      Test constructor with no values filled in.
      """
      encrypt = EncryptConfig()
      self.failUnlessEqual(None, encrypt.encryptMode)
      self.failUnlessEqual(None, encrypt.encryptTarget)

   def testConstructor_002(self):
      """
      Test constructor with all values filled in, with valid values.
      """
      encrypt = EncryptConfig("gpg", "Backup User")
      self.failUnlessEqual("gpg", encrypt.encryptMode)
      self.failUnlessEqual("Backup User", encrypt.encryptTarget)

   def testConstructor_003(self):
      """
      Test assignment of encryptMode attribute, None value.
""" encrypt = EncryptConfig(encryptMode="gpg") self.failUnlessEqual("gpg", encrypt.encryptMode) encrypt.encryptMode = None self.failUnlessEqual(None, encrypt.encryptMode) def testConstructor_004(self): """ Test assignment of encryptMode attribute, valid value. """ encrypt = EncryptConfig() self.failUnlessEqual(None, encrypt.encryptMode) encrypt.encryptMode = "gpg" self.failUnlessEqual("gpg", encrypt.encryptMode) def testConstructor_005(self): """ Test assignment of encryptMode attribute, invalid value (empty). """ encrypt = EncryptConfig() self.failUnlessEqual(None, encrypt.encryptMode) self.failUnlessAssignRaises(ValueError, encrypt, "encryptMode", "") self.failUnlessEqual(None, encrypt.encryptMode) def testConstructor_006(self): """ Test assignment of encryptTarget attribute, None value. """ encrypt = EncryptConfig(encryptTarget="Backup User") self.failUnlessEqual("Backup User", encrypt.encryptTarget) encrypt.encryptTarget = None self.failUnlessEqual(None, encrypt.encryptTarget) def testConstructor_007(self): """ Test assignment of encryptTarget attribute, valid value. """ encrypt = EncryptConfig() self.failUnlessEqual(None, encrypt.encryptTarget) encrypt.encryptTarget = "Backup User" self.failUnlessEqual("Backup User", encrypt.encryptTarget) def testConstructor_008(self): """ Test assignment of encryptTarget attribute, invalid value (empty). """ encrypt = EncryptConfig() self.failUnlessEqual(None, encrypt.encryptTarget) self.failUnlessAssignRaises(ValueError, encrypt, "encryptTarget", "") self.failUnlessEqual(None, encrypt.encryptTarget) ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. 
""" encrypt1 = EncryptConfig() encrypt2 = EncryptConfig() self.failUnlessEqual(encrypt1, encrypt2) self.failUnless(encrypt1 == encrypt2) self.failUnless(not encrypt1 < encrypt2) self.failUnless(encrypt1 <= encrypt2) self.failUnless(not encrypt1 > encrypt2) self.failUnless(encrypt1 >= encrypt2) self.failUnless(not encrypt1 != encrypt2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. """ encrypt1 = EncryptConfig("gpg", "Backup User") encrypt2 = EncryptConfig("gpg", "Backup User") self.failUnlessEqual(encrypt1, encrypt2) self.failUnless(encrypt1 == encrypt2) self.failUnless(not encrypt1 < encrypt2) self.failUnless(encrypt1 <= encrypt2) self.failUnless(not encrypt1 > encrypt2) self.failUnless(encrypt1 >= encrypt2) self.failUnless(not encrypt1 != encrypt2) def testComparison_003(self): """ Test comparison of two differing objects, encryptMode differs (one None). """ encrypt1 = EncryptConfig() encrypt2 = EncryptConfig(encryptMode="gpg") self.failIfEqual(encrypt1, encrypt2) self.failUnless(not encrypt1 == encrypt2) self.failUnless(encrypt1 < encrypt2) self.failUnless(encrypt1 <= encrypt2) self.failUnless(not encrypt1 > encrypt2) self.failUnless(not encrypt1 >= encrypt2) self.failUnless(encrypt1 != encrypt2) # Note: no test to check when encrypt mode differs, since only one value is allowed def testComparison_004(self): """ Test comparison of two differing objects, encryptTarget differs (one None). """ encrypt1 = EncryptConfig() encrypt2 = EncryptConfig(encryptTarget="Backup User") self.failIfEqual(encrypt1, encrypt2) self.failUnless(not encrypt1 == encrypt2) self.failUnless(encrypt1 < encrypt2) self.failUnless(encrypt1 <= encrypt2) self.failUnless(not encrypt1 > encrypt2) self.failUnless(not encrypt1 >= encrypt2) self.failUnless(encrypt1 != encrypt2) def testComparison_005(self): """ Test comparison of two differing objects, encryptTarget differs. 
""" encrypt1 = EncryptConfig("gpg", "Another User") encrypt2 = EncryptConfig("gpg", "Backup User") self.failIfEqual(encrypt1, encrypt2) self.failUnless(not encrypt1 == encrypt2) self.failUnless(encrypt1 < encrypt2) self.failUnless(encrypt1 <= encrypt2) self.failUnless(not encrypt1 > encrypt2) self.failUnless(not encrypt1 >= encrypt2) self.failUnless(encrypt1 != encrypt2) ######################## # TestLocalConfig class ######################## class TestLocalConfig(unittest.TestCase): """Tests for the LocalConfig class.""" ################ # Setup methods ################ def setUp(self): try: self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): pass ################## # Utility methods ################## def failUnlessAssignRaises(self, exception, obj, prop, value): """Equivalent of L{failUnlessRaises}, but used for property assignments instead.""" failUnlessAssignRaises(self, exception, obj, prop, value) def validateAddConfig(self, origConfig): """ Validates that document dumped from C{LocalConfig.addConfig} results in identical object. We dump a document containing just the encrypt configuration, and then make sure that if we push that document back into the C{LocalConfig} object, that the resulting object matches the original. The C{self.failUnlessEqual} method is used for the validation, so if the method call returns normally, everything is OK. @param origConfig: Original configuration. """ (xmlDom, parentNode) = createOutputDom() origConfig.addConfig(xmlDom, parentNode) xmlData = serializeDom(xmlDom) newConfig = LocalConfig(xmlData=xmlData, validate=False) self.failUnlessEqual(origConfig, newConfig) ############################ # Test __repr__ and __str__ ############################ def testStringFuncs_001(self): """ Just make sure that the string functions don't have errors (i.e. bad variable names). 
""" obj = LocalConfig() obj.__repr__() obj.__str__() ##################################################### # Test basic constructor and attribute functionality ##################################################### def testConstructor_001(self): """ Test empty constructor, validate=False. """ config = LocalConfig(validate=False) self.failUnlessEqual(None, config.encrypt) def testConstructor_002(self): """ Test empty constructor, validate=True. """ config = LocalConfig(validate=True) self.failUnlessEqual(None, config.encrypt) def testConstructor_003(self): """ Test with empty config document as both data and file, validate=False. """ path = self.resources["encrypt.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, xmlPath=path, validate=False) def testConstructor_004(self): """ Test assignment of encrypt attribute, None value. """ config = LocalConfig() config.encrypt = None self.failUnlessEqual(None, config.encrypt) def testConstructor_005(self): """ Test assignment of encrypt attribute, valid value. """ config = LocalConfig() config.encrypt = EncryptConfig() self.failUnlessEqual(EncryptConfig(), config.encrypt) def testConstructor_006(self): """ Test assignment of encrypt attribute, invalid value (not EncryptConfig). """ config = LocalConfig() self.failUnlessAssignRaises(ValueError, config, "encrypt", "STRING!") ############################ # Test comparison operators ############################ def testComparison_001(self): """ Test comparison of two identical objects, all attributes None. """ config1 = LocalConfig() config2 = LocalConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_002(self): """ Test comparison of two identical objects, all attributes non-None. 
""" config1 = LocalConfig() config1.encrypt = EncryptConfig() config2 = LocalConfig() config2.encrypt = EncryptConfig() self.failUnlessEqual(config1, config2) self.failUnless(config1 == config2) self.failUnless(not config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(config1 >= config2) self.failUnless(not config1 != config2) def testComparison_003(self): """ Test comparison of two differing objects, encrypt differs (one None). """ config1 = LocalConfig() config2 = LocalConfig() config2.encrypt = EncryptConfig() self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) def testComparison_004(self): """ Test comparison of two differing objects, encrypt differs. """ config1 = LocalConfig() config1.encrypt = EncryptConfig(encryptTarget="Another User") config2 = LocalConfig() config2.encrypt = EncryptConfig(encryptTarget="Backup User") self.failIfEqual(config1, config2) self.failUnless(not config1 == config2) self.failUnless(config1 < config2) self.failUnless(config1 <= config2) self.failUnless(not config1 > config2) self.failUnless(not config1 >= config2) self.failUnless(config1 != config2) ###################### # Test validate logic ###################### def testValidate_001(self): """ Test validate on a None encrypt section. """ config = LocalConfig() config.encrypt = None self.failUnlessRaises(ValueError, config.validate) def testValidate_002(self): """ Test validate on an empty encrypt section. """ config = LocalConfig() config.encrypt = EncryptConfig() self.failUnlessRaises(ValueError, config.validate) def testValidate_003(self): """ Test validate on a non-empty encrypt section with no values filled in. 
""" config = LocalConfig() config.encrypt = EncryptConfig(None, None) self.failUnlessRaises(ValueError, config.validate) def testValidate_004(self): """ Test validate on a non-empty encrypt section with only one value filled in. """ config = LocalConfig() config.encrypt = EncryptConfig("gpg", None) self.failUnlessRaises(ValueError, config.validate) config.encrypt = EncryptConfig(None, "Backup User") self.failUnlessRaises(ValueError, config.validate) def testValidate_005(self): """ Test validate on a non-empty encrypt section with valid values filled in. """ config = LocalConfig() config.encrypt = EncryptConfig("gpg", "Backup User") config.validate() ############################ # Test parsing of documents ############################ def testParse_001(self): """ Parse empty config document. """ path = self.resources["encrypt.conf.1"] contents = open(path).read() self.failUnlessRaises(ValueError, LocalConfig, xmlPath=path, validate=True) self.failUnlessRaises(ValueError, LocalConfig, xmlData=contents, validate=True) config = LocalConfig(xmlPath=path, validate=False) self.failUnlessEqual(None, config.encrypt) config = LocalConfig(xmlData=contents, validate=False) self.failUnlessEqual(None, config.encrypt) def testParse_002(self): """ Parse config document with filled-in values. """ path = self.resources["encrypt.conf.2"] contents = open(path).read() config = LocalConfig(xmlPath=path, validate=False) self.failIfEqual(None, config.encrypt) self.failUnlessEqual("gpg", config.encrypt.encryptMode) self.failUnlessEqual("Backup User", config.encrypt.encryptTarget) config = LocalConfig(xmlData=contents, validate=False) self.failIfEqual(None, config.encrypt) self.failUnlessEqual("gpg", config.encrypt.encryptMode) self.failUnlessEqual("Backup User", config.encrypt.encryptTarget) ################### # Test addConfig() ################### def testAddConfig_001(self): """ Test with empty config document. 
""" encrypt = EncryptConfig() config = LocalConfig() config.encrypt = encrypt self.validateAddConfig(config) def testAddConfig_002(self): """ Test with values set. """ encrypt = EncryptConfig(encryptMode="gpg", encryptTarget="Backup User") config = LocalConfig() config.encrypt = encrypt self.validateAddConfig(config) ###################### # TestFunctions class ###################### class TestFunctions(unittest.TestCase): """Tests for the functions in encrypt.py.""" ################ # Setup methods ################ def setUp(self): try: self.tmpdir = tempfile.mkdtemp() self.resources = findResources(RESOURCES, DATA_DIRS) except Exception, e: self.fail(e) def tearDown(self): try: removedir(self.tmpdir) except: pass ################## # Utility methods ################## def extractTar(self, tarname): """Extracts a tarfile with a particular name.""" extractTar(self.tmpdir, self.resources['%s.tar.gz' % tarname]) def buildPath(self, components): """Builds a complete search path from a list of components.""" components.insert(0, self.tmpdir) return buildPath(components) ############################# # Test _encryptFileWithGpg() ############################# def testEncryptFileWithGpg_001(self): """ Test for a non-existent file in a non-existent directory. """ sourceFile = self.buildPath([INVALID_PATH, INVALID_PATH]) self.failUnlessRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT) def testEncryptFileWithGpg_002(self): """ Test for a non-existent file in an existing directory. """ self.extractTar("tree8") sourceFile = self.buildPath(["tree8", "dir001", INVALID_PATH, ]) self.failUnlessRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT) def testEncryptFileWithGpg_003(self): """ Test for an unknown recipient. 
""" self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.failUnlessRaises(IOError, _encryptFileWithGpg, sourceFile, INVALID_GPG_RECIPIENT) self.failIf(os.path.exists(expectedFile)) self.failUnless(os.path.exists(sourceFile)) def testEncryptFileWithGpg_004(self): """ Test for a valid recipient. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) actualFile = _encryptFileWithGpg(sourceFile, VALID_GPG_RECIPIENT) self.failUnlessEqual(actualFile, expectedFile) self.failUnless(os.path.exists(sourceFile)) self.failUnless(os.path.exists(actualFile)) ###################### # Test _encryptFile() ###################### def testEncryptFile_001(self): """ Test for a mode other than "gpg". """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.failUnlessRaises(ValueError, _encryptFile, sourceFile, "pgp", INVALID_GPG_RECIPIENT, None, None, removeSource=False) self.failUnless(os.path.exists(sourceFile)) self.failIf(os.path.exists(expectedFile)) def testEncryptFile_002(self): """ Test for a source path that does not exist. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", INVALID_PATH ]) expectedFile = self.buildPath(["tree1", "%s.gpg" % INVALID_PATH ]) self.failUnlessRaises(ValueError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=False) self.failIf(os.path.exists(sourceFile)) self.failIf(os.path.exists(expectedFile)) def testEncryptFile_003(self): """ Test "gpg" mode with a valid source path and invalid recipient, removeSource=False. 
""" self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.failUnlessRaises(IOError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=False) self.failUnless(os.path.exists(sourceFile)) self.failIf(os.path.exists(expectedFile)) def testEncryptFile_004(self): """ Test "gpg" mode with a valid source path and invalid recipient, removeSource=True. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) self.failUnlessRaises(IOError, _encryptFile, sourceFile, "gpg", INVALID_GPG_RECIPIENT, None, None, removeSource=True) self.failUnless(os.path.exists(sourceFile)) self.failIf(os.path.exists(expectedFile)) def testEncryptFile_005(self): """ Test "gpg" mode with a valid source path and recipient, removeSource=False. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) actualFile = _encryptFile(sourceFile, "gpg", VALID_GPG_RECIPIENT, None, None, removeSource=False) self.failUnlessEqual(actualFile, expectedFile) self.failUnless(os.path.exists(sourceFile)) self.failUnless(os.path.exists(actualFile)) def testEncryptFile_006(self): """ Test "gpg" mode with a valid source path and recipient, removeSource=True. """ self.extractTar("tree1") sourceFile = self.buildPath(["tree1", "file001" ]) expectedFile = self.buildPath(["tree1", "file001.gpg" ]) actualFile = _encryptFile(sourceFile, "gpg", VALID_GPG_RECIPIENT, None, None, removeSource=True) self.failUnlessEqual(actualFile, expectedFile) self.failIf(os.path.exists(sourceFile)) self.failUnless(os.path.exists(actualFile)) ########################## # Test _encryptDailyDir() ########################## def testEncryptDailyDir_001(self): """ Test with a nonexistent daily staging directory. 
""" self.extractTar("tree1") dailyDir = self.buildPath(["tree1", "dir001" ]) self.failUnlessRaises(ValueError, _encryptDailyDir, dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) def testEncryptDailyDir_002(self): """ Test with a valid staging directory containing only links. """ if platformSupportsLinks(): self.extractTar("tree15") dailyDir = self.buildPath(["tree15", "dir001" ]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath(["tree15", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree15", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree15", "dir001", "link002", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(3, len(fsList)) self.failUnless(self.buildPath(["tree15", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree15", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree15", "dir001", "link002", ]) in fsList) def testEncryptDailyDir_003(self): """ Test with a valid staging directory containing only directories. 
""" self.extractTar("tree2") dailyDir = self.buildPath(["tree2"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath(["tree2", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir010", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(11, len(fsList)) self.failUnless(self.buildPath(["tree2", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir006", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir007", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir008", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir009", ]) in fsList) self.failUnless(self.buildPath(["tree2", "dir010", ]) in fsList) def testEncryptDailyDir_004(self): """ Test with a valid staging directory containing only files. 
""" self.extractTar("tree1") dailyDir = self.buildPath(["tree1"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree1" ]) in fsList) self.failUnless(self.buildPath(["tree1", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file007", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) self.failUnlessEqual(8, len(fsList)) self.failUnless(self.buildPath(["tree1" ]) in fsList) self.failUnless(self.buildPath(["tree1", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree1", "file007.gpg", ]) in fsList) def testEncryptDailyDir_005(self): """ Test with a valid staging directory containing files, directories and links, including various files that match the general Cedar Backup indicator file pattern ("cback."). 
""" self.extractTar("tree16") dailyDir = self.buildPath(["tree16"]) fsList = FilesystemList() fsList.addDirContents(dailyDir) if platformSupportsLinks(): self.failUnlessEqual(122, len(fsList)) self.failUnless(self.buildPath(["tree16", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", 
"file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "link005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "link002", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "link002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file004", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "link001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.store", ]) in fsList) else: self.failUnlessEqual(102, len(fsList)) self.failUnless(self.buildPath(["tree16", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file004", 
]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", 
"dir001", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file005", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file005", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file004", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file005", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file006", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file007", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file008", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.collect", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.stage", ]) in fsList) self.failUnless(self.buildPath(["tree16", "cback.store", ]) in fsList) _encryptDailyDir(dailyDir, "gpg", VALID_GPG_RECIPIENT, None, None) fsList = FilesystemList() fsList.addDirContents(dailyDir) # since all links are to files, and the files all changed names, the links are invalid and disappear self.failUnlessEqual(102, len(fsList)) self.failUnless(self.buildPath(["tree16", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file002.gpg", ]) in fsList) 
self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file005.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file006.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file007.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir001", "file008.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir002", "file004.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir001", "dir003", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "cback.encrypt", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file001.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file002.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir001", "file003.gpg", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", ]) in fsList) self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "cback.encrypt", 
      ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir002", "dir002", "file005.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file005.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file006.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file007.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "file008.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir001", "cback.", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir002", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir002", "cback.encrypt", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir003", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.encrypt", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir003", "dir003", "cback.store", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file005.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file006.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file007.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "file008.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir001", "cback.collect", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir002", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir002", "cback.encrypt", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir003", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir003", "cback.encrypt", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "cback.encrypt", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file005.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file006.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file007.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "file008.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir004", "cback.store", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file001.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file002.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file003.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file004.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file005.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file006.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file007.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "dir004", "dir005", "file008.gpg", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "cback.collect", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "cback.stage", ]) in fsList)
      self.failUnless(self.buildPath(["tree16", "cback.store", ]) in fsList)


#######################################################################
# Suite definition
#######################################################################

# pylint: disable=C0330

def suite():
   """Returns a suite containing all the test cases in this module."""
   if runAllTests():
      return unittest.TestSuite((
         unittest.makeSuite(TestEncryptConfig, 'test'),
         unittest.makeSuite(TestLocalConfig, 'test'),
         unittest.makeSuite(TestFunctions, 'test'),
      ))
   else:
      return unittest.TestSuite((
         unittest.makeSuite(TestEncryptConfig, 'test'),
         unittest.makeSuite(TestLocalConfig, 'test'),
      ))


########################################################################
# Module entry point
########################################################################

# When this module is executed from the command-line, run its tests
if __name__ == '__main__':
   unittest.main()

CedarBackup2-2.26.5/doc/docbook.txt

The Cedar Backup Software Manual, found in manual/src, is written in DocBook
Lite.  All of the docbook functionality used to build the actual documentation
that I distribute is based around a Debian system (or a system with equivalent
functionality) as the development system.
I built the entire docbook infrastructure based on the Subversion book:

   http://svnbook.red-bean.com
   http://svn.collab.net/repos/svn/branches/1.0.x/doc/book/

Some other links that might be useful to you:

   http://www.sagehill.net/docbookxsl/index.html
   http://tldp.org/HOWTO/DocBook-Demystification-HOWTO/index.html
   http://www.vim.org/scripts/script.php?script_id=301

This is the official Docbook XSL documentation:

   http://wiki.docbook.org/topic/
   http://wiki.docbook.org/topic/DocBookDocumentation
   http://wiki.docbook.org/topic/DocBookXslStylesheetDocs
   http://docbook.sourceforge.net/release/xsl/current/doc/fo/

This official Docbook documentation is where you want to look for stylesheet
options, etc.  For instance, these are the docs I used when I wanted to figure
out how to put items on new pages in PDF output.

The following items need to be installed to build the user manual:

   apt-get install docbook-xsl
   apt-get install xsltproc
   apt-get install fop
   apt-get install sp   # for nsgmls

Then, to make images work from within PDF, you need to get the Jimi image
library:

   get jimi1_0.tar.Z from http://java.sun.com/products/jimi/
   tar -Zxvf jimi1_0.tar.Z
   cp Jimi/examples/AppletDemo/JimiProClasses.jar /usr/share/java/jimi-1.0.jar

You also need a working XML catalog on your system, because the various DTDs
and stylesheets depend on that.  There's no point in hardcoding paths and
keeping local copies of things if the catalog can do that for you.  However,
if you don't have a catalog, you can probably force things to work.  See notes
at the top of the various files in util/docbook.

The util/validate script is a thin wrapper around the nsgmls validating
parser.  I took the syntax directly from the Subversion book documentation.

   http://svn.collab.net/repos/svn/branches/1.0.x/doc/book/README

You should run 'make validate' against the manual before checking it in.
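Before running the full nsgmls validation described above, a quick well-formedness check of the manual's XML can be done with Python's standard library.  This is a sketch, not part of the Cedar Backup tooling, and it is a much weaker check than DTD validation:

```python
import xml.etree.ElementTree as ET

def isWellFormed(xmlText):
   """Return True if xmlText parses as well-formed XML (no DTD validation)."""
   try:
      ET.fromstring(xmlText)
      return True
   except ET.ParseError:
      return False

print(isWellFormed("<book><chapter/></book>"))   # True
print(isWellFormed("<book><chapter></book>"))    # False
```

Well-formedness catches mismatched tags early, but 'make validate' is still needed to catch DocBook structural errors.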
CedarBackup2-2.26.5/doc/osx/stop-automount

#!/bin/sh
# Script to stop the Mac OS X auto mount daemon so we can use cdrtools.
# Swiped from online documentation related to X-CD-Roast and reformatted.
# Note: this daemon was apparently called autodiskmount in OS X 10.3 and prior.
sudo kill -STOP `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' `
echo "Auto mount process ID `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' ` stopped."

CedarBackup2-2.26.5/doc/osx/notes.txt

Mac OS X notes

Tested with my new (August 2005) iBook G4 running 10.4 (Tiger).

1. collect works fine
2. stage works fine
3. purge works fine
4. store has some issues - the code all works, but you end up really having
   to fight the OS so it gets allowed to work
   a. the drive identifies itself as having a tray, but doesn't
   b. the Fink eject program doesn't really work (it hangs)
   c. OS X insists on having control of every disc via the Finder

Users will have to put in a dummy override for eject, maybe to /bin/echo or
something, for the write to succeed.  Either that, or I'll have to put in
some option to override the eject identification for the drive (ugh!, though
maybe eventually other people will need this, too?)

Users will need to run a script to stop/start the automount daemon before
running cback.  However, beware!  If you stop this daemon, the soft eject
button apparently stops working!  It gets worse - you can't mount the disk
to do a consistency check (even using hdiutil) when the automount daemon is
stopped.  The utility just doesn't respond.
I think that basically, we're going to have to recommend against using the
store command on Mac OS X unless someone with more expertise can help out
with this.  The OS just gets too much in the way.  At the least, we need to
document this stuff and put in some code warnings.

Might want to reference XCDRoast stuff:

   http://www.xcdroast.org/xcdr098/xcdrosX.html

The file README.macosX from the cdrtools distribution also contains some
useful information that we might be able to incorporate into the manual at
some point.

CedarBackup2-2.26.5/doc/osx/start-automount

#!/bin/sh
# Script to restart the Mac OS X auto mount daemon once we're done using cdrtools.
# Swiped from online documentation related to X-CD-Roast and reformatted.
# Note: this daemon was apparently called autodiskmount in OS X 10.3 and prior.
sudo kill -CONT `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' `
echo "Auto mount process ID `ps -ax | grep diskarbitrationd | grep -v grep | sed -e 's/\([^\?]*\).*/\1/' ` restarted."

CedarBackup2-2.26.5/doc/cback.conf.sample

Kenneth J. Pronovici 1.3 Sample sysinfo CedarBackup2.extend.sysinfo
executeAction 95 mysql CedarBackup2.extend.mysql executeAction 96 postgresql
CedarBackup2.extend.postgresql executeAction 97 subversion
CedarBackup2.extend.subversion executeAction 98 mbox CedarBackup2.extend.mbox
executeAction 99 encrypt CedarBackup2.extend.encrypt executeAction 299
tuesday /opt/backup/tmp backup group /usr/bin/scp -B cdrecord
/opt/local/bin/cdrecord mkisofs /opt/local/bin/mkisofs collect echo "I AM A
PRE-ACTION HOOK RELATED TO COLLECT" collect echo "I AM A POST-ACTION HOOK
RELATED TO COLLECT" /opt/backup/collect daily targz .cbignore /etc incr
/home/root/.profile weekly /opt/backup/stage debian local /opt/backup/collect
/opt/backup/stage cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y N weekly 5.1
/opt/backup/stage 7 /opt/backup/collect 0 mlogin bzip2 Y plogin bzip2 N db1
db2 incr bzip2 FSFS /opt/svn/repo1 BDB /opt/svn/repo2 incr bzip2
/home/user1/mail/greylist daily /home/user2/mail gzip gpg Backup User

CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.Diagnostics-class.html

CedarBackup2.util.Diagnostics
    Package CedarBackup2 :: Module util :: Class Diagnostics

    Class Diagnostics


    object --+
             |
            Diagnostics
    

    Class holding runtime diagnostic information.

    Diagnostic information is information that is useful to get from users for debugging purposes. I'm consolidating it all here into one object.

Instance Methods

__init__(self)
      Constructor for the Diagnostics class.
__repr__(self)
      Official string representation for class instance.
__str__(self)
      Informal string representation for class instance.
getValues(self)
      Get a map containing all of the diagnostic values.
printDiagnostics(self, fd=sys.stdout, prefix='')
      Pretty-print diagnostic information to a file descriptor.
logDiagnostics(self, method, prefix='')
      Pretty-print diagnostic information using a logger method.
_buildDiagnosticLines(self, prefix='')
      Build a set of pretty-printed diagnostic lines.
_getVersion(self)
      Property target to get the Cedar Backup version.
_getInterpreter(self)
      Property target to get the Python interpreter version.
_getEncoding(self)
      Property target to get the filesystem encoding.
_getPlatform(self)
      Property target to get the operating system platform.
_getLocale(self)
      Property target to get the default locale that is in effect.
_getTimestamp(self)
      Property target to get a current date/time stamp.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods

_getMaxLength(values)
      Get the maximum length from among a list of strings.

Properties
      version
    Cedar Backup version.
      interpreter
    Python interpreter version.
      platform
    Platform identifying information.
      encoding
    Filesystem encoding that is in effect.
      locale
    Locale that is in effect.
      timestamp
    Current timestamp.

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)


    Constructor for the Diagnostics class.

    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    getValues(self)


    Get a map containing all of the diagnostic values.

    Returns:
    Map from diagnostic name to diagnostic value.

    printDiagnostics(self, fd=sys.stdout, prefix='')


    Pretty-print diagnostic information to a file descriptor.

    Parameters:
    • fd - File descriptor used to print information.
    • prefix - Prefix string (if any) to place onto printed lines

    Note: The fd is used rather than print to facilitate unit testing.

    logDiagnostics(self, method, prefix='')


    Pretty-print diagnostic information using a logger method.

    Parameters:
    • method - Logger method to use for logging (i.e. logger.info)
    • prefix - Prefix string (if any) to place onto printed lines

    _buildDiagnosticLines(self, prefix='')


    Build a set of pretty-printed diagnostic lines.

    Parameters:
    • prefix - Prefix string (if any) to place onto printed lines
    Returns:
    List of strings, not terminated by newlines.

Property Details

    version

    Cedar Backup version.

    Get Method:
    _getVersion(self) - Property target to get the Cedar Backup version.

    interpreter

    Python interpreter version.

    Get Method:
    _getInterpreter(self) - Property target to get the Python interpreter version.

    platform

    Platform identifying information.

    Get Method:
    _getPlatform(self) - Property target to get the operating system platform.

    encoding

    Filesystem encoding that is in effect.

    Get Method:
    _getEncoding(self) - Property target to get the filesystem encoding.

    locale

    Locale that is in effect.

    Get Method:
    _getLocale(self) - Property target to get the default locale that is in effect.

    timestamp

    Current timestamp.

    Get Method:
    _getTimestamp(self) - Property target to get a current date/time stamp.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.release-module.html

CedarBackup2.release
    Package CedarBackup2 :: Module release

    Module release


    Provides location to maintain version information.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Variables
      AUTHOR = 'Kenneth J. Pronovici'
    Author of software.
      EMAIL = 'pronovic@ieee.org'
    Email address of author.
      COPYRIGHT = '2004-2011,2013-2016'
    Copyright date.
      VERSION = '2.26.5'
    Software version.
      DATE = '02 Jan 2016'
    Software release date.
      URL = 'https://bitbucket.org/cedarsolutions/cedar-backup2'
    URL of Cedar Backup webpage.
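The variables listed above are plain module-level constants.  A minimal sketch of how such a release module is typically consumed (the versionBanner helper is hypothetical, not part of the real module; the constant values are copied from the listing above):

```python
# Constants copied from the documented release module.
AUTHOR = 'Kenneth J. Pronovici'
VERSION = '2.26.5'
DATE = '02 Jan 2016'
URL = 'https://bitbucket.org/cedarsolutions/cedar-backup2'

def versionBanner():
   """Hypothetical helper: build a one-line banner from the constants."""
   return "Cedar Backup version %s (%s)" % (VERSION, DATE)

print(versionBanner())   # Cedar Backup version 2.26.5 (02 Jan 2016)
```

Keeping version metadata in one module means the CLI, setup script, and documentation can all read a single source of truth.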
      __package__ = None
CedarBackup2-2.26.5/doc/interface/CedarBackup2.tools-pysrc.html

CedarBackup2.tools
    Package CedarBackup2 :: Package tools

    Source Code for Package CedarBackup2.tools

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Official Cedar Backup Tools
# Purpose  : Provides package initialization
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Official Cedar Backup Tools

This package provides official Cedar Backup tools.  Tools are things that feel
a little like extensions, but don't fit the normal mold of extensions.  For
instance, they might not be intended to run from cron, or might need to interact
dynamically with the user (i.e. accept user input).

Tools are usually scripts that are run directly from the command line, just
like the main C{cback} script.  Like the C{cback} script, the majority of a
tool is implemented in a .py module, and then the script just invokes the
module's C{cli()} function.  The actual scripts for tools are distributed in
the util/ directory.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""


########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup2.tools import *' will just import the modules listed
# in the __all__ variable.

__all__ = [ 'span', 'amazons3', ]
    

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writer-pysrc.html

CedarBackup2.writer
    Package CedarBackup2 :: Module writer

    Source Code for Module CedarBackup2.writer

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides interface backwards compatibility.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides interface backwards compatibility.

In Cedar Backup 2.10.0, a refactoring effort took place while adding code to
support DVD hardware.  All of the writer functionality was moved to the
writers/ package.  This mostly-empty file remains to preserve the Cedar Backup
library interface.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

########################################################################
# Imported modules
########################################################################

# pylint: disable=W0611
from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed
from CedarBackup2.writers.cdwriter import MediaDefinition, MediaCapacity, CdWriter
from CedarBackup2.writers.cdwriter import MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80
    

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.mysql-module.html

mysql

    Module mysql


    Classes

    LocalConfig
    MysqlConfig

    Functions

    backupDatabase
    executeAction

    Variables

    MYSQLDUMP_COMMAND
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.split-pysrc.html

CedarBackup2.extend.split
    Package CedarBackup2 :: Package extend :: Module split

    Source Code for Module CedarBackup2.extend.split

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2007,2010,2013 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Official Cedar Backup Extensions
# Purpose  : Provides an extension to split up large files in staging directories.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides an extension to split up large files in staging directories.

When this extension is executed, it will look through the configured Cedar
Backup staging directory for files exceeding a specified size limit, and split
them down into smaller files using the 'split' utility.  Any directory which
has already been split (as indicated by the C{cback.split} file) will be
ignored.

This extension requires a new configuration section <split> and is intended
to be run immediately after the standard stage action or immediately before the
standard store action.  Aside from its own configuration, it requires the
options and staging configuration sections in the standard Cedar Backup
configuration file.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

########################################################################
# Imported modules
########################################################################

# System modules
import os
import re
import logging

# Cedar Backup modules
from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership
from CedarBackup2.xmlutil import createInputDom, addContainerNode
from CedarBackup2.xmlutil import readFirstChild
from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles
from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.extend.split")

SPLIT_COMMAND = [ "split", ]
SPLIT_INDICATOR = "cback.split"
    
    81 82 83 ######################################################################## 84 # SplitConfig class definition 85 ######################################################################## 86 87 -class SplitConfig(object):
    88 89 """ 90 Class representing split configuration. 91 92 Split configuration is used for splitting staging directories. 93 94 The following restrictions exist on data in this class: 95 96 - The size limit must be a ByteQuantity 97 - The split size must be a ByteQuantity 98 99 @sort: __init__, __repr__, __str__, __cmp__, sizeLimit, splitSize 100 """ 101
    102 - def __init__(self, sizeLimit=None, splitSize=None):
    103 """ 104 Constructor for the C{SplitCOnfig} class. 105 106 @param sizeLimit: Size limit of the files, in bytes 107 @param splitSize: Size that files exceeding the limit will be split into, in bytes 108 109 @raise ValueError: If one of the values is invalid. 110 """ 111 self._sizeLimit = None 112 self._splitSize = None 113 self.sizeLimit = sizeLimit 114 self.splitSize = splitSize
    115
    116 - def __repr__(self):
    117 """ 118 Official string representation for class instance. 119 """ 120 return "SplitConfig(%s, %s)" % (self.sizeLimit, self.splitSize)
    121
    122 - def __str__(self):
    123 """ 124 Informal string representation for class instance. 125 """ 126 return self.__repr__()
    127
    128 - def __cmp__(self, other):
    129 """ 130 Definition of equals operator for this class. 131 Lists within this class are "unordered" for equality comparisons. 132 @param other: Other object to compare to. 133 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 134 """ 135 if other is None: 136 return 1 137 if self.sizeLimit != other.sizeLimit: 138 if self.sizeLimit < other.sizeLimit: 139 return -1 140 else: 141 return 1 142 if self.splitSize != other.splitSize: 143 if self.splitSize < other.splitSize: 144 return -1 145 else: 146 return 1 147 return 0
    148
    149 - def _setSizeLimit(self, value):
    150 """ 151 Property target used to set the size limit. 152 If not C{None}, the value must be a C{ByteQuantity} object. 153 @raise ValueError: If the value is not a C{ByteQuantity} 154 """ 155 if value is None: 156 self._sizeLimit = None 157 else: 158 if not isinstance(value, ByteQuantity): 159 raise ValueError("Value must be a C{ByteQuantity} object.") 160 self._sizeLimit = value
    161
    162 - def _getSizeLimit(self):
    163 """ 164 Property target used to get the size limit. 165 """ 166 return self._sizeLimit
    167
    168 - def _setSplitSize(self, value):
    169 """ 170 Property target used to set the split size. 171 If not C{None}, the value must be a C{ByteQuantity} object. 172 @raise ValueError: If the value is not a C{ByteQuantity} 173 """ 174 if value is None: 175 self._splitSize = None 176 else: 177 if not isinstance(value, ByteQuantity): 178 raise ValueError("Value must be a C{ByteQuantity} object.") 179 self._splitSize = value
    180
    181 - def _getSplitSize(self):
    182 """ 183 Property target used to get the split size. 184 """ 185 return self._splitSize
    186 187 sizeLimit = property(_getSizeLimit, _setSizeLimit, None, doc="Size limit, as a ByteQuantity") 188 splitSize = property(_getSplitSize, _setSplitSize, None, doc="Split size, as a ByteQuantity")
    189
    190 191 ######################################################################## 192 # LocalConfig class definition 193 ######################################################################## 194 195 -class LocalConfig(object):
    196 197 """ 198 Class representing this extension's configuration document. 199 200 This is not a general-purpose configuration object like the main Cedar 201 Backup configuration object. Instead, it just knows how to parse and emit 202 split-specific configuration values. Third parties who need to read and 203 write configuration related to this extension should access it through the 204 constructor, C{validate} and C{addConfig} methods. 205 206 @note: Lists within this class are "unordered" for equality comparisons. 207 208 @sort: __init__, __repr__, __str__, __cmp__, split, validate, addConfig 209 """ 210
    211 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    212 """ 213 Initializes a configuration object. 214 215 If you initialize the object without passing either C{xmlData} or 216 C{xmlPath} then configuration will be empty and will be invalid until it 217 is filled in properly. 218 219 No reference to the original XML data or original path is saved off by 220 this class. Once the data has been parsed (successfully or not) this 221 original information is discarded. 222 223 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 224 method will be called (with its default arguments) against configuration 225 after successfully parsing any passed-in XML. Keep in mind that even if 226 C{validate} is C{False}, it might not be possible to parse the passed-in 227 XML document if lower-level validations fail. 228 229 @note: It is strongly suggested that the C{validate} option always be set 230 to C{True} (the default) unless there is a specific need to read in 231 invalid configuration from disk. 232 233 @param xmlData: XML data representing configuration. 234 @type xmlData: String data. 235 236 @param xmlPath: Path to an XML file on disk. 237 @type xmlPath: Absolute path to a file on disk. 238 239 @param validate: Validate the document after parsing it. 240 @type validate: Boolean true/false. 241 242 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 243 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 244 @raise ValueError: If the parsed configuration document is not valid. 245 """ 246 self._split = None 247 self.split = None 248 if xmlData is not None and xmlPath is not None: 249 raise ValueError("Use either xmlData or xmlPath, but not both.") 250 if xmlData is not None: 251 self._parseXmlData(xmlData) 252 if validate: 253 self.validate() 254 elif xmlPath is not None: 255 xmlData = open(xmlPath).read() 256 self._parseXmlData(xmlData) 257 if validate: 258 self.validate()
   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.split)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()
   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.split != other.split:
         if self.split < other.split:
            return -1
         else:
            return 1
      return 0
   def _setSplit(self, value):
      """
      Property target used to set the split configuration value.
      If not C{None}, the value must be a C{SplitConfig} object.
      @raise ValueError: If the value is not a C{SplitConfig}
      """
      if value is None:
         self._split = None
      else:
         if not isinstance(value, SplitConfig):
            raise ValueError("Value must be a C{SplitConfig} object.")
         self._split = value

   def _getSplit(self):
      """
      Property target used to get the split configuration value.
      """
      return self._split

   split = property(_getSplit, _setSplit, None, "Split configuration in terms of a C{SplitConfig} object.")
   def validate(self):
      """
      Validates configuration represented by the object.

      Split configuration must be filled in.  Within that, both the size limit
      and split size must be filled in.

      @raise ValueError: If one of the validations fails.
      """
      if self.split is None:
         raise ValueError("Split section is required.")
      if self.split.sizeLimit is None:
         raise ValueError("Size limit must be set.")
      if self.split.splitSize is None:
         raise ValueError("Split size must be set.")
   def addConfig(self, xmlDom, parentNode):
      """
      Adds a <split> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         sizeLimit      //cb_config/split/size_limit
         splitSize      //cb_config/split/split_size

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.split is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "split")
         addByteQuantityNode(xmlDom, sectionNode, "size_limit", self.split.sizeLimit)
         addByteQuantityNode(xmlDom, sectionNode, "split_size", self.split.splitSize)
   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the split configuration section.

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._split = LocalConfig._parseSplit(parentNode)

   @staticmethod
   def _parseSplit(parent):
      """
      Parses a split configuration section.

      We read the following individual fields::

         sizeLimit      //cb_config/split/size_limit
         splitSize      //cb_config/split/split_size

      @param parent: Parent node to search beneath.

      @return: C{SplitConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      split = None
      section = readFirstChild(parent, "split")
      if section is not None:
         split = SplitConfig()
         split.sizeLimit = readByteQuantity(section, "size_limit")
         split.splitSize = readByteQuantity(section, "split_size")
      return split
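Taken together, C{addConfig} and C{_parseSplit} round-trip a configuration section shaped like the following.  This is a hypothetical fragment inferred from the C{//cb_config/split/...} paths documented above; byte quantities with units (e.g. "2.5 GB") are the form accepted elsewhere in Cedar Backup configuration.

```xml
<cb_config>
   <split>
      <size_limit>2.5 GB</size_limit>
      <split_size>100 MB</split_size>
   </split>
</cb_config>
```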
########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
   """
   Executes the split backup action.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there are I/O problems reading or writing files
   """
   logger.debug("Executing split extended action.")
   if config.options is None or config.stage is None:
      raise ValueError("Cedar Backup configuration is not properly filled in.")
   local = LocalConfig(xmlPath=configPath)
   dailyDirs = findDailyDirs(config.stage.targetDir, SPLIT_INDICATOR)
   for dailyDir in dailyDirs:
      _splitDailyDir(dailyDir, local.split.sizeLimit, local.split.splitSize,
                     config.options.backupUser, config.options.backupGroup)
      writeIndicatorFile(dailyDir, SPLIT_INDICATOR, config.options.backupUser, config.options.backupGroup)
   logger.info("Executed the split extended action successfully.")
##############################
# _splitDailyDir() function
##############################

def _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup):
   """
   Splits large files in a daily staging directory.

   Files that match INDICATOR_PATTERNS (i.e. C{"cback.store"},
   C{"cback.stage"}, etc.) are assumed to be indicator files and are ignored.
   All other files are split.

   @param dailyDir: Daily directory whose contents should be split
   @param sizeLimit: Size limit, in bytes
   @param splitSize: Split size, in bytes
   @param backupUser: User that target files should be owned by
   @param backupGroup: Group that target files should be owned by

   @raise ValueError: If the daily staging directory does not exist.
   """
   logger.debug("Begin splitting contents of [%s].", dailyDir)
   fileList = getBackupFiles(dailyDir) # ignores indicator files
   for path in fileList:
      size = float(os.stat(path).st_size)
      if size > sizeLimit:
         _splitFile(path, splitSize, backupUser, backupGroup, removeSource=True)
   logger.debug("Completed splitting contents of [%s].", dailyDir)
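The size check above is just a stat-based comparison between each file's size and the configured limit.  A minimal standalone sketch of that filter (the helper name is hypothetical; in the extension the limit comes from configuration as a C{ByteQuantity}, here it is a plain integer count of bytes):

```python
import os

def filesOverLimit(paths, sizeLimit):
    """Return the files whose size in bytes exceeds sizeLimit, mirroring
    the os.stat() comparison used by _splitDailyDir above."""
    return [path for path in paths if os.stat(path).st_size > sizeLimit]
```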
########################
# _splitFile() function
########################

def _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False):
   """
   Splits the source file into chunks of the indicated size.

   The split files will be owned by the indicated backup user and group.  If
   C{removeSource} is C{True}, then the source file will be removed after it is
   successfully split.

   @param sourcePath: Absolute path of the source file to split
   @param splitSize: Split size, as a C{ByteQuantity} object
   @param backupUser: User that target files should be owned by
   @param backupGroup: Group that target files should be owned by
   @param removeSource: Indicates whether to remove the source file

   @raise IOError: If there is a problem accessing, splitting or removing the source file.
   """
   cwd = os.getcwd()
   try:
      if not os.path.exists(sourcePath):
         raise ValueError("Source path [%s] does not exist." % sourcePath)
      dirname = os.path.dirname(sourcePath)
      filename = os.path.basename(sourcePath)
      prefix = "%s_" % filename
      bytes = int(splitSize.bytes) # pylint: disable=W0622
      os.chdir(dirname) # need to operate from directory that we want files written to
      command = resolveCommand(SPLIT_COMMAND)
      args = [ "--verbose", "--numeric-suffixes", "--suffix-length=5", "--bytes=%d" % bytes, filename, prefix, ]
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=False)
      if result != 0:
         raise IOError("Error [%d] calling split for [%s]." % (result, sourcePath))
      pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % prefix)
      match = pattern.search(output[-1:][0])
      if match is None:
         raise IOError("Unable to parse output from split command.")
      value = int(match.group(3).strip())
      for index in range(0, value):
         path = "%s%05d" % (prefix, index)
         if not os.path.exists(path):
            raise IOError("After call to split, expected file [%s] does not exist." % path)
         changeOwnership(path, backupUser, backupGroup)
      if removeSource:
         if os.path.exists(sourcePath):
            try:
               os.remove(sourcePath)
               logger.debug("Completed removing old file [%s].", sourcePath)
            except:
               raise IOError("Failed to remove file [%s] after splitting it." % (sourcePath))
   finally:
      os.chdir(cwd)
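The regular expression in C{_splitFile} keys off the last "creating file" line that C{split --verbose} prints.  A self-contained sketch of that parse (the function name is hypothetical; C{re.escape} is added here for safety, whereas the extension interpolates the prefix directly).  Both quoting styles are accepted because newer GNU coreutils print C{'file'} while older versions printed C{`file'}, per the v2.21.1 changelog entry:

```python
import re

def parseSplitSuffix(lastLine, prefix):
    """Parse the numeric suffix out of split's final "creating file" line,
    the same way _splitFile does.  The [`'] character class accepts both
    the old backquote and the new straight-quote output styles."""
    pattern = re.compile(r"(creating file [`'])(%s)(.*)(')" % re.escape(prefix))
    match = pattern.search(lastLine)
    if match is None:
        raise IOError("Unable to parse output from split command.")
    return int(match.group(3).strip())

# Chunk files are then named prefix00000, prefix00001, ... up to the
# last suffix that split reported.
suffix = parseSplitSuffix("creating file 'backup.tar_00004'", "backup.tar_")
chunks = ["%s%05d" % ("backup.tar_", i) for i in range(suffix + 1)]
```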

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mysql-module.html

Module mysql
    Provides an extension to back up MySQL databases.

    This is a Cedar Backup extension used to back up MySQL databases via the Cedar Backup command line. It requires a new configuration section <mysql> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. Note that this code always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I'll update this extension or provide another.

    The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably the best choice.

    The extension accepts a username and password in configuration. However, you probably do not want to provide those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

      [mysqldump]
      user     = root
      password = <secret>
    

    Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
    MysqlConfig - Class representing MySQL configuration.
    LocalConfig - Class representing this extension's configuration document.

Functions
    executeAction(configPath, options, config) - Executes the MySQL backup action.
    _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None) - Backs up an individual MySQL database, or all databases.
    _getOutputFile(targetDir, database, compressMode) - Opens the output file used for saving the MySQL dump.
    backupDatabase(user, password, backupFile, database=None) - Backs up an individual MySQL database, or all databases.

Variables
    logger = logging.getLogger("CedarBackup2.log.extend.mysql")
    MYSQLDUMP_COMMAND = ['mysqldump']
    __package__ = 'CedarBackup2.extend'

Function Details

    executeAction(configPath, options, config)


    Executes the MySQL backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None)


    Backs up an individual MySQL database, or all databases.

    This internal method wraps the public method and adds some functionality, like figuring out a filename, etc.

    Parameters:
    • targetDir - Directory into which backups should be written.
    • compressMode - Compress mode to be used for backed-up files.
    • user - User to use for connecting to the database (if any).
    • password - Password associated with user (if any).
    • backupUser - User to own resulting file.
    • backupGroup - Group to own resulting file.
    • database - Name of database, or None for all databases.
    Returns:
    Name of the generated backup file.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the MySQL dump.

    _getOutputFile(targetDir, database, compressMode)


    Opens the output file used for saving the MySQL dump.

The filename is either "mysqldump.txt" or "mysqldump-<database>.txt". The ".gz" or ".bz2" extension is added when the corresponding compress mode is in effect.

    Parameters:
    • targetDir - Target directory to write file in.
    • database - Name of the database (if any)
    • compressMode - Compress mode to be used for backed-up files.
    Returns:
    Tuple of (Output file object, filename)
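The naming rule above can be sketched as a pure function.  The helper name below is a hypothetical stand-in for the internal logic, and the compress mode values follow the gzip/bzip2 modes mentioned in the module overview:

```python
import os

def mysqldumpFilename(targetDir, database, compressMode):
    """Build the dump filename per the rule above: mysqldump.txt or
    mysqldump-<database>.txt, plus .gz or .bz2 when compression is on."""
    if database is None:
        filename = "mysqldump.txt"
    else:
        filename = "mysqldump-%s.txt" % database
    if compressMode == "gzip":
        filename += ".gz"
    elif compressMode == "bzip2":
        filename += ".bz2"
    return os.path.join(targetDir, filename)
```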

    backupDatabase(user, password, backupFile, database=None)


    Backs up an individual MySQL database, or all databases.

This function backs up either a named local MySQL database or all local MySQL databases, using the passed-in user and password (if provided) for connectivity. This function call always results in a full backup. There is no facility for incremental backups.

    The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Often, the "root" database user will be used when backing up all databases. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) all of the databases that will be backed up.

    This function accepts a username and password. However, you probably do not want to pass those values in. This is because they will be provided to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, this would be done by putting a stanza like this in /root/.my.cnf, to provide mysqldump with the root database username and its password:

      [mysqldump]
      user     = root
      password = <secret>
    

    If you are executing this function as some system user other than root, then the .my.cnf file would be placed in the home directory of that user. In either case, make sure to set restrictive permissions (typically, mode 0600) on .my.cnf to make sure that other users cannot read the file.

    Parameters:
    • user (String representing MySQL username, or None) - User to use for connecting to the database (if any)
    • password (String representing MySQL password, or None) - Password associated with user (if any)
• backupFile (Python file object as from open() or file().) - File to use for writing the backup.
    • database (String representing database name, or None for all databases.) - Name of the database to be backed up.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the MySQL dump.
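Because backupFile may be any writable file-like object, compressed output is just a matter of handing in something like a GzipFile, as the documentation above notes.  A minimal sketch of the caller's side of that pattern, with placeholder bytes standing in for a real mysqldump stream:

```python
import gzip
import os
import tempfile

# The caller owns the file object: open it, pass it to backupDatabase(),
# and close it afterwards.  Placeholder data stands in for the dump here.
path = os.path.join(tempfile.mkdtemp(), "mysqldump.txt.gz")
backupFile = gzip.open(path, "wb")
try:
    backupFile.write(b"-- MySQL dump output would be streamed here\n")
finally:
    backupFile.close()
```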

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.PostActionHook-class.html

Class PostActionHook
    object --+    
             |    
    ActionHook --+
                 |
                PostActionHook
    

Class representing a post-action hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a post-action hook is executed after the named action.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The shell command must be a non-empty string.

The internal after instance variable is always set to True in this class.
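The data restrictions above are easy to express as a standalone check (validateHook is a hypothetical helper for illustration, not part of the class):

```python
import re

def validateHook(action, command):
    """Enforce the hook data restrictions listed above: the action name
    must be a non-empty string of lower-case letters and digits, and the
    shell command must be a non-empty string."""
    if not action or re.match(r"^[a-z0-9]+$", action) is None:
        raise ValueError("Action name must be lower-case letters and digits.")
    if not command:
        raise ValueError("Shell command must be a non-empty string.")
```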

Instance Methods
    __init__(self, action=None, command=None) - Constructor for the PostActionHook class.
    __repr__(self) - Official string representation for class instance.

    Inherited from ActionHook: __str__, __cmp__
    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
    Inherited from ActionHook: action, command, before, after
    Inherited from object: __class__

Method Details

    __init__(self, action=None, command=None)
    (Constructor)


    Constructor for the PostActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion.BDBRepository-class.html

Class BDBRepository
    object --+    
             |    
    Repository --+
                 |
                BDBRepository
    

    Class representing Subversion BDB (Berkeley Database) repository configuration. This object is deprecated. Use a simple Repository instead.

Instance Methods
    __init__(self, repositoryPath=None, collectMode=None, compressMode=None) - Constructor for the BDBRepository class.
    __repr__(self) - Official string representation for class instance.

    Inherited from Repository: __cmp__, __str__
    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
    Inherited from Repository: collectMode, compressMode, repositoryPath, repositoryType
    Inherited from object: __class__

Method Details

    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)


    Constructor for the BDBRepository class.

    Parameters:
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.validate-module.html

Module validate
    Implements the standard 'validate' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
    executeValidate(configPath, options, config) - Executes the validate action.
    _checkDir(path, writable, logfunc, prefix) - Checks that the indicated directory is OK.
    _validateReference(config, logfunc) - Execute runtime validations on reference configuration.
    _validateOptions(config, logfunc) - Execute runtime validations on options configuration.
    _validateCollect(config, logfunc) - Execute runtime validations on collect configuration.
    _validateStage(config, logfunc) - Execute runtime validations on stage configuration.
    _validateStore(config, logfunc) - Execute runtime validations on store configuration.
    _validatePurge(config, logfunc) - Execute runtime validations on purge configuration.
    _validateExtensions(config, logfunc) - Execute runtime validations on extensions configuration.

Variables
    logger = logging.getLogger("CedarBackup2.log.actions.validate")
    __package__ = 'CedarBackup2.actions'

Function Details

    executeValidate(configPath, options, config)


    Executes the validate action.

This action validates each of the individual sections in the config file. This is a "runtime" validation. The config file itself is already valid in a structural sense, so what we check here is that we can actually use the configuration without any problems.

    There's a separate validation function for each of the configuration sections. Each validation function returns a true/false indication for whether configuration was valid, and then logs any configuration problems it finds. This way, one pass over configuration indicates most or all of the obvious problems, rather than finding just one problem at a time.

    Any reported problems will be logged at the ERROR level normally, or at the INFO level if the quiet flag is enabled.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - If some configuration value is invalid.

    _checkDir(path, writable, logfunc, prefix)


    Checks that the indicated directory is OK.

    The path must exist, must be a directory, must be readable and executable, and must optionally be writable.

    Parameters:
    • path - Path to check.
    • writable - Check that path is writable.
    • logfunc - Function to use for logging errors.
    • prefix - Prefix to use on logged errors.
    Returns:
    True if the directory is OK, False otherwise.

    _validateReference(config, logfunc)


    Execute runtime validations on reference configuration.

    We only validate that reference configuration exists at all.

    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, false otherwise.

    _validateOptions(config, logfunc)


    Execute runtime validations on options configuration.

    The following validations are enforced:

    • The options section must exist
    • The working directory must exist and must be writable
    • The backup user and backup group must exist
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, false otherwise.

    _validateCollect(config, logfunc)


    Execute runtime validations on collect configuration.

    The following validations are enforced:

    • The target directory must exist and must be writable
    • Each of the individual collect directories must exist and must be readable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, false otherwise.

    _validateStage(config, logfunc)


    Execute runtime validations on stage configuration.

    The following validations are enforced:

    • The target directory must exist and must be writable
    • Each local peer's collect directory must exist and must be readable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    Note: We currently do not validate anything having to do with remote peers, since we don't have a straightforward way of doing it. It would require adding an rsh command rather than just an rcp command to configuration, and that just doesn't seem worth it right now.

    _validateStore(config, logfunc)


    Execute runtime validations on store configuration.

    The following validations are enforced:

    • The source directory must exist and must be readable
    • The backup device (path and SCSI device) must be valid
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validatePurge(config, logfunc)


    Execute runtime validations on purge configuration.

    The following validations are enforced:

    • Each purge directory must exist and must be writable
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

    _validateExtensions(config, logfunc)


    Execute runtime validations on extensions configuration.

    The following validations are enforced:

    • Each indicated extension function must exist.
    Parameters:
    • config - Program configuration.
    • logfunc - Function to use for logging errors
    Returns:
    True if configuration is valid, False otherwise.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.filesystem.PurgeItemList-class.html

Class PurgeItemList
    object --+        
             |        
          list --+    
                 |    
    FilesystemList --+
                     |
                    PurgeItemList
    

    List of files and directories to be purged.

    A PurgeItemList is a FilesystemList containing a list of files and directories to be purged. On top of the generic functionality provided by FilesystemList, this class adds functionality to remove items that are too young to be purged, and to actually remove each item in the list from the filesystem.

The other main difference is that when you add a directory's contents to a purge item list, the directory itself is not added to the list. This way, if someone asks to purge within /opt/backup/collect, that directory doesn't get removed once all of the files within it are gone.

Instance Methods
    __init__(self) - Initializes a list with no configured exclusions.  Returns a new empty list.
    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False) - Adds the contents of a directory to the list.
    removeYoungFiles(self, daysOld) - Removes from the list files younger than a certain age (in days).
    purgeItems(self) - Purges all items in the list.

    Inherited from FilesystemList: addDir, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify
    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort
    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Class Variables
    Inherited from list: __hash__

Properties
    Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile
    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)


    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)


    Adds the contents of a directory to the list.

The path must exist and must be a directory or a link to a directory. The contents of the directory (but not the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory's immediate contents to be added, then pass in recursive=False.

    Parameters:
    • path (String representing a path on disk) - Directory path whose contents should be added to the list
    • recursive (Boolean value) - Indicates whether directory contents should be added recursively.
    • addSelf - Ignored in this subclass.
    • linkDepth (Integer value, where zero means not to follow any soft links) - Depth of soft links that should be followed
    • dereference (Boolean value) - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Overrides: FilesystemList.addDirContents
    Notes:
    • If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list.
• If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links within the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc.
    • Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored.
    • The excludeLinks flag only controls whether any given soft link path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • The excludeDirs flag only controls whether any given directory path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • If you call this method on a link to a directory that link will never be dereferenced (it may, however, be followed).
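The linkDepth semantics described in the notes above can be illustrated with a small stand-alone sketch. This is not the library's implementation: the `collect` function, its flat traversal, and its return value are invented here purely to demonstrate how a depth budget limits soft-link recursion.

```python
import os

def collect(path, linkDepth=0):
    """Collect paths under 'path', following symlinked directories
    only while linkDepth > 0, decrementing at each link followed."""
    results = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.islink(full):
            if not os.path.exists(full):
                continue                       # invalid links are silently ignored
            if os.path.isdir(full):
                if linkDepth > 0:              # depth 0: do not follow any soft links
                    results.extend(collect(full, linkDepth - 1))
            else:
                results.append(full)
        elif os.path.isdir(full):
            results.append(full)
            results.extend(collect(full, linkDepth))
        else:
            results.append(full)
    return results
```

With linkDepth=0 no symlinked directory is entered; with linkDepth=1 only links directly inside the passed-in directory are followed, and the budget shrinks by one at each followed link.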

    removeYoungFiles(self, daysOld)

    source code 

    Removes from the list files younger than a certain age (in days).

    Any file whose "age" in days is less than (<) the value of the daysOld parameter will be removed from the list so that it will not be purged later when purgeItems is called. Directories and soft links will be ignored.

    The "age" of a file is the amount of time since the file was last used, per the most recent of the file's st_atime and st_mtime values.

    Parameters:
    • daysOld (Integer value >= 0.) - Minimum age of files that are to be kept in the list.
    Returns:
    Number of entries removed

    Note: Some people find the "sense" of this method confusing or "backwards". Keep in mind that this method is used to remove items from the list, not from the filesystem! It removes from the list those items that you would not want to purge because they are too young. For example, passing in a daysOld of zero (0) would remove no files from the list, which would result in all of the files being purged later. I would be happy to make a synonym of this method with an easier-to-understand "sense", if someone can suggest one.
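The "age" rule described above can be sketched as follows. This is a stand-alone illustration, not the library's code; the helper names `ageInWholeDays` and `keepInList` are invented, and the clamp to non-negative mirrors the v2.26.4 changelog note that ageInWholeDays can never be negative.

```python
import os
import time

def ageInWholeDays(path, now=None):
    """Days since the file was last used: the most recent of
    st_atime and st_mtime, truncated to whole days, never negative."""
    now = time.time() if now is None else now
    st = os.stat(path)
    lastUsed = max(st.st_atime, st.st_mtime)
    return max(int((now - lastUsed) / 86400.0), 0)

def keepInList(path, daysOld):
    """A file stays in the purge list (and will be purged later) only
    if its age is at least daysOld; younger files leave the list."""
    return ageInWholeDays(path) >= daysOld
```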

    purgeItems(self)

    source code 

    Purges all items in the list.

    Every item in the list will be purged. Directories in the list will not be purged recursively, and hence will only be removed if they are empty. Errors will be ignored.

    To facilitate easy removal of directories that will end up being empty, the delete process happens in two passes: files first (including soft links), then directories.

    Returns:
    Tuple containing count of (files, dirs) removed
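The two-pass delete described above can be sketched like this. It is illustrative only, not the library's implementation: error handling is simplified to a bare try/except, and sorting directories deepest-first stands in for whatever ordering the real code uses.

```python
import os

def purgeItems(paths):
    """Delete files (including soft links) first, then directories,
    so that directories which end up empty can be removed with rmdir.
    Errors are ignored; non-empty directories simply survive rmdir."""
    files = dirs = 0
    for path in paths:                                  # pass 1: files and links
        if os.path.islink(path) or os.path.isfile(path):
            try:
                os.remove(path)
                files += 1
            except OSError:
                pass
    for path in sorted(paths, key=len, reverse=True):   # pass 2: deepest dirs first
        if os.path.isdir(path):
            try:
                os.rmdir(path)                          # only succeeds if empty
                dirs += 1
            except OSError:
                pass
    return (files, dirs)
```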

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.util-module.html: CedarBackup2.actions.util
    Package CedarBackup2 :: Package actions :: Module util

    Module util

    source code

    Implements action-related utilities


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    findDailyDirs(stagingDir, indicatorFile)
    Returns a list of all daily staging directories that do not contain the specified indicator file.
    source code
     
    createWriter(config)
    Creates a writer object based on current configuration.
    source code
     
    writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup)
    Writes an indicator file into a target directory.
    source code
     
    getBackupFiles(targetDir)
    Gets a list of backup files in a target directory.
    source code
     
    checkMediaState(storeConfig)
    Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup.
    source code
     
    initializeMediaState(config)
    Initializes state of the media in the backup device so Cedar Backup can recognize it.
    source code
     
    buildMediaLabel()
    Builds a media label to be used on Cedar Backup media.
    source code
     
    _getDeviceType(config)
    Gets the device type that should be used for storing.
    source code
     
    _getMediaType(config)
    Gets the media type that should be used for storing.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.util")
      MEDIA_LABEL_PREFIX = 'CEDAR BACKUP'
      __package__ = 'CedarBackup2.actions'
    Function Details

    findDailyDirs(stagingDir, indicatorFile)

    source code 

    Returns a list of all daily staging directories that do not contain the specified indicator file.

    Parameters:
    • stagingDir - Configured staging directory (config.targetDir)
    • indicatorFile - Name of the indicator file to look for
    Returns:
    List of absolute paths to daily staging directories.
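A minimal sketch of this behavior, assuming a flat staging layout for illustration (the real staging tree is typically nested by date, so this is not the library's implementation):

```python
import os

def findDailyDirs(stagingDir, indicatorFile):
    """Subdirectories of stagingDir that do not yet contain the
    indicator file (for instance "cback.store")."""
    results = []
    for name in sorted(os.listdir(stagingDir)):
        daily = os.path.join(stagingDir, name)
        if os.path.isdir(daily) and not os.path.exists(os.path.join(daily, indicatorFile)):
            results.append(daily)
    return results
```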

    createWriter(config)

    source code 

    Creates a writer object based on current configuration.

    This function creates and returns a writer based on configuration. This is done to abstract action functionality from knowing what kind of writer is in use. Since all writers implement the same interface, there's no need for actions to care which one they're working with.

    Currently, the cdwriter and dvdwriter device types are allowed. An exception will be raised if any other device type is used.

    This function also checks to make sure that the device isn't mounted before creating a writer object for it. Experience shows that sometimes if the device is mounted, we have problems with the backup. We may as well do the check here first, before instantiating the writer.

    Parameters:
    • config - Config object.
    Returns:
    Writer that can be used to write a directory to some media.
    Raises:
    • ValueError - If there is a problem getting the writer.
    • IOError - If there is a problem creating the writer object.

    writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup)

    source code 

    Writes an indicator file into a target directory.

    Parameters:
    • targetDir - Target directory in which to write indicator
    • indicatorFile - Name of the indicator file
    • backupUser - User that indicator file should be owned by
    • backupGroup - Group that indicator file should be owned by
    Raises:
    • IOError - If there is a problem writing the indicator file

    getBackupFiles(targetDir)

    source code 

    Gets a list of backup files in a target directory.

    Files that match INDICATOR_PATTERN (i.e. "cback.store", "cback.stage", etc.) are assumed to be indicator files and are ignored.

    Parameters:
    • targetDir - Directory to look in
    Returns:
    List of backup files in the directory
    Raises:
    • ValueError - If the target directory does not exist
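The filtering rule above can be sketched as follows. The value of INDICATOR_PATTERN here is an assumption made for illustration (the docs only name "cback.store" and "cback.stage" as examples), and the function body is not the library's code.

```python
import os
import re

INDICATOR_PATTERN = [r"cback\..*"]   # assumed pattern list, for illustration only

def getBackupFiles(targetDir):
    """Backup files in targetDir, skipping indicator files."""
    if not os.path.isdir(targetDir):
        raise ValueError("Target directory does not exist.")
    backupFiles = []
    for name in sorted(os.listdir(targetDir)):
        path = os.path.join(targetDir, name)
        if not os.path.isfile(path):
            continue
        if any(re.match(pattern + "$", name) for pattern in INDICATOR_PATTERN):
            continue                 # "cback.store", "cback.stage", etc. are ignored
        backupFiles.append(path)
    return backupFiles
```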

    checkMediaState(storeConfig)

    source code 

    Checks state of the media in the backup device to confirm whether it has been initialized for use with Cedar Backup.

    We can tell whether the media has been initialized by looking at its media label. If the media label starts with MEDIA_LABEL_PREFIX, then it has been initialized.

    The check varies depending on whether the media is rewritable or not. For non-rewritable media, we also accept a None media label, since this kind of media cannot safely be initialized.

    Parameters:
    • storeConfig - Store configuration
    Raises:
    • ValueError - If media is not initialized.
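The documented rule reduces to a small decision table, sketched below. This is an illustration of the described check only, not the library's code; the real function inspects the device rather than taking the label and rewritability as parameters.

```python
MEDIA_LABEL_PREFIX = "CEDAR BACKUP"

def checkMediaState(mediaLabel, rewritable):
    """Media counts as initialized when its label starts with the
    prefix; non-rewritable media may also legitimately carry no label,
    since that kind of media cannot safely be initialized."""
    if mediaLabel is None:
        if rewritable:
            raise ValueError("Media has not been initialized.")
        return
    if not mediaLabel.startswith(MEDIA_LABEL_PREFIX):
        raise ValueError("Media has not been initialized.")
```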

    initializeMediaState(config)

    source code 

    Initializes state of the media in the backup device so Cedar Backup can recognize it.

    This is done by writing a mostly-empty image (it contains a "Cedar Backup" directory) to the media with a known media label.

    Parameters:
    • config - Cedar Backup configuration
    Raises:
    • ValueError - If media could not be initialized.
    • ValueError - If the configured media type is not rewritable

    Note: Only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup.

    buildMediaLabel()

    source code 

    Builds a media label to be used on Cedar Backup media.

    Returns:
    Media label as a string.
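A sketch of a label builder consistent with the MEDIA_LABEL_PREFIX variable documented above. The date suffix format here is an assumption made for illustration; the docs only guarantee that initialized media carries a label starting with the prefix.

```python
import datetime

MEDIA_LABEL_PREFIX = "CEDAR BACKUP"

def buildMediaLabel():
    """Label starting with the well-known prefix; the date suffix
    is an assumed format, shown for illustration only."""
    today = datetime.date.today().strftime("%d %b %Y")
    return "%s %s" % (MEDIA_LABEL_PREFIX, today)
```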

    _getDeviceType(config)

    source code 

    Gets the device type that should be used for storing.

    Use the configured device type if not None, otherwise use config.DEFAULT_DEVICE_TYPE.

    Parameters:
    • config - Config object.
    Returns:
    Device type to be used.

    _getMediaType(config)

    source code 

    Gets the media type that should be used for storing.

    Use the configured media type if not None, otherwise use DEFAULT_MEDIA_TYPE.

    Once we figure out what configuration value to use, we return a media type value that is valid in one of the supported writers:

      MEDIA_CDR_74
      MEDIA_CDRW_74
      MEDIA_CDR_80
      MEDIA_CDRW_80
      MEDIA_DVDPLUSR
      MEDIA_DVDPLUSRW
    
    Parameters:
    • config - Config object.
    Returns:
    Media type to be used as a writer media type value.
    Raises:
    • ValueError - If the media type is not valid.
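The fallback-then-translate logic above can be sketched as follows. The configuration strings, the default value, and the mapping below are hypothetical stand-ins chosen for illustration; only the six writer media type names come from the list above.

```python
DEFAULT_MEDIA_TYPE = "dvd+rw"   # assumed default, for illustration only

# hypothetical mapping from configuration strings to writer media types
_MEDIA_TYPES = {
    "cdr-74": "MEDIA_CDR_74",
    "cdrw-74": "MEDIA_CDRW_74",
    "cdr-80": "MEDIA_CDR_80",
    "cdrw-80": "MEDIA_CDRW_80",
    "dvd+r": "MEDIA_DVDPLUSR",
    "dvd+rw": "MEDIA_DVDPLUSRW",
}

def getMediaType(configuredMediaType):
    """Use the configured media type if not None, otherwise the
    default, then translate it to a writer media type value."""
    mediaType = configuredMediaType if configuredMediaType is not None else DEFAULT_MEDIA_TYPE
    if mediaType not in _MEDIA_TYPES:
        raise ValueError("Invalid media type: %s" % mediaType)
    return _MEDIA_TYPES[mediaType]
```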

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.cli._ManagedActionItem-class.html: CedarBackup2.cli._ManagedActionItem
    Package CedarBackup2 :: Module cli :: Class _ManagedActionItem

    Class _ManagedActionItem

    source code

    object --+
             |
            _ManagedActionItem
    

    Class representing a single action to be executed on a managed peer.

    This class represents a single named action to be executed, and understands how to execute that action.

    Actions to be executed on a managed peer rely on peer configuration and on the full-backup flag. All other configuration takes place on the remote peer itself.


    Note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type.

    Instance Methods
     
    __init__(self, index, name, remotePeers)
    Default constructor.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    executeAction(self, configPath, options, config)
    Executes the managed action associated with an item.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      SORT_ORDER = 1
    Defines a sort order to order properly between types.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, index, name, remotePeers)
    (Constructor)

    source code 

    Default constructor.

    Parameters:
    • index - Index of the item (or None).
    • name - Name of the action that is being executed.
    • remotePeers - List of remote peers on which to execute the action.
    Overrides: object.__init__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. The only thing we compare is the item's index.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
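The documented sorting scheme (first by SORT_ORDER, so _ActionItem sorts before _ManagedActionItem, then by index within type) can be sketched as a stand-alone comparison function. The Item class and the assumption that _ActionItem uses a lower SORT_ORDER than _ManagedActionItem's 1 are illustrative, not the library's code.

```python
import functools

def compareItems(a, b):
    """-1/0/1 comparison by (SORT_ORDER, index) only; all other
    attributes are ignored, mirroring the documented scheme."""
    keyA = (a.SORT_ORDER, a.index)
    keyB = (b.SORT_ORDER, b.index)
    return (keyA > keyB) - (keyA < keyB)

class Item(object):
    """Hypothetical stand-in for _ActionItem/_ManagedActionItem."""
    def __init__(self, sortOrder, index):
        self.SORT_ORDER = sortOrder
        self.index = index

items = [Item(1, 2), Item(0, 5), Item(1, 1), Item(0, 3)]
ordered = sorted(items, key=functools.cmp_to_key(compareItems))
```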

    executeAction(self, configPath, options, config)

    source code 

    Executes the managed action associated with an item.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.
    Raises:
    • Exception - If there is a problem executing the action.
    Notes:
    • Only options.full is actually used. The rest of the arguments exist to satisfy the ActionItem interface.
    • Errors here result in a message logged at ERROR, but no thrown exception. This parallels the stage action, where a problem with one host should not kill the entire backup. Since an error is logged, the administrator will get an email.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.purge-pysrc.html: CedarBackup2.actions.purge
    Package CedarBackup2 :: Package actions :: Module purge

    Source Code for Module CedarBackup2.actions.purge

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Implements the standard 'purge' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'purge' action. 
     40  @sort: executePurge 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import logging 
     51   
     52  # Cedar Backup modules 
     53  from CedarBackup2.filesystem import PurgeItemList 
     54   
     55   
     56  ######################################################################## 
     57  # Module-wide constants and variables 
     58  ######################################################################## 
     59   
     60  logger = logging.getLogger("CedarBackup2.log.actions.purge") 
     61   
     62   
     63  ######################################################################## 
     64  # Public functions 
     65  ######################################################################## 
     66   
     67  ########################## 
     68  # executePurge() function 
     69  ########################## 
     70   
    
    71 -def executePurge(configPath, options, config):
     72     """
     73     Executes the purge backup action.
     74 
     75     For each configured directory, we create a purge item list, remove from the
     76     list anything that's younger than the configured retain days value, and then
     77     purge from the filesystem what's left.
     78 
     79     @param configPath: Path to configuration file on disk.
     80     @type configPath: String representing a path on disk.
     81 
     82     @param options: Program command-line options.
     83     @type options: Options object.
     84 
     85     @param config: Program configuration.
     86     @type config: Config object.
     87 
     88     @raise ValueError: Under many generic error conditions
     89     """
     90     logger.debug("Executing the 'purge' action.")
     91     if config.options is None or config.purge is None:
     92        raise ValueError("Purge configuration is not properly filled in.")
     93     if config.purge.purgeDirs is not None:
     94        for purgeDir in config.purge.purgeDirs:
     95           purgeList = PurgeItemList()
     96           purgeList.addDirContents(purgeDir.absolutePath)   # add everything within directory
     97           purgeList.removeYoungFiles(purgeDir.retainDays)   # remove young files *from the list* so they won't be purged
     98           purgeList.purgeItems()                            # remove remaining items from the filesystem
     99     logger.info("Executed the 'purge' action successfully.")
    100 

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.tools.span-pysrc.html: CedarBackup2.tools.span
    Package CedarBackup2 :: Package tools :: Module span

    Source Code for Module CedarBackup2.tools.span

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Spans staged data among multiple discs 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Notes 
     36  ######################################################################## 
     37   
     38  """ 
     39  Spans staged data among multiple discs 
     40   
     41  This is the Cedar Backup span tool.  It is intended for use by people who stage 
     42  more data than can fit on a single disc.  It allows a user to split staged data 
     43  among more than one disc.  It can't be an extension because it requires user 
     44  input when switching media. 
     45   
     46  Most configuration is taken from the Cedar Backup configuration file, 
     47  specifically the store section.  A few pieces of configuration are taken 
     48  directly from the user. 
     49   
     50  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     51  """ 
     52   
     53  ######################################################################## 
     54  # Imported modules and constants 
     55  ######################################################################## 
     56   
     57  # System modules 
     58  import sys 
     59  import os 
     60  import logging 
     61  import tempfile 
     62   
     63  # Cedar Backup modules 
     64  from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
     65  from CedarBackup2.util import displayBytes, convertSize, mount, unmount 
     66  from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES 
     67  from CedarBackup2.config import Config 
     68  from CedarBackup2.filesystem import BackupFileList, compareDigestMaps, normalizeDir 
     69  from CedarBackup2.cli import Options, setupLogging, setupPathResolver 
     70  from CedarBackup2.cli import DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE 
     71  from CedarBackup2.actions.constants import STORE_INDICATOR 
     72  from CedarBackup2.actions.util import createWriter 
     73  from CedarBackup2.actions.store import writeIndicatorFile 
     74  from CedarBackup2.actions.util import findDailyDirs 
     75  from CedarBackup2.util import Diagnostics 
     76   
     77   
     78  ######################################################################## 
     79  # Module-wide constants and variables 
     80  ######################################################################## 
     81   
     82  logger = logging.getLogger("CedarBackup2.log.tools.span") 
     83   
     84   
     85  ####################################################################### 
     86  # SpanOptions class 
     87  ####################################################################### 
     88   
    
    89 -class SpanOptions(Options):
     90 
     91     """
     92     Tool-specific command-line options.
     93 
     94     Most of the cback command-line options are exactly what we need here --
     95     logfile path, permissions, verbosity, etc.  However, we need to make a few
     96     tweaks since we don't accept any actions.
     97 
     98     Also, a few extra command line options that we accept are really ignored
     99     underneath.  I just don't care about that for a tool like this.
    100     """
    101 
    102 -   def validate(self):
    103        """
    104        Validates command-line options represented by the object.
    105        There are no validations here, because we don't use any actions.
    106        @raise ValueError: If one of the validations fails.
    107        """
    108        pass
    109 
    110 
    111  #######################################################################
    112  # Public functions
    113  #######################################################################
    114 
    115  #################
    116  # cli() function
    117  #################
    118 
    119 -def cli():
    120     """
    121     Implements the command-line interface for the C{cback-span} script.
    122 
    123     Essentially, this is the "main routine" for the cback-span script.  It does
    124     all of the argument processing for the script, and then also implements the
    125     tool functionality.
    126 
    127     This function looks pretty similiar to C{CedarBackup2.cli.cli()}.  It's not
    128     easy to refactor this code to make it reusable and also readable, so I've
    129     decided to just live with the duplication.
    130 
    131     A different error code is returned for each type of failure:
    132 
    133        - C{1}: The Python interpreter version is < 2.7
    134        - C{2}: Error processing command-line arguments
    135        - C{3}: Error configuring logging
    136        - C{4}: Error parsing indicated configuration file
    137        - C{5}: Backup was interrupted with a CTRL-C or similar
    138        - C{6}: Error executing other parts of the script
    139 
    140     @note: This script uses print rather than logging to the INFO level, because
    141     it is interactive.  Underlying Cedar Backup functionality uses the logging
    142     mechanism exclusively.
    143 
    144     @return: Error code as described above.
    145     """
    146     try:
    147        if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]:
    148           sys.stderr.write("Python 2 version 2.7 or greater required.\n")
    149           return 1
    150     except:
    151        # sys.version_info isn't available before 2.0
    152        sys.stderr.write("Python 2 version 2.7 or greater required.\n")
    153        return 1
    154 
    155     try:
    156        options = SpanOptions(argumentList=sys.argv[1:])
    157     except Exception, e:
    158        _usage()
    159        sys.stderr.write(" *** Error: %s\n" % e)
    160        return 2
    161 
    162     if options.help:
    163        _usage()
    164        return 0
    165     if options.version:
    166        _version()
    167        return 0
    168     if options.diagnostics:
    169        _diagnostics()
    170        return 0
    171 
    172     if options.stacktrace:
    173        logfile = setupLogging(options)
    174     else:
    175        try:
    176           logfile = setupLogging(options)
    177        except Exception as e:
    178           sys.stderr.write("Error setting up logging: %s\n" % e)
    179           return 3
    180 
    181     logger.info("Cedar Backup 'span' utility run started.")
    182     logger.info("Options were [%s]", options)
    183     logger.info("Logfile is [%s]", logfile)
    184 
    185     if options.config is None:
    186        logger.debug("Using default configuration file.")
    187        configPath = DEFAULT_CONFIG
    188     else:
    189        logger.debug("Using user-supplied configuration file.")
    190        configPath = options.config
    191 
    192     try:
    193        logger.info("Configuration path is [%s]", configPath)
    194        config = Config(xmlPath=configPath)
    195        setupPathResolver(config)
    196     except Exception, e:
    197        logger.error("Error reading or handling configuration: %s", e)
    198        logger.info("Cedar Backup 'span' utility run completed with status 4.")
    199        return 4
    200 
    201     if options.stacktrace:
    202        _executeAction(options, config)
    203     else:
    204        try:
    205           _executeAction(options, config)
    206        except KeyboardInterrupt:
    207           logger.error("Backup interrupted.")
    208           logger.info("Cedar Backup 'span' utility run completed with status 5.")
    209           return 5
    210        except Exception, e:
    211           logger.error("Error executing backup: %s", e)
    212           logger.info("Cedar Backup 'span' utility run completed with status 6.")
    213           return 6
    214 
    215     logger.info("Cedar Backup 'span' utility run completed with status 0.")
    216     return 0
    217 
    218 
    219  #######################################################################
    220  # Utility functions
    221  #######################################################################
    222 
    223  ####################
    224  # _usage() function
    225  ####################
    226 
    227 -def _usage(fd=sys.stderr):
    228     """
    229     Prints usage information for the cback script.
    230     @param fd: File descriptor used to print information.
    231     @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    232     """
    233     fd.write("\n")
    234     fd.write(" Usage: cback-span [switches]\n")
    235     fd.write("\n")
    236     fd.write(" Cedar Backup 'span' tool.\n")
    237     fd.write("\n")
    238     fd.write(" This Cedar Backup utility spans staged data between multiple discs.\n")
    239     fd.write(" It is a utility, not an extension, and requires user interaction.\n")
    240     fd.write("\n")
    241     fd.write(" The following switches are accepted, mostly to set up underlying\n")
    242     fd.write(" Cedar Backup functionality:\n")
    243     fd.write("\n")
    244     fd.write("   -h, --help     Display this usage/help listing\n")
    245     fd.write("   -V, --version  Display version information\n")
    246     fd.write("   -b, --verbose  Print verbose output as well as logging to disk\n")
    247     fd.write("   -c, --config   Path to config file (default: %s)\n" % DEFAULT_CONFIG)
    248     fd.write("   -l, --logfile  Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
    249     fd.write("   -o, --owner    Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
    250     fd.write("   -m, --mode     Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
    251     fd.write("   -O, --output   Record some sub-command (i.e. tar) output to the log\n")
    252     fd.write("   -d, --debug    Write debugging information to the log (implies --output)\n")
    253     fd.write("   -s, --stack    Dump a Python stack trace instead of swallowing exceptions\n")
    254     fd.write("\n")
    255 
    256 
    257  ######################
    258  # _version() function
    259  ######################
    260 
    261 -def _version(fd=sys.stdout):
    262     """
    263     Prints version information for the cback script.
    264     @param fd: File descriptor used to print information.
    265     @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    266     """
    267     fd.write("\n")
    268     fd.write(" Cedar Backup 'span' tool.\n")
    269     fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE))
    270     fd.write("\n")
    271     fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL))
    272     fd.write(" See CREDITS for a list of included code and other contributors.\n")
    273     fd.write(" This is free software; there is NO warranty.  See the\n")
    274     fd.write(" GNU General Public License version 2 for copying conditions.\n")
    275     fd.write("\n")
    276     fd.write(" Use the --help option for usage information.\n")
    277     fd.write("\n")
    278 
    279 
    280  ##########################
    281  # _diagnostics() function
    282  ##########################
    283 
    284 -def _diagnostics(fd=sys.stdout):
    285     """
    286     Prints runtime diagnostics information.
    287     @param fd: File descriptor used to print information.
    288     @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    289     """
    290     fd.write("\n")
    291     fd.write("Diagnostics:\n")
    292     fd.write("\n")
    293     Diagnostics().printDiagnostics(fd=fd, prefix="   ")
    294     fd.write("\n")
    295 
    296 
    297  ############################
    298  # _executeAction() function
    299  ############################
    300 
    301 -def _executeAction(options, config):
    302 """ 303 Implements the guts of the cback-span tool. 304 305 @param options: Program command-line options. 306 @type options: SpanOptions object. 307 308 @param config: Program configuration. 309 @type config: Config object. 310 311 @raise Exception: Under many generic error conditions 312 """ 313 print "" 314 print "================================================" 315 print " Cedar Backup 'span' tool" 316 print "================================================" 317 print "" 318 print "This the Cedar Backup span tool. It is used to split up staging" 319 print "data when that staging data does not fit onto a single disc." 320 print "" 321 print "This utility operates using Cedar Backup configuration. Configuration" 322 print "specifies which staging directory to look at and which writer device" 323 print "and media type to use." 324 print "" 325 if not _getYesNoAnswer("Continue?", default="Y"): 326 return 327 print "===" 328 329 print "" 330 print "Cedar Backup store configuration looks like this:" 331 print "" 332 print " Source Directory...: %s" % config.store.sourceDir 333 print " Media Type.........: %s" % config.store.mediaType 334 print " Device Type........: %s" % config.store.deviceType 335 print " Device Path........: %s" % config.store.devicePath 336 print " Device SCSI ID.....: %s" % config.store.deviceScsiId 337 print " Drive Speed........: %s" % config.store.driveSpeed 338 print " Check Data Flag....: %s" % config.store.checkData 339 print " No Eject Flag......: %s" % config.store.noEject 340 print "" 341 if not _getYesNoAnswer("Is this OK?", default="Y"): 342 return 343 print "===" 344 345 (writer, mediaCapacity) = _getWriter(config) 346 347 print "" 348 print "Please wait, indexing the source directory (this may take a while)..." 
349 (dailyDirs, fileList) = _findDailyDirs(config.store.sourceDir) 350 print "===" 351 352 print "" 353 print "The following daily staging directories have not yet been written to disc:" 354 print "" 355 for dailyDir in dailyDirs: 356 print " %s" % dailyDir 357 358 totalSize = fileList.totalSize() 359 print "" 360 print "The total size of the data in these directories is %s." % displayBytes(totalSize) 361 print "" 362 if not _getYesNoAnswer("Continue?", default="Y"): 363 return 364 print "===" 365 366 print "" 367 print "Based on configuration, the capacity of your media is %s." % displayBytes(mediaCapacity) 368 369 print "" 370 print "Since estimates are not perfect and there is some uncertainly in" 371 print "media capacity calculations, it is good to have a \"cushion\"," 372 print "a percentage of capacity to set aside. The cushion reduces the" 373 print "capacity of your media, so a 1.5% cushion leaves 98.5% remaining." 374 print "" 375 cushion = _getFloat("What cushion percentage?", default=4.5) 376 print "===" 377 378 realCapacity = ((100.0 - cushion)/100.0) * mediaCapacity 379 minimumDiscs = (totalSize/realCapacity) + 1 380 print "" 381 print "The real capacity, taking into account the %.2f%% cushion, is %s." % (cushion, displayBytes(realCapacity)) 382 print "It will take at least %d disc(s) to store your %s of data." % (minimumDiscs, displayBytes(totalSize)) 383 print "" 384 if not _getYesNoAnswer("Continue?", default="Y"): 385 return 386 print "===" 387 388 happy = False 389 while not happy: 390 print "" 391 print "Which algorithm do you want to use to span your data across" 392 print "multiple discs?" 
      print ""
      print "The following algorithms are available:"
      print ""
      print "   first....: The \"first-fit\" algorithm"
      print "   best.....: The \"best-fit\" algorithm"
      print "   worst....: The \"worst-fit\" algorithm"
      print "   alternate: The \"alternate-fit\" algorithm"
      print ""
      print "If you don't like the results you will have a chance to try a"
      print "different one later."
      print ""
      algorithm = _getChoiceAnswer("Which algorithm?", "worst", [ "first", "best", "worst", "alternate", ])
      print "==="

      print ""
      print "Please wait, generating file lists (this may take a while)..."
      spanSet = fileList.generateSpan(capacity=realCapacity, algorithm="%s_fit" % algorithm)
      print "==="

      print ""
      print "Using the \"%s-fit\" algorithm, Cedar Backup can split your data" % algorithm
      print "into %d discs." % len(spanSet)
      print ""
      counter = 0
      for item in spanSet:
         counter += 1
         print "Disc %d: %d files, %s, %.2f%% utilization" % (counter, len(item.fileList),
                                                              displayBytes(item.size), item.utilization)
      print ""
      if _getYesNoAnswer("Accept this solution?", default="Y"):
         happy = True
      print "==="

   counter = 0
   for spanItem in spanSet:
      counter += 1
      if counter == 1:
         print ""
         _getReturn("Please place the first disc in your backup device.\nPress return when ready.")
         print "==="
      else:
         print ""
         _getReturn("Please replace the disc in your backup device.\nPress return when ready.")
         print "==="
      _writeDisc(config, writer, spanItem)

   _writeStoreIndicator(config, dailyDirs)

   print ""
   print "Completed writing all discs."


############################
# _findDailyDirs() function
############################

def _findDailyDirs(stagingDir):
   """
   Returns a list of all daily staging directories that have not yet been
   stored.

   The store indicator file C{cback.store} will be written to a daily staging
   directory once that directory is written to disc. So, this function looks
   at each daily staging directory within the configured staging directory, and
   returns a list of those which do not contain the indicator file.

   Returned is a tuple containing two items: a list of daily staging
   directories, and a BackupFileList containing all files among those staging
   directories.

   @param stagingDir: Configured staging directory

   @return: Tuple (staging dirs, backup file list)
   """
   results = findDailyDirs(stagingDir, STORE_INDICATOR)
   fileList = BackupFileList()
   for item in results:
      fileList.addDirContents(item)
   return (results, fileList)


##################################
# _writeStoreIndicator() function
##################################

def _writeStoreIndicator(config, dailyDirs):
   """
   Writes a store indicator file into daily directories.

   @param config: Config object.
   @param dailyDirs: List of daily directories
   """
   for dailyDir in dailyDirs:
      writeIndicatorFile(dailyDir, STORE_INDICATOR,
                         config.options.backupUser,
                         config.options.backupGroup)


########################
# _getWriter() function
########################

def _getWriter(config):
   """
   Gets a writer and media capacity from store configuration.
   Returned is a writer and a media capacity in bytes.
   @param config: Cedar Backup configuration
   @return: Tuple of (writer, mediaCapacity)
   """
   writer = createWriter(config)
   mediaCapacity = convertSize(writer.media.capacity, UNIT_SECTORS, UNIT_BYTES)
   return (writer, mediaCapacity)


########################
# _writeDisc() function
########################

def _writeDisc(config, writer, spanItem):
   """
   Writes a span item to disc.
   @param config: Cedar Backup configuration
   @param writer: Writer to use
   @param spanItem: Span item to write
   """
   print ""
   _discInitializeImage(config, writer, spanItem)
   _discWriteImage(config, writer)
   _discConsistencyCheck(config, writer, spanItem)
   print "Write process is complete."
   print "==="

def _discInitializeImage(config, writer, spanItem):
   """
   Initialize an ISO image for a span item.
   @param config: Cedar Backup configuration
   @param writer: Writer to use
   @param spanItem: Span item to write
   """
   complete = False
   while not complete:
      try:
         print "Initializing image..."
         writer.initializeImage(newDisc=True, tmpdir=config.options.workingDir)
         for path in spanItem.fileList:
            graftPoint = os.path.dirname(path.replace(config.store.sourceDir, "", 1))
            writer.addImageEntry(path, graftPoint)
         complete = True
      except KeyboardInterrupt, e:
         raise e
      except Exception, e:
         logger.error("Failed to initialize image: %s", e)
         if not _getYesNoAnswer("Retry initialization step?", default="Y"):
            raise e
         print "Ok, attempting retry."
         print "==="
   print "Completed initializing image."

def _discWriteImage(config, writer):
   """
   Writes an ISO image for a span item.
   @param config: Cedar Backup configuration
   @param writer: Writer to use
   """
   complete = False
   while not complete:
      try:
         print "Writing image to disc..."
         writer.writeImage()
         complete = True
      except KeyboardInterrupt, e:
         raise e
      except Exception, e:
         logger.error("Failed to write image: %s", e)
         if not _getYesNoAnswer("Retry this step?", default="Y"):
            raise e
         print "Ok, attempting retry."
         _getReturn("Please replace media if needed.\nPress return when ready.")
         print "==="
   print "Completed writing image."

def _discConsistencyCheck(config, writer, spanItem):
   """
   Run a consistency check on an ISO image for a span item.
   @param config: Cedar Backup configuration
   @param writer: Writer to use
   @param spanItem: Span item to write
   """
   if config.store.checkData:
      complete = False
      while not complete:
         try:
            print "Running consistency check..."
            _consistencyCheck(config, spanItem.fileList)
            complete = True
         except KeyboardInterrupt, e:
            raise e
         except Exception, e:
            logger.error("Consistency check failed: %s", e)
            if not _getYesNoAnswer("Retry the consistency check?", default="Y"):
               raise e
            if _getYesNoAnswer("Rewrite the disc first?", default="N"):
               print "Ok, attempting retry."
               _getReturn("Please replace the disc in your backup device.\nPress return when ready.")
               print "==="
               _discWriteImage(config, writer)
            else:
               print "Ok, attempting retry."
               print "==="
      print "Completed consistency check."


###############################
# _consistencyCheck() function
###############################

def _consistencyCheck(config, fileList):
   """
   Runs a consistency check against media in the backup device.

   The function mounts the device at a temporary mount point in the working
   directory, and then compares the passed-in file list's digest map with the
   one generated from the disc. The two lists should be identical.

   If no exceptions are thrown, there were no problems with the consistency
   check.

   @warning: The implementation of this function is very UNIX-specific.

   @param config: Config object.
   @param fileList: BackupFileList whose contents to check against

   @raise ValueError: If the check fails
   @raise IOError: If there is a problem working with the media.
   """
   logger.debug("Running consistency check.")
   mountPoint = tempfile.mkdtemp(dir=config.options.workingDir)
   try:
      mount(config.store.devicePath, mountPoint, "iso9660")
      discList = BackupFileList()
      discList.addDirContents(mountPoint)
      sourceList = BackupFileList()
      sourceList.extend(fileList)
      discListDigest = discList.generateDigestMap(stripPrefix=normalizeDir(mountPoint))
      sourceListDigest = sourceList.generateDigestMap(stripPrefix=normalizeDir(config.store.sourceDir))
      compareDigestMaps(sourceListDigest, discListDigest, verbose=True)
      logger.info("Consistency check completed. No problems found.")
   finally:
      unmount(mountPoint, True, 5, 1)  # try 5 times, and remove mount point when done


#########################################################################
# User interface utilities
#########################################################################

def _getYesNoAnswer(prompt, default):
   """
   Get a yes/no answer from the user.
   The default will be placed at the end of the prompt.
   A "Y" or "y" is considered yes, anything else no.
   A blank (empty) response results in the default.
   @param prompt: Prompt to show.
   @param default: Default to set if the result is blank
   @return: Boolean true/false corresponding to Y/N
   """
   if default == "Y":
      prompt = "%s [Y/n]: " % prompt
   else:
      prompt = "%s [y/N]: " % prompt
   answer = raw_input(prompt)
   if answer in [ None, "", ]:
      answer = default
   if answer[0] in [ "Y", "y", ]:
      return True
   else:
      return False

def _getChoiceAnswer(prompt, default, validChoices):
   """
   Get a particular choice from the user.
   The default will be placed at the end of the prompt.
   The function loops until getting a valid choice.
   A blank (empty) response results in the default.
   @param prompt: Prompt to show.
   @param default: Default to set if the result is None or blank.
   @param validChoices: List of valid choices (strings)
   @return: Valid choice from user.
   """
   prompt = "%s [%s]: " % (prompt, default)
   answer = raw_input(prompt)
   if answer in [ None, "", ]:
      answer = default
   while answer not in validChoices:
      print "Choice must be one of %s" % validChoices
      answer = raw_input(prompt)
   return answer

def _getFloat(prompt, default):
   """
   Get a floating point number from the user.
   The default will be placed at the end of the prompt.
   The function loops until getting a valid floating point number.
   A blank (empty) response results in the default.
   @param prompt: Prompt to show.
   @param default: Default to set if the result is None or blank.
   @return: Floating point number from user
   """
   prompt = "%s [%.2f]: " % (prompt, default)
   while True:
      answer = raw_input(prompt)
      if answer in [ None, "", ]:
         return default
      else:
         try:
            return float(answer)
         except ValueError:
            print "Enter a floating point number."

def _getReturn(prompt):
   """
   Get a return key from the user.
   @param prompt: Prompt to show.
   """
   raw_input(prompt)


#########################################################################
# Main routine
#########################################################################

if __name__ == "__main__":
   sys.exit(cli())

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.purge-module.html

    Module purge


    Functions

    executePurge

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.store-module.html
    Package CedarBackup2 :: Package actions :: Module store

    Module store

    source code

    Implements the standard 'store' action.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Dmitry Rutsky <rutsky@inbox.ru>
Functions
     
    executeStore(configPath, options, config)
    Executes the store backup action.
    source code
     
    writeImage(config, newDisc, stagingDirs)
    Builds and writes an ISO image containing the indicated stage directories.
    source code
     
    writeStoreIndicator(config, stagingDirs)
    Writes a store indicator file into staging directories.
    source code
     
    consistencyCheck(config, stagingDirs)
    Runs a consistency check against media in the backup device.
    source code
     
    writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs)
    Builds and writes an ISO image containing the indicated stage directories.
    source code
     
    _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)
    Gets a value for the newDisc flag based on blanking factor rules.
    source code
     
    _findCorrectDailyDir(options, config)
    Finds the correct daily staging directory to be written to disk.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.actions.store")
      __package__ = 'CedarBackup2.actions'
Function Details

    executeStore(configPath, options, config)

    source code 

    Executes the store backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.
    Notes:
    • The rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories.
    • When the store action is complete, we will write a store indicator to the daily staging directory we used, so it's obvious that the store action has completed.

    writeImage(config, newDisc, stagingDirs)

    source code 

    Builds and writes an ISO image containing the indicated stage directories.

    The generated image will contain each of the staging directories listed in stagingDirs. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the disc at /2005/02/10.

    Parameters:
    • config - Config object.
    • newDisc - Indicates whether the disc should be re-initialized
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing the image to disc.

    Note: This function is implemented in terms of writeImageBlankSafe. The newDisc flag is passed in for both rebuildMedia and todayIsStart.
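The date-based layout described above (staging directory /opt/stage/2005/02/10 placed at /2005/02/10 in the image) can be sketched as follows. Here dateGraftPoint is a hypothetical helper written for illustration, not a function from the store module:

```python
def dateGraftPoint(stagingDir, sourceDir):
    # Strip the configured staging root so only the date-based suffix
    # remains, then re-root it at "/" for the image layout.
    suffix = stagingDir.replace(sourceDir, "", 1)
    return "/" + suffix.strip("/")

print(dateGraftPoint("/opt/stage/2005/02/10", "/opt/stage"))  # -> /2005/02/10
```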

    writeStoreIndicator(config, stagingDirs)

    source code 

    Writes a store indicator file into staging directories.

    The store indicator is written into each of the staging directories when either a store or rebuild action has written the staging directory to disc.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.

    consistencyCheck(config, stagingDirs)

    source code 

    Runs a consistency check against media in the backup device.

    It seems that sometimes, it's possible to create a corrupted multisession disc (i.e. one that cannot be read) although no errors were encountered while writing the disc. This consistency check makes sure that the data read from disc matches the data that was used to create the disc.

    The function mounts the device at a temporary mount point in the working directory, and then compares the indicated staging directories in the staging directory and on the media. The comparison is done via functionality in filesystem.py.

    If no exceptions are thrown, there were no problems with the consistency check. A positive confirmation of "no problems" is also written to the log with info priority.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - If the two directories are not equivalent.
    • IOError - If there is a problem working with the media.

    Warning: The implementation of this function is very UNIX-specific.
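The mount-and-compare approach described above can be sketched as a digest-map comparison. This is a simplified illustration using SHA-256 over two directory trees, not the actual filesystem.py implementation; the function names are invented for the example:

```python
import hashlib
import os

def digestMap(root):
    # Map each file's root-relative path to its SHA-256 digest,
    # mirroring the "digest map" comparison described above.
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def checkConsistency(stagingDir, mountPoint):
    # Raise ValueError if the mounted media differs from the staging
    # data, matching the documented failure mode of the check.
    if digestMap(stagingDir) != digestMap(mountPoint):
        raise ValueError("Media does not match staging data")
```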

    writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs)

    source code 

    Builds and writes an ISO image containing the indicated stage directories.

    The generated image will contain each of the staging directories listed in stagingDirs. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the disc at /2005/02/10. The media will always be written with a media label specific to Cedar Backup.

    This function is similar to writeImage, but tries to implement a smarter blanking strategy.

    First, the media is always blanked if the rebuildMedia flag is true. Then, if rebuildMedia is false, blanking behavior and todayIsStart come into effect:

      If no blanking behavior is specified, and it is the start of the week,
      the disc will be blanked
    
      If blanking behavior is specified, and either the blank mode is "daily"
      or the blank mode is "weekly" and it is the start of the week, then
      the disc will be blanked if it looks like the weekly backup will not
      fit onto the media.
    
      Otherwise, the disc will not be blanked
    

    How do we decide whether the weekly backup will fit onto the media? That is what the blanking factor is used for. The following formula is used:

  will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor
    

    The blanking factor will vary from setup to setup, and will probably require some experimentation to get it right.

    Parameters:
    • config - Config object.
    • rebuildMedia - Indicates whether media should be rebuilt
    • todayIsStart - Indicates whether today is the starting day of the week
    • blankBehavior - Blank behavior from configuration, or None to use default behavior
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing the image to disc.
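As a concrete illustration of the documented formula, a hypothetical backupFits helper (the name and the numbers below are invented for the example):

```python
def backupFits(bytesAvailable, bytesRequired, blankFactor):
    # Documented rule: (bytes available / (1 + bytes required)) <= blankFactor.
    # The "1 +" term guards against division by zero when nothing is required.
    return (float(bytesAvailable) / (1 + bytesRequired)) <= blankFactor

# With 650 MB available and ~100 MB required, the ratio is about 6.5,
# so a blanking factor of 7.0 says the backup fits, while 6.0 says it
# does not (and the disc should be blanked).
print(backupFits(650.0e6, 100.0e6, 7.0))   # True
print(backupFits(650.0e6, 100.0e6, 6.0))   # False
```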

    _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)

    source code 

    Gets a value for the newDisc flag based on blanking factor rules.

    The blanking factor rules are described above by writeImageBlankSafe.

    Parameters:
    • writer - Previously configured image writer containing image entries
    • rebuildMedia - Indicates whether media should be rebuilt
    • todayIsStart - Indicates whether today is the starting day of the week
    • blankBehavior - Blank behavior from configuration, or None to use default behavior
    Returns:
    newDisc flag to be set on writer.

    _findCorrectDailyDir(options, config)

    source code 

    Finds the correct daily staging directory to be written to disk.

    In Cedar Backup v1.0, we assumed that the correct staging directory matched the current date. However, that has problems. In particular, it breaks down if collect is on one side of midnite and stage is on the other, or if certain processes span midnite.

    For v2.0, I'm trying to be smarter. I'll first check the current day. If that directory is found, it's good enough. If it's not found, I'll look for a valid directory from the day before or day after which has not yet been staged, according to the stage indicator file. The first one I find, I'll use. If I use a directory other than for the current day and config.store.warnMidnite is set, a warning will be put in the log.

    There is one exception to this rule. If the options.full flag is set, then the special "span midnite" logic will be disabled and any existing store indicator will be ignored. I did this because I think that most users who run cback --full store twice in a row expect the command to generate two identical discs. With the other rule in place, running that command twice in a row could result in an error ("no unstored directory exists") or could even cause a completely unexpected directory to be written to disc (if some previous day's contents had not yet been written).

    Parameters:
    • options - Options object.
    • config - Config object.
    Returns:
    Correct staging dir, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If the staging directory cannot be found.

    Note: This code is probably longer and more verbose than it needs to be, but at least it's straightforward.
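The day-before/day-after search described above can be sketched like this. findCorrectDailyDir here is a simplified stand-in for the real function; the isStored predicate and the YYYY/MM/DD staging layout are assumptions based on the staging examples elsewhere in these docs:

```python
import os
from datetime import timedelta

def findCorrectDailyDir(stagingRoot, today, isStored):
    # First choice: today's staging directory, if it exists.
    todayPath = os.path.join(stagingRoot, today.strftime("%Y/%m/%d"))
    if os.path.isdir(todayPath):
        return {todayPath: today.strftime("%Y%m%d")}
    # Fallback: yesterday or tomorrow, but only if not yet stored.
    for candidate in [today - timedelta(days=1), today + timedelta(days=1)]:
        path = os.path.join(stagingRoot, candidate.strftime("%Y/%m/%d"))
        if os.path.isdir(path) and not isStored(path):
            return {path: candidate.strftime("%Y%m%d")}
    raise IOError("No unstored staging directory exists.")
```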


CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.writer-module.html

    Module writer


    Variables

    __package__

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mbox.MboxConfig-class.html
    Package CedarBackup2 :: Package extend :: Module mbox :: Class MboxConfig

    Class MboxConfig

    source code

    object --+
             |
            MboxConfig
    

    Class representing mbox configuration.

    Mbox configuration is used for backing up mbox email files.

    The following restrictions exist on data in this class:

    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The mboxFiles list must be a list of MboxFile objects
    • The mboxDirs list must be a list of MboxDir objects

    For the mboxFiles and mboxDirs lists, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element is of the proper type.

    Unlike collect configuration, no global exclusions are allowed on this level. We only allow relative exclusions at the mbox directory level. Also, there is no configured ignore file. This is because mbox directory backups are not recursive.


    Note: Lists within this class are "unordered" for equality comparisons.
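The transparent type checking described above can be sketched as a small list subclass. This is a simplified stand-in for util.ObjectTypeList, not its actual implementation:

```python
class TypedList(list):
    """List that transparently rejects elements of the wrong type."""

    def __init__(self, objectType, objectName):
        list.__init__(self)
        self._objectType = objectType
        self._objectName = objectName

    def append(self, item):
        # Enforce the element type on every insertion.
        if not isinstance(item, self._objectType):
            raise ValueError("Item must be a %s object." % self._objectName)
        list.append(self, item)

    def extend(self, items):
        for item in items:
            self.append(item)
```

Assigning mboxFiles to such a list would then make any attempt to add a non-MboxFile raise ValueError, which is the behavior the class description relies on.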

Instance Methods
     
    __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None)
    Constructor for the MboxConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setMboxFiles(self, value)
    Property target used to set the mboxFiles list.
    source code
     
    _getMboxFiles(self)
    Property target used to get the mboxFiles list.
    source code
     
    _setMboxDirs(self, value)
    Property target used to set the mboxDirs list.
    source code
     
    _getMboxDirs(self)
    Property target used to get the mboxDirs list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      collectMode
    Default collect mode.
      compressMode
    Default compress mode.
      mboxFiles
    List of mbox files to back up.
      mboxDirs
    List of mbox directories to back up.

    Inherited from object: __class__

Method Details

    __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None)
    (Constructor)

    source code 

    Constructor for the MboxConfig class.

    Parameters:
    • collectMode - Default collect mode.
    • compressMode - Default compress mode.
    • mboxFiles - List of mbox files to back up
    • mboxDirs - List of mbox directories to back up
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setMboxFiles(self, value)

    source code 

    Property target used to set the mboxFiles list. Either the value must be None or each element must be an MboxFile.

    Raises:
    • ValueError - If the value is not an MboxFile

    _setMboxDirs(self, value)

    source code 

    Property target used to set the mboxDirs list. Either the value must be None or each element must be an MboxDir.

    Raises:
    • ValueError - If the value is not an MboxDir

Property Details

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Default compress mode.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    mboxFiles

    List of mbox files to back up.

    Get Method:
    _getMboxFiles(self) - Property target used to get the mboxFiles list.
    Set Method:
    _setMboxFiles(self, value) - Property target used to set the mboxFiles list.

    mboxDirs

    List of mbox directories to back up.

    Get Method:
    _getMboxDirs(self) - Property target used to get the mboxDirs list.
    Set Method:
    _setMboxDirs(self, value) - Property target used to set the mboxDirs list.

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.postgresql-module.html

    Module postgresql


    Classes

    LocalConfig
    PostgresqlConfig

    Functions

    backupDatabase
    executeAction

    Variables

    POSTGRESQLDUMPALL_COMMAND
    POSTGRESQLDUMP_COMMAND
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.capacity-module.html
    Package CedarBackup2 :: Package extend :: Module capacity

    Module capacity

    source code

    Provides an extension to check remaining media capacity.

    Some users have asked for advance warning that their media is beginning to fill up. This is an extension that checks the current capacity of the media in the writer, and prints a warning if the media is more than X% full, or has fewer than X bytes of capacity remaining.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      PercentageQuantity
    Class representing a percentage quantity.
      CapacityConfig
    Class representing capacity configuration.
      LocalConfig
    Class representing this extension's configuration document.
Functions
     
    executeAction(configPath, options, config)
    Executes the capacity action.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.extend.capacity")
      __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the capacity action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

CedarBackup2-2.26.5/doc/interface/CedarBackup2-module.html
    Package CedarBackup2

    Package CedarBackup2

    source code

    Implements local and remote backups to CD or DVD media.

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Submodules

Variables
      __package__ = None
CedarBackup2-2.26.5/doc/interface/CedarBackup2.cli-module.html
    Package CedarBackup2 :: Module cli

    Module cli

    source code

    Provides command-line interface implementation for the cback script.

    Summary

The functionality in this module encapsulates the command-line interface for the cback script. The cback script itself is very short, basically just an invocation of one function implemented here. That, in turn, makes it simpler to validate the command line interface (for instance, it's easier to run pychecker against a module, and unit tests are easier, too).

    The objects and functions implemented in this module are probably not useful to any code external to Cedar Backup. Anyone else implementing their own command-line interface would have to reimplement (or at least enhance) all of this anyway.

    Backwards Compatibility

    The command line interface has changed between Cedar Backup 1.x and Cedar Backup 2.x. Some new switches have been added, and the actions have become simple arguments rather than switches (which is a much more standard command line format). Old 1.x command lines are generally no longer valid.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      Options
    Class representing command-line options for the cback script.
      _ActionItem
    Class representing a single action to be executed.
      _ManagedActionItem
    Class representing a single action to be executed on a managed peer.
      _ActionSet
    Class representing a set of local actions to be executed.
    Functions
     
    cli()
    Implements the command-line interface for the cback script.
    source code
     
    _usage(fd=sys.stdout)
    Prints usage information for the cback script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback script.
    source code
     
    _diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
    source code
     
    setupLogging(options)
    Set up logging based on command-line options.
    source code
     
    _setupLogfile(options)
    Sets up and creates logfile as needed.
    source code
     
    _setupFlowLogging(logfile, options)
    Sets up flow logging.
    source code
     
    _setupOutputLogging(logfile, options)
    Sets up command output logging.
    source code
     
    _setupDiskFlowLogging(flowLogger, logfile, options)
    Sets up on-disk flow logging.
    source code
     
    _setupScreenFlowLogging(flowLogger, options)
    Sets up on-screen flow logging.
    source code
     
    _setupDiskOutputLogging(outputLogger, logfile, options)
    Sets up on-disk command output logging.
    source code
     
    setupPathResolver(config)
    Set up the path resolver singleton based on configuration.
    source code
    Variables
      DEFAULT_CONFIG = '/etc/cback.conf'
    The default configuration file.
      DEFAULT_LOGFILE = '/var/log/cback.log'
    The default log file path.
      DEFAULT_OWNERSHIP = ['root', 'adm']
    Default ownership for the logfile.
      DEFAULT_MODE = 416
    Default file permissions mode on the logfile.
      VALID_ACTIONS = ['collect', 'stage', 'store', 'purge', 'rebuil...
    List of valid actions.
      COMBINE_ACTIONS = ['collect', 'stage', 'store', 'purge']
    List of actions which can be combined with other actions.
      NONCOMBINE_ACTIONS = ['rebuild', 'validate', 'initialize', 'all']
    List of actions which cannot be combined with other actions.
      logger = logging.getLogger("CedarBackup2.log.cli")
      DISK_LOG_FORMAT = '%(asctime)s --> [%(levelname)-7s] %(message)s'
      DISK_OUTPUT_FORMAT = '%(message)s'
      SCREEN_LOG_FORMAT = '%(message)s'
      SCREEN_LOG_STREAM = sys.stdout
      DATE_FORMAT = '%Y-%m-%dT%H:%M:%S %Z'
      REBUILD_INDEX = 0
      VALIDATE_INDEX = 0
      INITIALIZE_INDEX = 0
      COLLECT_INDEX = 100
      STAGE_INDEX = 200
      STORE_INDEX = 300
      PURGE_INDEX = 400
      SHORT_SWITCHES = 'hVbqc:fMNl:o:m:OdsD'
      LONG_SWITCHES = ['help', 'version', 'verbose', 'quiet', 'confi...
      __package__ = 'CedarBackup2'
    Function Details

    cli()

    source code 

    Implements the command-line interface for the cback script.

    Essentially, this is the "main routine" for the cback script. It does all of the argument processing for the script, and then sets about executing the indicated actions.

    As a general rule, only the actions indicated on the command line will be executed. We will accept any of the built-in actions and any of the configured extended actions (which makes action list verification a two-step process).

    The 'all' action has a special meaning: it means that the built-in set of actions (collect, stage, store, purge) will all be executed, in that order. Extended actions will be ignored as part of the 'all' action.

    Raised exceptions always result in an immediate return. Otherwise, we generally return when all specified actions have been completed. Actions are ignored if the help, version or validate flags are set.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 2.7
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 4: Error parsing indicated configuration file
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing specified backup actions
    Returns:
    Error code as described above.
    Notes:
    • This function contains a good amount of logging at the INFO level, because this is the right place to document high-level flow of control (i.e. what the command-line options were, what config file was being used, etc.)
    • We assume that anything that must be seen on the screen is logged at the ERROR level. Errors that occur before logging can be configured are written to sys.stderr.
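    The documented exit codes lend themselves to a simple wrapper. A minimal sketch, assuming only that the cback script is on the PATH (the wrapper itself and its messages are illustrative, not part of Cedar Backup):

```python
import subprocess

# Exit codes and meanings as documented for cli(); 0 means success.
EXIT_MESSAGES = {
    1: "Python interpreter version is < 2.7",
    2: "Error processing command-line arguments",
    3: "Error configuring logging",
    4: "Error parsing indicated configuration file",
    5: "Backup was interrupted with a CTRL-C or similar",
    6: "Error executing specified backup actions",
}

def run_backup(actions):
    """Run cback with the given action list and translate its exit code."""
    result = subprocess.call(["cback"] + list(actions))
    if result == 0:
        return "ok"
    return EXIT_MESSAGES.get(result, "unknown error %d" % result)
```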

    _usage(fd=sys.stdout)

    source code 

    Prints usage information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)

    source code 

    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    setupLogging(options)

    source code 

    Set up logging based on command-line options.

    There are two kinds of logging: flow logging and output logging. Output logging contains information about system commands executed by Cedar Backup, for instance calls to mkisofs or mount. Flow logging contains error and informational messages used to understand program flow. Flow log messages and output log messages are written to two different logger targets (CedarBackup2.log and CedarBackup2.output). Flow log messages are written at the ERROR, INFO and DEBUG log levels, while output log messages are generally only written at the INFO log level.

    By default, output logging is disabled. When the options.output or options.debug flags are set, output logging will be written to the configured logfile. Output logging is never written to the screen.

    By default, flow logging is enabled at the ERROR level to the screen and at the INFO level to the configured logfile. If the options.quiet flag is set, flow logging is enabled at the INFO level to the configured logfile only (i.e. no output will be sent to the screen). If the options.verbose flag is set, flow logging is enabled at the INFO level to both the screen and the configured logfile. If the options.debug flag is set, flow logging is enabled at the DEBUG level to both the screen and the configured logfile.

    Parameters:
    • options (Options object) - Command-line options.
    Returns:
    Path to logfile on disk.
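    The default case described above (no quiet/verbose/debug flags) can be sketched with the standard logging module. Logger names and formats mirror this module's constants; the logfile path here is illustrative, and the real implementation builds this up through the _setup*Logging helpers:

```python
import logging
import os
import sys
import tempfile

# Illustrative logfile location; the real default is /var/log/cback.log.
logfile = os.path.join(tempfile.gettempdir(), "cback-example.log")

# Flow logger: error/informational messages about program flow.
flowLogger = logging.getLogger("CedarBackup2.log")
flowLogger.setLevel(logging.DEBUG)   # handlers do the real filtering

# Output logger: captured output of system commands (disabled by default).
outputLogger = logging.getLogger("CedarBackup2.output")
outputLogger.setLevel(logging.DEBUG)

# Default behavior: flow messages at ERROR to the screen...
screenHandler = logging.StreamHandler(sys.stdout)      # SCREEN_LOG_STREAM
screenHandler.setLevel(logging.ERROR)
screenHandler.setFormatter(logging.Formatter("%(message)s"))
flowLogger.addHandler(screenHandler)

# ...and at INFO to the configured logfile.
diskHandler = logging.FileHandler(logfile)
diskHandler.setLevel(logging.INFO)
diskHandler.setFormatter(logging.Formatter(
    "%(asctime)s --> [%(levelname)-7s] %(message)s"))  # DISK_LOG_FORMAT
flowLogger.addHandler(diskHandler)
```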

    _setupLogfile(options)

    source code 

    Sets up and creates logfile as needed.

    If the logfile already exists on disk, it will be left as-is, under the assumption that it was created with appropriate ownership and permissions. If the logfile does not exist on disk, it will be created as an empty file. Ownership and permissions will remain at their defaults unless user/group and/or mode are set in the options. We ignore errors setting the indicated user and group.

    Parameters:
    • options - Command-line options.
    Returns:
    Path to logfile on disk.

    Note: This function is vulnerable to a race condition. If the log file does not exist when the function is run, it will attempt to create the file as safely as possible (using O_CREAT). If two processes attempt to create the file at the same time, then one of them will fail. In practice, this shouldn't really be a problem, but it might happen occasionally if two instances of cback run concurrently or if cback collides with logrotate or something.
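    The create-if-missing strategy and its race condition can be sketched as follows. This is illustrative, not the module's actual code; the mode default mirrors DEFAULT_MODE (416 decimal is 0o640 octal):

```python
import os

def setup_logfile(path, mode=0o640):
    """Create the logfile if missing, leaving an existing file untouched."""
    if not os.path.exists(path):
        # O_CREAT|O_EXCL fails if another process creates the file between
        # the existence check and the open -- the race the note describes.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_APPEND, mode)
        os.close(fd)
    return path
```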

    _setupFlowLogging(logfile, options)

    source code 

    Sets up flow logging.

    Parameters:
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupOutputLogging(logfile, options)

    source code 

    Sets up command output logging.

    Parameters:
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupDiskFlowLogging(flowLogger, logfile, options)

    source code 

    Sets up on-disk flow logging.

    Parameters:
    • flowLogger - Python flow logger object.
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    _setupScreenFlowLogging(flowLogger, options)

    source code 

    Sets up on-screen flow logging.

    Parameters:
    • flowLogger - Python flow logger object.
    • options - Command-line options.

    _setupDiskOutputLogging(outputLogger, logfile, options)

    source code 

    Sets up on-disk command output logging.

    Parameters:
    • outputLogger - Python command output logger object.
    • logfile - Path to logfile on disk.
    • options - Command-line options.

    setupPathResolver(config)

    source code 

    Set up the path resolver singleton based on configuration.

    Cedar Backup's path resolver is implemented in terms of a singleton, the PathResolverSingleton class. This function takes options configuration, converts it into the dictionary form needed by the singleton, and then initializes the singleton. After that, any function that needs to resolve the path of a command can use the singleton.

    Parameters:
    • config (Config object) - Configuration
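    The singleton pattern described above can be sketched in standalone form. The real PathResolverSingleton lives in CedarBackup2.util; the class and mapping below are simplified illustrations of the same idea:

```python
class PathResolver(object):
    """Illustrative path-resolver singleton (not the real implementation)."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(PathResolver, cls).__new__(cls)
            cls._instance._mapping = {}
        return cls._instance

    def fill(self, mapping):
        """Initialize the singleton from a command-name-to-path dictionary."""
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        """Resolve the path of a command, falling back to a default."""
        return self._mapping.get(name, default)

# Configuration is converted to dictionary form and loaded once...
PathResolver().fill({"mkisofs": "/usr/local/bin/mkisofs"})

# ...after which any caller can resolve commands through the singleton.
path = PathResolver().lookup("mkisofs", "mkisofs")
```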

    Variables Details

    VALID_ACTIONS

    List of valid actions.
    Value:
    ['collect',
     'stage',
     'store',
     'purge',
     'rebuild',
     'validate',
     'initialize',
     'all']
    

    LONG_SWITCHES

    Value:
    ['help',
     'version',
     'verbose',
     'quiet',
     'config=',
     'full',
     'managed',
     'managed-only',
    ...
    

    Package CedarBackup2 :: Package tools :: Module span

    Module span

    source code

    Spans staged data among multiple discs

    This is the Cedar Backup span tool. It is intended for use by people who stage more data than can fit on a single disc. It allows a user to split staged data among more than one disc. It can't be an extension because it requires user input when switching media.

    Most configuration is taken from the Cedar Backup configuration file, specifically the store section. A few pieces of configuration are taken directly from the user.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      SpanOptions
    Tool-specific command-line options.
    Functions
     
    cli()
    Implements the command-line interface for the cback-span script.
    source code
     
    _usage(fd=sys.stdout)
    Prints usage information for the cback script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback script.
    source code
     
    _diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
    source code
     
    _executeAction(options, config)
    Implements the guts of the cback-span tool.
    source code
     
    _findDailyDirs(stagingDir)
    Returns a list of all daily staging directories that have not yet been stored.
    source code
     
    _writeStoreIndicator(config, dailyDirs)
    Writes a store indicator file into daily directories.
    source code
     
    _getWriter(config)
    Gets a writer and media capacity from store configuration.
    source code
     
    _writeDisc(config, writer, spanItem)
    Writes a span item to disc.
    source code
     
    _discInitializeImage(config, writer, spanItem)
    Initialize an ISO image for a span item.
    source code
     
    _discWriteImage(config, writer)
    Writes an ISO image for a span item.
    source code
     
    _discConsistencyCheck(config, writer, spanItem)
    Run a consistency check on an ISO image for a span item.
    source code
     
    _consistencyCheck(config, fileList)
    Runs a consistency check against media in the backup device.
    source code
     
    _getYesNoAnswer(prompt, default)
    Get a yes/no answer from the user.
    source code
     
    _getChoiceAnswer(prompt, default, validChoices)
    Get a particular choice from the user.
    source code
     
    _getFloat(prompt, default)
    Get a floating point number from the user.
    source code
     
    _getReturn(prompt)
    Get a return key from the user.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.tools.span")
      __package__ = 'CedarBackup2.tools'
    Function Details

    cli()

    source code 

    Implements the command-line interface for the cback-span script.

    Essentially, this is the "main routine" for the cback-span script. It does all of the argument processing for the script, and then also implements the tool functionality.

    This function looks pretty similar to CedarBackup2.cli.cli(). It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 2.7
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 4: Error parsing indicated configuration file
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing other parts of the script
    Returns:
    Error code as described above.

    Note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively.

    _usage(fd=sys.stdout)

    source code 

    Prints usage information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)

    source code 

    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _executeAction(options, config)

    source code 

    Implements the guts of the cback-span tool.

    Parameters:
    • options (SpanOptions object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • Exception - Under many generic error conditions

    _findDailyDirs(stagingDir)

    source code 

    Returns a list of all daily staging directories that have not yet been stored.

    The store indicator file cback.store will be written to a daily staging directory once that directory is written to disc. So, this function looks at each daily staging directory within the configured staging directory, and returns a list of those which do not contain the indicator file.

    Returned is a tuple containing two items: a list of daily staging directories, and a BackupFileList containing all files among those staging directories.

    Parameters:
    • stagingDir - Configured staging directory
    Returns:
    Tuple (staging dirs, backup file list)
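    The indicator-file scan described above can be sketched in a few lines. The cback.store indicator name comes from the documentation; the directory walk itself is a simplified illustration (the real function also builds a BackupFileList):

```python
import os

STORE_INDICATOR = "cback.store"  # per the documentation above

def find_unstored_dirs(stagingDir):
    """Return daily staging directories that lack a store indicator."""
    unstored = []
    for entry in sorted(os.listdir(stagingDir)):
        daily = os.path.join(stagingDir, entry)
        if os.path.isdir(daily):
            if not os.path.exists(os.path.join(daily, STORE_INDICATOR)):
                unstored.append(daily)
    return unstored
```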

    _writeStoreIndicator(config, dailyDirs)

    source code 

    Writes a store indicator file into daily directories.

    Parameters:
    • config - Config object.
    • dailyDirs - List of daily directories

    _getWriter(config)

    source code 

    Gets a writer and media capacity from store configuration. Returned is a writer and a media capacity in bytes.

    Parameters:
    • config - Cedar Backup configuration
    Returns:
    Tuple of (writer, mediaCapacity)

    _writeDisc(config, writer, spanItem)

    source code 

    Writes a span item to disc.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _discInitializeImage(config, writer, spanItem)

    source code 

    Initialize an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _discWriteImage(config, writer)

    source code 

    Writes an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use

    _discConsistencyCheck(config, writer, spanItem)

    source code 

    Run a consistency check on an ISO image for a span item.

    Parameters:
    • config - Cedar Backup configuration
    • writer - Writer to use
    • spanItem - Span item to write

    _consistencyCheck(config, fileList)

    source code 

    Runs a consistency check against media in the backup device.

    The function mounts the device at a temporary mount point in the working directory, and then compares the passed-in file list's digest map with the one generated from the disc. The two lists should be identical.

    If no exceptions are thrown, there were no problems with the consistency check.

    Parameters:
    • config - Config object.
    • fileList - BackupFileList whose contents to check against
    Raises:
    • ValueError - If the check fails
    • IOError - If there is a problem working with the media.

    Warning: The implementation of this function is very UNIX-specific.
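    The digest-map comparison at the heart of this check can be sketched in standalone form. The real implementation works through BackupFileList's digest map; the hash choice and helpers below are illustrative assumptions:

```python
import hashlib
import os

def digest_map(root):
    """Build a relative-path-to-digest map for every file under root."""
    digests = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            relative = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[relative] = hashlib.sha1(f.read()).hexdigest()
    return digests

def consistency_check(stagedRoot, mountedRoot):
    """Raise ValueError if the two trees' digest maps differ."""
    if digest_map(stagedRoot) != digest_map(mountedRoot):
        raise ValueError("Consistency check failed")
```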

    _getYesNoAnswer(prompt, default)

    source code 

    Get a yes/no answer from the user. The default will be placed at the end of the prompt. A "Y" or "y" is considered yes, anything else no. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is blank
    Returns:
    Boolean true/false corresponding to Y/N

    _getChoiceAnswer(prompt, default, validChoices)

    source code 

    Get a particular choice from the user. The default will be placed at the end of the prompt. The function loops until getting a valid choice. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is None or blank.
    • validChoices - List of valid choices (strings)
    Returns:
    Valid choice from user.

    _getFloat(prompt, default)

    source code 

    Get a floating point number from the user. The default will be placed at the end of the prompt. The function loops until getting a valid floating point number. A blank (empty) response results in the default.

    Parameters:
    • prompt - Prompt to show.
    • default - Default to set if the result is None or blank.
    Returns:
    Floating point number from user
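    The loop-until-valid behavior described for _getFloat can be sketched as follows. The reader callable standing in for user input is an illustrative device, not part of the real function's signature:

```python
def get_float(prompt, default, reader):
    """Prompt until the response parses as a float; blank selects default."""
    while True:
        answer = reader("%s [%s]: " % (prompt, default))
        if answer is None or answer.strip() == "":
            return default
        try:
            return float(answer)
        except ValueError:
            pass  # invalid input, prompt again
```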

    _getReturn(prompt)

    source code 

    Get a return key from the user.

    Parameters:
    • prompt - Prompt to show.

    Package CedarBackup2 :: Package writers :: Module dvdwriter :: Class DvdWriter

    Class DvdWriter

    source code

    object --+
             |
            DvdWriter
    

    Class representing a device that knows how to write some kinds of DVD media.

    Summary

    This is a class representing a device that knows how to write some kinds of DVD media. It provides common operations for the device, such as ejecting the media and writing data to the media.

    This class is implemented in terms of the eject and growisofs utilities, both of which should be available on most UN*X platforms.

    Image Writer Interface

    The following methods make up the "image writer" interface shared with other kinds of writers:

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()
    

    Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer.

    The media attribute is also assumed to be available.

    Unlike the CdWriter, the DvdWriter can only operate in terms of filesystem devices, not SCSI devices. So, although the constructor interface accepts a SCSI device parameter for the sake of compatibility, it's not used.

    Media Types

    This class knows how to write to DVD+R and DVD+RW media, represented by the following constants:

    • MEDIA_DVDPLUSR: DVD+R media (4.4 GB capacity)
    • MEDIA_DVDPLUSRW: DVD+RW media (4.4 GB capacity)

    The difference is that DVD+RW media can be rewritten, while DVD+R media cannot be (although at present, DvdWriter does not really differentiate between rewritable and non-rewritable media).

    The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.
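    The arithmetic behind the 4.4 GB figure: single-layer DVD+R/DVD+RW media nominally holds 4,700,000,000 bytes ("4.7 GB" in marketing terms), which works out to roughly 4.38 "true" gigabytes:

```python
# "True" gigabyte, per the documentation above.
GIGABYTE = 1024 * 1024 * 1024

mediaBytes = 4700000000            # nominal single-layer DVD capacity
trueGigabytes = mediaBytes / float(GIGABYTE)   # about 4.38, rounded to 4.4
```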

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

    Device Attributes vs. Media Attributes

    As with the cdwriter functionality, a given dvdwriter instance has two different kinds of attributes associated with it. I call these device attributes and media attributes.

    Device attributes are things which can be determined without looking at the media. Media attributes are attributes which vary depending on the state of the media. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls.

    Compared to cdwriters, dvdwriters have very few attributes. This is due to differences between the way growisofs works relative to cdrecord.

    Media Capacity

    One major difference between the cdrecord/mkisofs utilities used by the cdwriter class and the growisofs utility used here is that the process of estimating remaining capacity and image size is more straightforward with cdrecord/mkisofs than with growisofs.

    In this class, remaining capacity is calculated by doing a dry run of growisofs and grabbing some information from the output of that command. Image size is estimated by asking the IsoImage class for an estimate and then adding on a "fudge factor" determined through experimentation.

    Testing

    It's rather difficult to test this code in an automated fashion, even if you have access to a physical DVD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to.

    Because of this, some of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the "difficult" functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all.

    Instance Methods
     
    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=2, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    Initializes a DVD writer object.
    source code
     
    isRewritable(self)
    Indicates whether the media is rewritable per configuration.
    source code
     
    retrieveCapacity(self, entireDisc=False)
    Retrieves capacity for the current media in terms of a MediaCapacity object.
    source code
     
    openTray(self)
    Opens the device's tray and leaves it open.
    source code
     
    closeTray(self)
    Closes the device's tray.
    source code
     
    refreshMedia(self)
    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.
    source code
     
    initializeImage(self, newDisc, tmpdir, mediaLabel=None)
    Initializes the writer's associated ISO image.
    source code
     
    addImageEntry(self, path, graftPoint)
    Adds a filepath entry to the writer's associated ISO image.
    source code
     
    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)
    Writes an ISO image to the media in the device.
    source code
     
    setImageNewDisc(self, newDisc)
    Resets (overrides) the newDisc flag on the internal image.
    source code
     
    getEstimatedImageSize(self)
    Gets the estimated size of the image associated with the writer.
    source code
     
    _writeImage(self, newDisc, imagePath, entries, mediaLabel=None)
    Writes an image to disc using either an entries list or an ISO image on disk.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _getScsiId(self)
    Property target used to get the SCSI id value.
    source code
     
    _getHardwareId(self)
    Property target used to get the hardware id value.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _getMedia(self)
    Property target used to get the media description.
    source code
     
    _getDeviceHasTray(self)
    Property target used to get the device-has-tray flag.
    source code
     
    _getDeviceCanEject(self)
    Property target used to get the device-can-eject flag.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the configured refresh media delay, in seconds.
    source code
     
    _getEjectDelay(self)
    Property target used to get the configured eject delay, in seconds.
    source code
     
    unlockTray(self)
    Unlocks the device's tray via 'eject -i off'.
    source code
     
    _retrieveSectorsUsed(self)
    Retrieves the number of sectors used on the current media.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods
     
    _getEstimatedImageSize(entries)
    Gets the estimated size of a set of image entries.
    source code
     
    _searchForOverburn(output)
    Search for an "overburn" error message in growisofs output.
    source code
     
    _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False)
    Builds a list of arguments to be passed to a growisofs command.
    source code
     
    _parseSectorsUsed(output)
    Parse sectors used information out of growisofs output.
    source code
    Properties
      device
    Filesystem device name for this writer.
      scsiId
    SCSI id for the device (saved for reference only).
      hardwareId
    Hardware id for this writer (always the device path).
      driveSpeed
    Speed at which the drive writes.
      media
    Definition of media that is expected to be in the device.
      deviceHasTray
    Indicates whether the device has a media tray.
      deviceCanEject
    Indicates whether the device supports ejecting its media.
      refreshMediaDelay
    Refresh media delay, in seconds.
      ejectDelay
    Eject delay, in seconds.

    Inherited from object: __class__

    Method Details

    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=2, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    (Constructor)

    source code 

    Initializes a DVD writer object.

    Since growisofs can only address devices using the device path (i.e. /dev/dvd), the hardware id will always be set based on the device. If passed in, it will be saved for reference purposes only.

    We have no way to query the device to ask whether it has a tray or can be safely opened and closed. So, the noEject flag is used to set these values. If noEject=False, then we assume a tray exists and open/close is safe. If noEject=True, then we assume that there is no tray and open/close is not safe.

    Parameters:
    • device (Absolute path to a filesystem device, i.e. /dev/dvd) - Filesystem device associated with this writer.
    • scsiId (If provided, SCSI id in the form [<method>:]scsibus,target,lun) - SCSI id for the device (optional, for reference only).
    • driveSpeed (Use 2 for 2x device, etc. or None to use device default.) - Speed at which the drive writes.
    • mediaType (One of the valid media type as discussed above.) - Type of the media that is assumed to be in the drive.
    • noEject (Boolean true/false) - Tells Cedar Backup that the device cannot safely be ejected
    • refreshMediaDelay (Number of seconds, an integer >= 0) - Refresh media delay to use, if any
    • ejectDelay (Number of seconds, an integer >= 0) - Eject delay to use, if any
    • unittest (Boolean true/false) - Turns off certain validations, for use in unit testing.
    Raises:
    • ValueError - If the device is not valid for some reason.
    • ValueError - If the SCSI id is not in a valid form.
    • ValueError - If the drive speed is not an integer >= 1.
    Overrides: object.__init__

    Note: The unittest parameter should never be set to True outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose.
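    Two of the constructor behaviors described above (hardware id always equals the device path; the noEject flag drives both tray attributes) can be sketched as a simple derivation. This is an illustration of the documented rules, not the real constructor:

```python
def derive_attributes(device, scsiId=None, noEject=False):
    """Derive DvdWriter-style attributes per the documented rules."""
    return {
        "hardwareId": device,          # growisofs addresses by device path
        "scsiId": scsiId,              # saved for reference purposes only
        "deviceHasTray": not noEject,  # assume a tray unless noEject is set
        "deviceCanEject": not noEject, # assume open/close is safe likewise
    }
```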

    retrieveCapacity(self, entireDisc=False)

    source code 

    Retrieves capacity for the current media in terms of a MediaCapacity object.

    If entireDisc is passed in as True, the capacity will be for the entire disc, as if it were to be rewritten from scratch. The same will happen if the disc can't be read for some reason. Otherwise, the capacity will be calculated by subtracting the sectors currently used on the disc, as reported by growisofs itself.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    Returns:
    MediaCapacity object describing the capacity of the media.
    Raises:
    • ValueError - If there is a problem parsing the growisofs output
    • IOError - If the media could not be read for some reason.

    openTray(self)

    source code 

    Opens the device's tray and leaves it open.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag.

    Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy.

    Raises:
    • IOError - If there is an error talking to the device.
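    The unlock-then-eject recovery described above can be sketched as command construction. These argument lists are illustrative of what might be handed to util.executeCommand; the helper name is hypothetical.

    ```python
    # Hedged sketch of the unlock-then-eject recovery described above.
    def build_eject_commands(device, unlock_first=False):
        """Return eject command argument lists, unlocking the tray if asked."""
        commands = []
        if unlock_first:
            # 'eject -i off' clears the drive lock that causes the
            # "Inappropriate ioctl for device" failure after a write
            commands.append(["eject", "-i", "off", device])
        commands.append(["eject", device])
        return commands

    print(build_eject_commands("/dev/cdrom", unlock_first=True))
    # [['eject', '-i', 'off', '/dev/cdrom'], ['eject', '/dev/cdrom']]
    ```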

    closeTray(self)

    source code 

    Closes the device's tray.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    Raises:
    • IOError - If there is an error talking to the device.

    refreshMedia(self)

    source code 

    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.

    Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.)

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though.

    Raises:
    • IOError - If there is an error talking to the device.

    initializeImage(self, newDisc, tmpdir, mediaLabel=None)

    source code 

    Initializes the writer's associated ISO image.

    This method initializes the image instance variable so that the caller can use the addImageEntry method. Once entries have been added, the writeImage method can be called with no arguments.

    Parameters:
    • newDisc (Boolean true/false) - Indicates whether the disc should be re-initialized
    • tmpdir (String representing a directory path on disk) - Temporary directory to use if needed
    • mediaLabel (String, no more than 25 characters long) - Media label to be applied to the image, if any

    addImageEntry(self, path, graftPoint)

    source code 

    Adds a filepath entry to the writer's associated ISO image.

    The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass None.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    Raises:
    • ValueError - If initializeImage() was not previously called
    • ValueError - If the path is not a valid file or directory

    Note: Before calling this method, you must call initializeImage.

    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)

    source code 

    Writes an ISO image to the media in the device.

    If newDisc is passed in as True, we assume that the entire disc will be re-created from scratch. Note that unlike CdWriter, DvdWriter does not blank rewritable media before reusing it; however, growisofs is called such that the media will be re-initialized as needed.

    If imagePath is passed in as None, then the existing image configured with initializeImage() will be used. Under these circumstances, the passed-in newDisc flag will be ignored and the value passed in to initializeImage() will apply instead.

    The writeMulti argument is ignored. It exists for compatibility with the Cedar Backup image writer interface.

    Parameters:
    • imagePath (String representing a path on disk) - Path to an ISO image on disk, or None to use writer's image
    • newDisc (Boolean true/false.) - Indicates whether the disc should be re-initialized
    • writeMulti (Boolean true/false) - Unused
    Raises:
    • ValueError - If the image path is not absolute.
    • ValueError - If some path cannot be encoded properly.
    • IOError - If the media could not be written to for some reason.
    • ValueError - If no image is passed in and initializeImage() was not previously called

    Note: The image size indicated in the log ("Image size will be...") is an estimate. The estimate is conservative and is probably larger than the actual space that dvdwriter will use.
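    The imagePath/newDisc resolution rules above can be sketched in plain Python. This is a hypothetical model, not the actual DvdWriter code: the initialized image is represented as a dict rather than an IsoImage object.

    ```python
    # Hypothetical sketch of the writeImage() argument resolution rules.
    def resolve_write(image_path, new_disc, initialized_image):
        """Return (source, effective_new_disc) per the writeImage rules."""
        if image_path is None:
            if initialized_image is None:
                raise ValueError("initializeImage() was not previously called")
            # The caller's newDisc flag is ignored; the value passed to
            # initializeImage() applies instead.
            return initialized_image["entries"], initialized_image["newDisc"]
        return image_path, new_disc

    image = {"entries": {"/data": "backup/"}, "newDisc": True}
    print(resolve_write(None, False, image))         # ({'/data': 'backup/'}, True)
    print(resolve_write("/tmp/cd.iso", False, None)) # ('/tmp/cd.iso', False)
    ```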

    setImageNewDisc(self, newDisc)

    source code 

    Resets (overrides) the newDisc flag on the internal image.

    Parameters:
    • newDisc - New disc flag to set
    Raises:
    • ValueError - If initializeImage() was not previously called

    getEstimatedImageSize(self)

    source code 

    Gets the estimated size of the image associated with the writer.

    This is an estimate and is conservative. The actual image could be as much as 450 blocks (sectors) smaller under some circumstances.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If initializeImage() was not previously called

    _writeImage(self, newDisc, imagePath, entries, mediaLabel=None)

    source code 

    Writes an image to disc using either an entries list or an ISO image on disk.

    Callers are assumed to have done validation on paths, etc. before calling this method.

    Parameters:
    • newDisc - Indicates whether the disc should be re-initialized
    • imagePath - Path to an ISO image on disk, or None to use entries
    • entries - Mapping from path to graft point, or None to use imagePath
    Raises:
    • IOError - If the media could not be written to for some reason.

    _getEstimatedImageSize(entries)
    Static Method

    source code 

    Gets the estimated size of a set of image entries.

    This is implemented in terms of the IsoImage class. The returned value is calculated by adding a "fudge factor" to the value from IsoImage. This fudge factor was determined by experimentation and is conservative -- the actual image could be as much as 450 blocks smaller under some circumstances.

    Parameters:
    • entries - Dictionary mapping path to graft point.
    Returns:
    Total estimated size of image, in bytes.
    Raises:
    • ValueError - If there are no entries in the dictionary
    • ValueError - If any path in the dictionary does not exist
    • IOError - If there is a problem calling mkisofs.

    _searchForOverburn(output)
    Static Method

    source code 

    Search for an "overburn" error message in growisofs output.

    The growisofs command returns a non-zero exit code and puts a message into the output -- even on a dry run -- if there is not enough space on the media. This is called an "overburn" condition.

    The error message looks like this:

      :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!
    

    This method looks for the overburn error message anywhere in the output. If a matching error message is found, an IOError exception is raised containing relevant information about the problem. Otherwise, the method call returns normally.

    Parameters:
    • output - List of output lines to search, as from executeCommand
    Raises:
    • IOError - If an overburn condition is found.
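    The search described above can be sketched with a regular expression matched against the sample error message. This is a hedged re-implementation for illustration; the real method is DvdWriter._searchForOverburn.

    ```python
    import re

    # Hedged sketch of the overburn check described above.
    OVERBURN = re.compile(r":-\(\s+(\S+):\s+(\d+)\s+blocks are free,\s+(\d+)\s+to be written!")

    def search_for_overburn(output):
        """Raise IOError if any output line reports an overburn condition."""
        for line in output:
            match = OVERBURN.search(line)
            if match:
                device = match.group(1)
                free, needed = int(match.group(2)), int(match.group(3))
                raise IOError("Overburn on %s: %d blocks free, %d to be written"
                              % (device, free, needed))

    search_for_overburn(["normal growisofs output"])  # returns normally
    try:
        search_for_overburn([":-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!"])
    except IOError as error:
        print(error)
    ```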

    _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False)
    Static Method

    source code 

    Builds a list of arguments to be passed to a growisofs command.

    The arguments will either cause growisofs to write the indicated image file to disc, or will pass growisofs a list of directories or files that should be written to disc.

    If a new image is created, it will always be created with Rock Ridge extensions (-r). A volume name will be applied (-V) if mediaLabel is not None.

    Parameters:
    • newDisc - Indicates whether the disc should be re-initialized
    • hardwareId - Hardware id for the device
    • driveSpeed - Speed at which the drive writes.
    • imagePath - Path to an ISO image on disk, or None to use entries
    • entries - Mapping from path to graft point, or None to use imagePath
    • mediaLabel - Media label to set on the image, if any
    • dryRun - Says whether to make this a dry run (for checking capacity)
    Returns:
    List suitable for passing to util.executeCommand as args.
    Raises:
    • ValueError - If caller does not pass one or the other of imagePath or entries.
    Notes:
    • If we write an existing image to disc, then the mediaLabel is ignored. The media label is an attribute of the image, and should be set on the image when it is created.
    • We always pass the undocumented option -use-the-force-luke=tty to growisofs. Without this option, growisofs will refuse to execute certain actions when running from cron. A good example is -Z, which happily overwrites an existing DVD from the command-line, but fails when run from cron. It took a while to figure that out, since it worked every time I tested it by hand. :(

    unlockTray(self)

    source code 

    Unlocks the device's tray via 'eject -i off'.

    Raises:
    • IOError - If there is an error talking to the device.

    _retrieveSectorsUsed(self)

    source code 

    Retrieves the number of sectors used on the current media.

    This is a little ugly. We need to call growisofs in "dry-run" mode and parse some information from its output. However, to do that, we need to create a dummy file that we can pass to the command -- and we have to make sure to remove it later.

    Once growisofs has been run, then we call _parseSectorsUsed to parse the output and calculate the number of sectors used on the media.

    Returns:
    Number of sectors used on the media

    _parseSectorsUsed(output)
    Static Method

    source code 

    Parse sectors used information out of growisofs output.

    The first line of a growisofs run looks something like this:

      Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'
    

    Dmitry has determined that the seek value in this line gives us information about how much data has previously been written to the media. That value multiplied by 16 yields the number of sectors used.

    If the seek line cannot be found in the output, then a sectors-used value of zero is assumed.

    Returns:
    Sectors used on the media, as a floating point number.
    Raises:
    • ValueError - If the output cannot be parsed properly.
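    The seek-based parsing above can be sketched as follows. This is a hedged illustration, not the actual DvdWriter._parseSectorsUsed; the multiplier of 16 reflects that each 32 KB seek unit covers sixteen 2 KB sectors.

    ```python
    import re

    # Hedged sketch of the seek-based parsing described above.
    SEEK = re.compile(r"seek=(\d+)")

    def parse_sectors_used(output):
        """Return sectors used on the media, as a floating point number."""
        for line in output:
            match = SEEK.search(line)
            if match:
                return float(match.group(1)) * 16.0
        return 0.0  # no seek line found: assume nothing has been written

    line = "Executing 'mkisofs -C 973744,1401056 ... obs=32k seek=87566'"
    print(parse_sectors_used([line]))   # 1401056.0
    print(parse_sectors_used(["none"])) # 0.0
    ```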

    Property Details [hide private]

    device

    Filesystem device name for this writer.

    Get Method:
    _getDevice(self) - Property target used to get the device value.

    scsiId

    SCSI id for the device (saved for reference only).

    Get Method:
    _getScsiId(self) - Property target used to get the SCSI id value.

    hardwareId

    Hardware id for this writer (always the device path).

    Get Method:
    _getHardwareId(self) - Property target used to get the hardware id value.

    driveSpeed

    Speed at which the drive writes.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.

    media

    Definition of media that is expected to be in the device.

    Get Method:
    _getMedia(self) - Property target used to get the media description.

    deviceHasTray

    Indicates whether the device has a media tray.

    Get Method:
    _getDeviceHasTray(self) - Property target used to get the device-has-tray flag.

    deviceCanEject

    Indicates whether the device supports ejecting its media.

    Get Method:
    _getDeviceCanEject(self) - Property target used to get the device-can-eject flag.

    refreshMediaDelay

    Refresh media delay, in seconds.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the configured refresh media delay, in seconds.

    ejectDelay

    Eject delay, in seconds.

    Get Method:
    _getEjectDelay(self) - Property target used to get the configured eject delay, in seconds.

    CedarBackup2.cli._ActionSet
    Package CedarBackup2 :: Module cli :: Class _ActionSet

    Class _ActionSet

    source code

    object --+
             |
            _ActionSet
    

    Class representing a set of local actions to be executed.

    This class does four different things. First, it ensures that the actions specified on the command-line are sensible. The command-line can only list either built-in actions or extended actions specified in configuration. Also, certain actions (in NONCOMBINE_ACTIONS) cannot be combined with other actions.

    Second, the class enforces an execution order on the specified actions. Any time actions are combined on the command line (either built-in actions or extended actions), we must make sure they get executed in a sensible order.

    Third, the class ensures that any pre-action or post-action hooks are scheduled and executed appropriately. Hooks are configured by building a dictionary mapping between hook action name and command. Pre-action hooks are executed immediately before their associated action, and post-action hooks are executed immediately after their associated action.

    Finally, the class properly interleaves local and managed actions so that the same action gets executed first locally and then on managed peers.

    Instance Methods [hide private]
     
    __init__(self, actions, extensions, options, peers, managed, local)
    Constructor for the _ActionSet class.
    source code
     
    executeActions(self, configPath, options, config)
    Executes all actions and extended actions, in the proper order.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods [hide private]
     
    _deriveExtensionNames(extensions)
    Builds a list of extended actions that are available in configuration.
    source code
     
    _buildHookMaps(hooks)
    Build two mappings from action name to configured ActionHook.
    source code
     
    _buildFunctionMap(extensions)
    Builds a mapping from named action to action function.
    source code
     
    _buildIndexMap(extensions)
    Builds a mapping from action name to proper execution index.
    source code
     
    _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap)
    Builds a mapping from action name to list of action items.
    source code
     
    _buildPeerMap(options, peers)
    Build a mapping from action name to list of remote peers.
    source code
     
    _deriveHooks(action, preHookDict, postHookDict)
    Derive pre- and post-action hooks, if any, associated with named action.
    source code
     
    _validateActions(actions, extensionNames)
    Validate that the set of specified actions is sensible.
    source code
     
    _buildActionSet(actions, actionMap)
    Build set of actions to be executed.
    source code
     
    _getRemoteUser(options, remotePeer)
    Gets the remote user associated with a remote peer.
    source code
     
    _getRshCommand(options, remotePeer)
    Gets the RSH command associated with a remote peer.
    source code
     
    _getCbackCommand(options, remotePeer)
    Gets the cback command associated with a remote peer.
    source code
     
    _getManagedActions(options, remotePeer)
    Gets the managed actions list associated with a remote peer.
    source code
    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, actions, extensions, options, peers, managed, local)
    (Constructor)

    source code 

    Constructor for the _ActionSet class.

    This is kind of ugly, because the constructor has to set up a lot of data before being able to do anything useful. The following data structures are initialized based on the input:

    • extensionNames: List of extensions available in configuration
    • preHookMap: Mapping from action name to list of PreActionHook
    • postHookMap: Mapping from action name to list of PostActionHook
    • functionMap: Mapping from action name to Python function
    • indexMap: Mapping from action name to execution index
    • peerMap: Mapping from action name to set of RemotePeer
    • actionMap: Mapping from action name to _ActionItem

    Once these data structures are set up, the command line is validated to make sure only valid actions have been requested, and in a sensible combination. Then, all of the data is used to build self.actionSet, the set of action items to be executed by executeActions(). This list might contain either _ActionItem or _ManagedActionItem.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • extensions - Extended action configuration (i.e. config.extensions)
    • options - Options configuration (i.e. config.options)
    • peers - Peers configuration (i.e. config.peers)
    • managed - Whether to include managed actions in the set
    • local - Whether to include local actions in the set
    Raises:
    • ValueError - If one of the specified actions is invalid.
    Overrides: object.__init__

    executeActions(self, configPath, options, config)

    source code 

    Executes all actions and extended actions, in the proper order.

    Each action (whether built-in or extension) is executed in an identical manner. The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action functions.
    • config - Parsed configuration to be passed to action functions.
    Raises:
    • Exception - If there is a problem executing the actions.

    _deriveExtensionNames(extensions)
    Static Method

    source code 

    Builds a list of extended actions that are available in configuration.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    List of extended action names.

    _buildHookMaps(hooks)
    Static Method

    source code 

    Build two mappings from action name to configured ActionHook.

    Parameters:
    • hooks - List of pre- and post-action hooks (i.e. config.options.hooks)
    Returns:
    Tuple of (pre hook dictionary, post hook dictionary).

    _buildFunctionMap(extensions)
    Static Method

    source code 

    Builds a mapping from named action to action function.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    Dictionary mapping action to function.

    _buildIndexMap(extensions)
    Static Method

    source code 

    Builds a mapping from action name to proper execution index.

    If extensions configuration is None, or there are no configured extended actions, the ordering dictionary will only include the built-in actions and their standard indices.

    Otherwise, if the extensions order mode is None or "index", actions will be scheduled by explicit index; and if the extensions order mode is "dependency", actions will be scheduled using a dependency graph.

    Parameters:
    • extensions - Extended action configuration (i.e. config.extensions)
    Returns:
    Dictionary mapping action name to integer execution index.
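    Index-mode scheduling can be sketched as a merge of extension indices over the built-in ordering. The built-in index values and the "sysinfo" entry below are assumed for illustration, not the actual Cedar Backup constants.

    ```python
    # Hypothetical sketch of index-mode scheduling described above.
    BUILTIN_INDICES = {"collect": 100, "stage": 200, "store": 300, "purge": 400}

    def build_index_map(extension_indices):
        """Merge configured extension indices over the built-in ordering."""
        index_map = dict(BUILTIN_INDICES)
        if extension_indices:
            index_map.update(extension_indices)
        return index_map

    index_map = build_index_map({"sysinfo": 150})  # assumed extension config
    print(sorted(["store", "sysinfo", "collect"], key=index_map.get))
    # ['collect', 'sysinfo', 'store']
    ```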

    _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap)
    Static Method

    source code 

    Builds a mapping from action name to list of action items.

    We build either _ActionItem or _ManagedActionItem objects here.

    In most cases, the mapping from action name to _ActionItem is 1:1. The exception is the "all" action, which is a special case. However, a list is returned in all cases, just for consistency later. Each _ActionItem will be created with a proper function reference and index value for execution ordering.

    The mapping from action name to _ManagedActionItem is always 1:1. Each managed action item contains a list of peers on which the action should be executed.

    Parameters:
    • managed - Whether to include managed actions in the set
    • local - Whether to include local actions in the set
    • extensionNames - List of valid extended action names
    • functionMap - Dictionary mapping action name to Python function
    • indexMap - Dictionary mapping action name to integer execution index
    • preHookMap - Dictionary mapping action name to pre hooks (if any) for the action
    • postHookMap - Dictionary mapping action name to post hooks (if any) for the action
    • peerMap - Dictionary mapping action name to list of remote peers on which to execute the action
    Returns:
    Dictionary mapping action name to list of _ActionItem objects.

    _buildPeerMap(options, peers)
    Static Method

    source code 

    Build a mapping from action name to list of remote peers.

    There will be one entry in the mapping for each managed action. If there are no managed peers, the mapping will be empty. Only managed actions will be listed in the mapping.

    Parameters:
    • options - Option configuration (i.e. config.options)
    • peers - Peers configuration (i.e. config.peers)

    _deriveHooks(action, preHookDict, postHookDict)
    Static Method

    source code 

    Derive pre- and post-action hooks, if any, associated with named action.

    Parameters:
    • action - Name of action to look up
    • preHookDict - Dictionary mapping action name to pre-action hooks
    • postHookDict - Dictionary mapping action name to post-action hooks
    Returns:
    Tuple (preHooks, postHooks) per mapping, with None values if there is no hook.
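    The hook lookup can be sketched as two dictionary reads. In this hedged illustration, lists of shell command names stand in for the real PreActionHook/PostActionHook configuration objects.

    ```python
    # Hedged sketch of the hook derivation described above.
    def derive_hooks(action, pre_hook_dict, post_hook_dict):
        """Return (preHooks, postHooks) for the action, None when absent."""
        return pre_hook_dict.get(action), post_hook_dict.get(action)

    pre = {"collect": ["snapshot-lvm.sh"]}   # assumed hook configuration
    post = {"collect": ["release-lvm.sh"]}
    print(derive_hooks("collect", pre, post))
    # (['snapshot-lvm.sh'], ['release-lvm.sh'])
    print(derive_hooks("store", pre, post))
    # (None, None)
    ```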

    _validateActions(actions, extensionNames)
    Static Method

    source code 

    Validate that the set of specified actions is sensible.

    Any specified action must either be a built-in action or must be among the extended actions defined in configuration. The actions from within NONCOMBINE_ACTIONS may not be combined with other actions.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • extensionNames - Names of extensions specified in configuration.
    Raises:
    • ValueError - If one or more configured actions are not valid.
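    The validation rules above can be sketched as two passes over the requested actions. The action-name lists here are assumptions for illustration, not the real built-in or NONCOMBINE_ACTIONS constants.

    ```python
    # Hedged sketch of the validation rules described above.
    BUILTIN_ACTIONS = ["collect", "stage", "store", "purge", "all", "rebuild", "validate"]
    NONCOMBINE_ACTIONS = ["rebuild", "validate"]  # assumed contents

    def validate_actions(actions, extension_names):
        """Raise ValueError for unknown actions or invalid combinations."""
        for action in actions:
            if action not in BUILTIN_ACTIONS and action not in extension_names:
                raise ValueError("Action %r is not a valid action." % action)
        for action in NONCOMBINE_ACTIONS:
            if action in actions and len(actions) > 1:
                raise ValueError("Action %r cannot be combined with other actions." % action)

    validate_actions(["collect", "stage"], [])  # sensible combination: OK
    validate_actions(["sysinfo"], ["sysinfo"])  # configured extension: OK
    try:
        validate_actions(["rebuild", "store"], [])
    except ValueError as error:
        print(error)
    ```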

    _buildActionSet(actions, actionMap)
    Static Method

    source code 

    Build set of actions to be executed.

    The set of actions is built in the proper order, so executeActions can spin through the set without thinking about it. Since we've already validated that the set of actions is sensible, we don't take any precautions here to make sure things are combined properly. If the action is listed, it will be "scheduled" for execution.

    Parameters:
    • actions - Names of actions specified on the command-line.
    • actionMap - Dictionary mapping action name to _ActionItem object.
    Returns:
    Set of action items in proper order.

    _getRemoteUser(options, remotePeer)
    Static Method

    source code 

    Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Name of remote user associated with remote peer.

    _getRshCommand(options, remotePeer)
    Static Method

    source code 

    Gets the RSH command associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    RSH command associated with remote peer.

    _getCbackCommand(options, remotePeer)
    Static Method

    source code 

    Gets the cback command associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    cback command associated with remote peer.

    _getManagedActions(options, remotePeer)
    Static Method

    source code 

    Gets the managed actions list associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • options - OptionsConfig object, as from config.options
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Set of managed actions associated with remote peer.

    CedarBackup2.extend.sysinfo
    Package CedarBackup2 :: Package extend :: Module sysinfo

    Module sysinfo

    source code

    Provides an extension to save off important system recovery information.

    This is a simple Cedar Backup extension used to save off important system recovery information. It saves off three types of information:

    • Currently-installed Debian packages via dpkg --get-selections
    • Disk partition information via fdisk -l
    • System-wide mounted filesystem contents, via ls -laR

    The saved-off information is placed into the collect directory and is compressed using bzip2 to save space.

    This extension relies on the options and collect configurations in the standard Cedar Backup configuration file, but requires no new configuration of its own. No public functions other than the action are exposed since all of this is pretty simple.


    Note: If the dpkg or fdisk commands cannot be found in their normal locations or executed by the current user, those steps will be skipped and a note will be logged at the INFO level.
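    The availability check implied by the note above can be sketched as follows. This is a hedged illustration; the real extension uses the DPKG_PATH/FDISK_PATH constants and logs a message at the INFO level before skipping a step.

    ```python
    import os
    import sys

    # Hedged sketch of the skip-if-missing behavior noted above.
    def command_available(path):
        """True when the command exists and is executable by this user."""
        return os.path.exists(path) and os.access(path, os.X_OK)

    for path in ["/usr/bin/dpkg", "/sbin/fdisk"]:
        if not command_available(path):
            print("Skipping dump step: %s not available" % path)

    print(command_available(sys.executable))  # True
    ```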

    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions [hide private]
     
    executeAction(configPath, options, config)
    Executes the sysinfo backup action.
    source code
     
    _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True)
    Dumps a list of currently installed Debian packages via dpkg.
    source code
     
    _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True)
    Dumps information about the partition table via fdisk.
    source code
     
    _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True)
    Dumps complete listing of filesystem contents via ls -laR.
    source code
     
    _getOutputFile(targetDir, name, compress=True)
    Opens the output file used for saving a dump to the filesystem.
    source code
    Variables [hide private]
      logger = logging.getLogger("CedarBackup2.log.extend.sysinfo")
      DPKG_PATH = '/usr/bin/dpkg'
      FDISK_PATH = '/sbin/fdisk'
      DPKG_COMMAND = ['/usr/bin/dpkg', '--get-selections']
      FDISK_COMMAND = ['/sbin/fdisk', '-l']
      LS_COMMAND = ['ls', '-laR', '/']
      __package__ = 'CedarBackup2.extend'
    Function Details [hide private]

    executeAction(configPath, options, config)

    source code 

    Executes the sysinfo backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If the backup process fails for some reason.

    _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps a list of currently installed Debian packages via dpkg.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps information about the partition table via fdisk.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True)

    source code 

    Dumps complete listing of filesystem contents via ls -laR.

    Parameters:
    • targetDir - Directory to write output file into.
    • backupUser - User which should own the resulting file.
    • backupGroup - Group which should own the resulting file.
    • compress - Indicates whether to compress the output file.
    Raises:
    • IOError - If the dump fails for some reason.

    _getOutputFile(targetDir, name, compress=True)

    source code 

    Opens the output file used for saving a dump to the filesystem.

    The filename will be name.txt (or name.txt.bz2 if compress is True), written in the target directory.

    Parameters:
    • targetDir - Target directory to write file in.
    • name - Name of the file to create.
    • compress - Indicates whether to write compressed output.
    Returns:
    Tuple of (Output file object, filename)
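    The naming and compression behavior described above can be sketched as follows. This is a hedged illustration; the real helper also applies backup user/group ownership to the resulting file.

    ```python
    import bz2
    import os
    import tempfile

    # Hedged sketch of the output-file naming behavior described above.
    def get_output_file(target_dir, name, compress=True):
        """Open the dump output file, returning (file object, filename)."""
        suffix = ".txt.bz2" if compress else ".txt"
        filename = os.path.join(target_dir, name + suffix)
        handle = bz2.BZ2File(filename, "w") if compress else open(filename, "w")
        return handle, filename

    target = tempfile.mkdtemp()
    handle, filename = get_output_file(target, "fdisk")
    handle.write(b"partition table dump\n")
    handle.close()
    print(os.path.basename(filename))  # fdisk.txt.bz2
    ```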

    CedarBackup2.tools
    Package CedarBackup2 :: Package tools

    Package tools

    source code

    Official Cedar Backup Tools

    This package provides official Cedar Backup tools. Tools are things that feel a little like extensions, but don't fit the normal mold of extensions. For instance, they might not be intended to run from cron, or might need to interact dynamically with the user (i.e. accept user input).

    Tools are usually scripts that are run directly from the command line, just like the main cback script. Like the cback script, the majority of a tool is implemented in a .py module, and then the script just invokes the module's cli() function. The actual scripts for tools are distributed in the util/ directory.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules [hide private]

    Variables [hide private]
      __package__ = None
    CedarBackup2.filesystem
    Package CedarBackup2 :: Module filesystem

    Module filesystem

    source code

    Provides filesystem-related objects.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      FilesystemList
    Represents a list of filesystem items.
      BackupFileList
    List of files to be backed up.
      PurgeItemList
    List of files and directories to be purged.
      SpanItem
    Item returned by BackupFileList.generateSpan.
    Functions [hide private]
     
    normalizeDir(path)
    Normalizes a directory name.
    source code
     
    compareContents(path1, path2, verbose=False)
    Compares the contents of two directories to see if they are equivalent.
    source code
     
    compareDigestMaps(digest1, digest2, verbose=False)
    Compares two digest maps and throws an exception if they differ.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.filesystem")
      __package__ = 'CedarBackup2'
    Function Details

    normalizeDir(path)

    source code 

    Normalizes a directory name.

    For our purposes, a directory name is normalized by removing the trailing path separator, if any. This is important because we want directories to appear within lists in a consistent way, although from the user's perspective passing in /path/to/dir/ and /path/to/dir are equivalent.

    Parameters:
    • path (String representing a path on disk) - Path to be normalized.
    Returns:
    Normalized path, which should be equivalent to the original.
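
    The behavior described above can be sketched with the standard library alone (an illustrative stand-in, not the actual CedarBackup2.filesystem implementation):

```python
import os

def normalize_dir(path):
    # Sketch of the documented behavior: strip the trailing path
    # separator so /path/to/dir/ and /path/to/dir compare as equal,
    # while leaving the root directory itself untouched.
    if path != os.sep and path.endswith(os.sep):
        return path[:-len(os.sep)]
    return path
```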

    compareContents(path1, path2, verbose=False)

    source code 

    Compares the contents of two directories to see if they are equivalent.

    The two directories are recursively compared. First, we check whether they contain exactly the same set of files. Then, we check that every file has exactly the same contents in both directories.

    This is all relatively simple to implement through the magic of BackupFileList.generateDigestMap, which knows how to strip a path prefix off the front of each entry in the mapping it generates. This makes our comparison as simple as creating a list for each path, then generating a digest map for each path and comparing the two.

    If no exception is thrown, the two directories are considered identical.

    If the verbose flag is True, then an alternate (but slower) method is used so that any thrown exception can indicate exactly which file caused the comparison to fail. The thrown ValueError exception distinguishes between the directories containing different files, and containing the same files with differing content.

    Parameters:
    • path1 (String representing a path on disk) - First path to compare.
    • path2 (String representing a path on disk) - Second path to compare.
    • verbose (Boolean) - Indicates whether a verbose response should be given.
    Raises:
    • ValueError - If a directory doesn't exist or can't be read.
    • ValueError - If the two directories are not equivalent.
    • IOError - If there is an unusual problem reading the directories.

    Note: Symlinks are not followed for the purposes of this comparison.
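
    The digest-map comparison described above can be approximated with the standard library alone. Here digest_map is a hypothetical stand-in for BackupFileList.generateDigestMap, which the real implementation relies on:

```python
import hashlib
import os

def digest_map(root):
    # Map of relative file path -> SHA-1 digest, with the path prefix
    # stripped off each entry as described above.
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            with open(full, "rb") as fp:
                digests[os.path.relpath(full, root)] = hashlib.sha1(fp.read()).hexdigest()
    return digests

def compare_contents(path1, path2):
    # Identical directories produce identical digest maps;
    # otherwise raise ValueError, mirroring the documented contract.
    if digest_map(path1) != digest_map(path2):
        raise ValueError("Directories are not equivalent")
```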

    compareDigestMaps(digest1, digest2, verbose=False)

    source code 

    Compares two digest maps and throws an exception if they differ.

    Parameters:
    • digest1 (Digest as returned from BackupFileList.generateDigestMap()) - First digest to compare.
    • digest2 (Digest as returned from BackupFileList.generateDigestMap()) - Second digest to compare.
    • verbose (Boolean) - Indicates whether a verbose response should be given.
    Raises:
    • ValueError - If the two directories are not equivalent.

    CedarBackup2.config.LocalPeer
    Package CedarBackup2 :: Module config :: Class LocalPeer

    Class LocalPeer

    source code

    object --+
             |
            LocalPeer
    

    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

    • The peer name must be a non-empty string.
    • The collect directory must be an absolute path.
    • The ignore failure mode must be one of the values in VALID_FAILURE_MODES.
    Instance Methods
     
    __init__(self, name=None, collectDir=None, ignoreFailureMode=None)
    Constructor for the LocalPeer class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      name
    Name of the peer, typically a valid hostname.
      collectDir
    Collect directory to stage files from on peer.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, collectDir=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Constructor for the LocalPeer class.

    Parameters:
    • name - Name of the peer, typically a valid hostname.
    • collectDir - Collect directory to stage files from on peer.
    • ignoreFailureMode - Ignore failure mode for peer.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__
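
    The validation rules documented above can be illustrated with a minimal sketch. LocalPeerSketch is hypothetical: it omits the ignoreFailureMode check against VALID_FAILURE_MODES and the property-target plumbing of the real class:

```python
import os

class LocalPeerSketch(object):
    # Minimal sketch of the documented LocalPeer restrictions:
    # a non-empty peer name and an absolute collect directory.
    def __init__(self, name=None, collectDir=None):
        if name is not None and name == "":
            raise ValueError("Peer name must be a non-empty string.")
        if collectDir is not None and not os.path.isabs(collectDir):
            raise ValueError("Collect directory must be an absolute path.")
        self.name = name
        self.collectDir = collectDir
```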

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    name

    Name of the peer, typically a valid hostname.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Collect directory to stage files from on peer.

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.


    Module image


    Variables

    __package__

    CedarBackup2.config.CollectDir
    Package CedarBackup2 :: Module config :: Class CollectDir

    Class CollectDir

    source code

    object --+
             |
            CollectDir
    

    Class representing a Cedar Backup collect directory.

    The following restrictions exist on data in this class:

    • The absolute path must be an absolute path.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.
    • The ignore file must be a non-empty string.

    For the absoluteExcludePaths list, validation is accomplished through the util.AbsolutePathList list implementation that overrides common list methods and transparently does the absolute path validation for us.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None)
    Constructor for the CollectDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setArchiveMode(self, value)
    Property target used to set the archive mode.
    source code
     
    _getArchiveMode(self)
    Property target used to get the archive mode.
    source code
     
    _setIgnoreFile(self, value)
    Property target used to set the ignore file.
    source code
     
    _getIgnoreFile(self)
    Property target used to get the ignore file.
    source code
     
    _setLinkDepth(self, value)
    Property target used to set the link depth.
    source code
     
    _getLinkDepth(self)
    Property target used to get the action linkDepth.
    source code
     
    _setDereference(self, value)
    Property target used to set the dereference flag.
    source code
     
    _getDereference(self)
    Property target used to get the dereference flag.
    source code
     
    _setRecursionLevel(self, value)
    Property target used to set the recursionLevel.
    source code
     
    _getRecursionLevel(self)
    Property target used to get the action recursionLevel.
    source code
     
    _setAbsoluteExcludePaths(self, value)
    Property target used to set the absolute exclude paths list.
    source code
     
    _getAbsoluteExcludePaths(self)
    Property target used to get the absolute exclude paths list.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path of the directory to collect.
      collectMode
    Overridden collect mode for this directory.
      archiveMode
    Overridden archive mode for this directory.
      ignoreFile
    Overridden ignore file name for this directory.
      linkDepth
    Maximum depth at which soft links should be followed.
      dereference
    Whether to dereference links that are followed.
      absoluteExcludePaths
    List of absolute paths to exclude.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.
      recursionLevel
    Recursion level to use for recursive directory collection.

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None, linkDepth=None, dereference=False, recursionLevel=None)
    (Constructor)

    source code 

    Constructor for the CollectDir class.

    Parameters:
    • absolutePath - Absolute path of the directory to collect.
    • collectMode - Overridden collect mode for this directory.
    • archiveMode - Overridden archive mode for this directory.
    • ignoreFile - Overridden ignore file name for this directory.
    • linkDepth - Maximum depth at which soft links should be followed.
    • dereference - Whether to dereference links that are followed.
    • absoluteExcludePaths - List of absolute paths to exclude.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of the values in VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setLinkDepth(self, value)

    source code 

    Property target used to set the link depth. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setDereference(self, value)

    source code 

    Property target used to set the dereference flag. No validations, but we normalize the value to True or False.

    _setRecursionLevel(self, value)

    source code 

    Property target used to set the recursionLevel. The value must be an integer.

    Raises:
    • ValueError - If the value is not valid.

    _setAbsoluteExcludePaths(self, value)

    source code 

    Property target used to set the absolute exclude paths list. Either the value must be None or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    absolutePath

    Absolute path of the directory to collect.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this directory.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Overridden archive mode for this directory.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    ignoreFile

    Overridden ignore file name for this directory.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    linkDepth

    Maximum depth at which soft links should be followed.

    Get Method:
    _getLinkDepth(self) - Property target used to get the action linkDepth.
    Set Method:
    _setLinkDepth(self, value) - Property target used to set the link depth.

    dereference

    Whether to dereference links that are followed.

    Get Method:
    _getDereference(self) - Property target used to get the dereference flag.
    Set Method:
    _setDereference(self, value) - Property target used to set the dereference flag.

    absoluteExcludePaths

    List of absolute paths to exclude.

    Get Method:
    _getAbsoluteExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setAbsoluteExcludePaths(self, value) - Property target used to set the absolute exclude paths list.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    recursionLevel

    Recursion level to use for recursive directory collection.

    Get Method:
    _getRecursionLevel(self) - Property target used to get the action recursionLevel.
    Set Method:
    _setRecursionLevel(self, value) - Property target used to set the recursionLevel.

    CedarBackup2.util._Vertex
    Package CedarBackup2 :: Module util :: Class _Vertex

    Class _Vertex

    source code

    object --+
             |
            _Vertex
    

    Represents a vertex (or node) in a directed graph.

    Instance Methods
     
    __init__(self, name)
    Constructor.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, name)
    (Constructor)

    source code 

    Constructor.

    Parameters:
    • name (String value.) - Name of this graph vertex.
    Overrides: object.__init__


    Module knapsack


    Functions

    alternateFit
    bestFit
    firstFit
    worstFit

    Variables

    __package__

    CedarBackup2.util
    Package CedarBackup2 :: Module util

    Module util

    source code

    Provides general-purpose utilities.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      AbsolutePathList
    Class representing a list of absolute paths.
      ObjectTypeList
    Class representing a list containing only objects with a certain type.
      RestrictedContentList
    Class representing a list containing only objects with certain values.
      RegexMatchList
    Class representing a list containing only strings that match a regular expression.
      RegexList
    Class representing a list of valid regular expression strings.
      _Vertex
    Represents a vertex (or node) in a directed graph.
      DirectedGraph
    Represents a directed graph.
      PathResolverSingleton
    Singleton used for resolving executable paths.
      UnorderedList
    Class representing an "unordered list".
      Pipe
    Specialized pipe class for use by executeCommand.
      Diagnostics
    Class holding runtime diagnostic information.
    Functions
     
    sortDict(d)
    Returns the keys of the dictionary sorted by value.
    source code
     
    convertSize(size, fromUnit, toUnit)
    Converts a size in one unit to a size in another unit.
    source code
     
    getUidGid(user, group)
    Get the uid/gid associated with a user/group pair
    source code
     
    changeOwnership(path, user, group)
    Changes ownership of path to match the user and group.
    source code
     
    splitCommandLine(commandLine)
    Splits a command line string into a list of arguments.
    source code
     
    resolveCommand(command)
    Resolves the real path to a command through the path resolver mechanism.
    source code
     
    executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None)
    Executes a shell command, hopefully in a safe way.
    source code
     
    calculateFileAge(path)
    Calculates the age (in days) of a file.
    source code
     
    encodePath(path)
    Safely encodes a filesystem path.
    source code
     
    nullDevice()
    Attempts to portably return the null device on this system.
    source code
     
    deriveDayOfWeek(dayName)
    Converts an English day name to a numeric day of week, as returned by time.localtime.
    source code
     
    isStartOfWeek(startingDay)
    Indicates whether "today" is the backup starting day per configuration.
    source code
     
    buildNormalizedPath(path)
    Returns a "normalized" path based on a path name.
    source code
     
    removeKeys(d, keys)
    Removes all of the keys from the dictionary.
    source code
     
    displayBytes(bytes, digits=2)
    Format a byte quantity so it can be sensibly displayed.
    source code
     
    getFunctionReference(module, function)
    Gets a reference to a named function.
    source code
     
    isRunningAsRoot()
    Indicates whether the program is running as the root user.
    source code
     
    mount(devicePath, mountPoint, fsType)
    Mounts the indicated device at the indicated mount point.
    source code
     
    unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0)
    Unmounts whatever device is mounted at the indicated mount point.
    source code
     
    deviceMounted(devicePath)
    Indicates whether a specific filesystem device is currently mounted.
    source code
     
    sanitizeEnvironment()
    Sanitizes the operating system environment.
    source code
     
    dereferenceLink(path, absolute=True)
    Dereference a soft link, optionally normalizing it to an absolute path.
    source code
     
    checkUnique(prefix, values)
    Checks that all values are unique.
    source code
     
    parseCommaSeparatedString(commaString)
    Parses a list of values out of a comma-separated string.
    source code
    Variables
      ISO_SECTOR_SIZE = 2048.0
    Size of an ISO image sector, in bytes.
      BYTES_PER_SECTOR = 2048.0
    Number of bytes (B) per ISO sector.
      BYTES_PER_KBYTE = 1024.0
    Number of bytes (B) per kilobyte (kB).
      BYTES_PER_MBYTE = 1048576.0
    Number of bytes (B) per megabyte (MB).
      BYTES_PER_GBYTE = 1073741824.0
    Number of bytes (B) per gigabyte (GB).
      KBYTES_PER_MBYTE = 1024.0
    Number of kilobytes (kB) per megabyte (MB).
      MBYTES_PER_GBYTE = 1024.0
    Number of megabytes (MB) per gigabyte (GB).
      SECONDS_PER_MINUTE = 60.0
    Number of seconds per minute.
      MINUTES_PER_HOUR = 60.0
    Number of minutes per hour.
      HOURS_PER_DAY = 24.0
    Number of hours per day.
      SECONDS_PER_DAY = 86400.0
    Number of seconds per day.
      UNIT_BYTES = 0
    Constant representing the byte (B) unit for conversion.
      UNIT_KBYTES = 1
    Constant representing the kilobyte (kB) unit for conversion.
      UNIT_MBYTES = 2
    Constant representing the megabyte (MB) unit for conversion.
      UNIT_GBYTES = 4
    Constant representing the gigabyte (GB) unit for conversion.
      UNIT_SECTORS = 3
    Constant representing the ISO sector unit for conversion.
      _UID_GID_AVAILABLE = True
      logger = logging.getLogger("CedarBackup2.log.util")
      outputLogger = logging.getLogger("CedarBackup2.output")
      MTAB_FILE = '/etc/mtab'
      MOUNT_COMMAND = ['mount']
      UMOUNT_COMMAND = ['umount']
      DEFAULT_LANGUAGE = 'C'
      LANG_VAR = 'LANG'
      LOCALE_VARS = ['LC_ADDRESS', 'LC_ALL', 'LC_COLLATE', 'LC_CTYPE...
      __package__ = 'CedarBackup2'
    Function Details

    sortDict(d)

    source code 

    Returns the keys of the dictionary sorted by value.

    There are cuter ways to do this in Python 2.4, but we were originally attempting to stay compatible with Python 2.3.

    Parameters:
    • d - Dictionary to operate on
    Returns:
    List of dictionary keys sorted in order by dictionary value.

    convertSize(size, fromUnit, toUnit)

    source code 

    Converts a size in one unit to a size in another unit.

    This is just a convenience function so that the functionality can be implemented in just one place. Internally, we convert values to bytes and then to the final unit.

    The available units are:

    • UNIT_BYTES - Bytes
    • UNIT_KBYTES - Kilobytes, where 1 kB = 1024 B
    • UNIT_MBYTES - Megabytes, where 1 MB = 1024 kB
    • UNIT_GBYTES - Gigabytes, where 1 GB = 1024 MB
    • UNIT_SECTORS - Sectors, where 1 sector = 2048 B
    Parameters:
    • size (Integer or float value in units of fromUnit) - Size to convert
    • fromUnit (One of the units listed above) - Unit to convert from
    • toUnit (One of the units listed above) - Unit to convert to
    Returns:
    Number converted to new unit, as a float.
    Raises:
    • ValueError - If one of the units is invalid.
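
    Given the unit constants defined in this module, the conversion reduces to a table lookup through bytes; a minimal sketch of that logic:

```python
# Unit constants and byte multipliers as documented in this module.
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_SECTORS, UNIT_GBYTES = 0, 1, 2, 3, 4

_BYTES_PER_UNIT = {
    UNIT_BYTES: 1.0,
    UNIT_KBYTES: 1024.0,
    UNIT_MBYTES: 1048576.0,
    UNIT_GBYTES: 1073741824.0,
    UNIT_SECTORS: 2048.0,
}

def convert_size(size, from_unit, to_unit):
    # Convert to bytes first, then to the target unit, as described above.
    if from_unit not in _BYTES_PER_UNIT or to_unit not in _BYTES_PER_UNIT:
        raise ValueError("Invalid unit")
    return size * _BYTES_PER_UNIT[from_unit] / _BYTES_PER_UNIT[to_unit]
```

    For instance, converting 1 GB to megabytes yields 1024.0, and 2048 bytes is exactly one ISO sector.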

    getUidGid(user, group)

    source code 

    Get the uid/gid associated with a user/group pair

    This is a no-op if user/group functionality is not available on the platform.

    Parameters:
    • user (User name as a string) - User name
    • group (Group name as a string) - Group name
    Returns:
    Tuple (uid, gid) matching passed-in user and group.
    Raises:
    • ValueError - If the ownership user/group values are invalid

    changeOwnership(path, user, group)

    source code 

    Changes ownership of path to match the user and group.

    This is a no-op if user/group functionality is not available on the platform, or if either the passed-in user or group is None. Further, we won't even try to do it unless running as root, since it's unlikely to work.

    Parameters:
    • path - Path whose ownership to change.
    • user - User which owns file.
    • group - Group which owns file.

    splitCommandLine(commandLine)

    source code 

    Splits a command line string into a list of arguments.

    Unfortunately, there is no "standard" way to parse a command line string, and it's actually not an easy problem to solve portably (essentially, we have to emulate the shell argument-processing logic). This code only respects double quotes (") for grouping arguments, not single quotes ('). Make sure you take this into account when building your command line.

    Incidentally, I found this particular parsing method while digging around in Google Groups, and I tweaked it for my own use.

    Parameters:
    • commandLine (String, i.e. "cback --verbose stage store") - Command line string
    Returns:
    List of arguments, suitable for passing to popen2.
    Raises:
    • ValueError - If the command line is None.
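
    A rough stdlib approximation of the double-quotes-only behavior documented above (the real implementation may differ in edge cases such as embedded quotes):

```python
import re

def split_command_line(command_line):
    # Group arguments on double quotes only; single quotes are not
    # special, per the documented limitation.
    if command_line is None:
        raise ValueError("Command line is None.")
    fields = re.findall(r'"[^"]*"|\S+', command_line)
    return [f[1:-1] if f.startswith('"') and f.endswith('"') and len(f) >= 2 else f
            for f in fields]
```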

    resolveCommand(command)

    source code 

    Resolves the real path to a command through the path resolver mechanism.

    Both extensions and standard Cedar Backup functionality need a way to resolve the "real" location of various executables. Normally, they assume that these executables are on the system path, but some callers need to specify an alternate location.

    Ideally, we want to handle this configuration in a central location. The Cedar Backup path resolver mechanism (a singleton called PathResolverSingleton) provides the central location to store the mappings. This function wraps access to the singleton, and is what all functions (extensions or standard functionality) should call if they need to find a command.

    The passed-in command must actually be a list, in the standard form used by all existing Cedar Backup code (something like ["svnlook", ]). The lookup will actually be done on the first element in the list, and the returned command will always be in list form as well.

    If the passed-in command can't be resolved or no mapping exists, then the command itself will be returned unchanged. This way, we neatly fall back on default behavior if we have no sensible alternative.

    Parameters:
    • command (List form of command, i.e. ["svnlook", ].) - Command to resolve.
    Returns:
    Path to command or just command itself if no mapping exists.
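
    The lookup-with-fallback behavior described above can be sketched as a dictionary lookup, where _PATH_MAPPINGS is a hypothetical stand-in for the PathResolverSingleton mappings:

```python
# Hypothetical mapping, e.g. populated from configuration.
_PATH_MAPPINGS = {"svnlook": "/usr/local/bin/svnlook"}

def resolve_command(command):
    # Look up only the first element of the list; if no mapping
    # exists, return the command unchanged, as documented.
    resolved = _PATH_MAPPINGS.get(command[0])
    if resolved is None:
        return list(command)
    return [resolved] + list(command[1:])
```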

    executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None)

    source code 

    Executes a shell command, hopefully in a safe way.

    This function exists to replace direct calls to os.popen in the Cedar Backup code. It's not safe to call a function such as os.popen() with untrusted arguments, since that can cause problems if the string contains non-safe variables or other constructs (imagine that the argument is $WHATEVER, but $WHATEVER contains something like "; rm -fR ~/; echo" in the current environment).

    Instead, it's safer to pass a list of arguments in the style supported by popen2 or popen4. This function actually uses a specialized Pipe class implemented using either subprocess.Popen or popen2.Popen4.

    Under the normal case, this function will return a tuple of (status, None) where the status is the wait-encoded return status of the call per the popen2.Popen4 documentation. If returnOutput is passed in as True, the function will return a tuple of (status, output) where output is a list of strings, one entry per line in the output from the command. Output is always logged to the outputLogger.info() target, regardless of whether it's returned.

    By default, stdout and stderr will be intermingled in the output. However, if you pass in ignoreStderr=True, then only stdout will be included in the output.

    The doNotLog parameter exists so that callers can force the function to not log command output to the debug log. Normally, you would want to log. However, if you're using this function to write huge output files (i.e. database backups written to stdout) then you might want to avoid putting all that information into the debug log.

    The outputFile parameter exists to make it easier for a caller to push output into a file, i.e. as a substitute for redirection to a file. If this value is passed in, each time a line of output is generated, it will be written to the file using outputFile.write(). At the end, the file descriptor will be flushed using outputFile.flush(). The caller maintains responsibility for closing the file object appropriately.

    Parameters:
    • command (List of individual arguments that make up the command) - Shell command to execute
    • args (List of additional arguments to the command) - List of arguments to the command
    • returnOutput (Boolean True or False) - Indicates whether to return the output of the command
    • ignoreStderr (Boolean True or False) - Whether stderr should be discarded
    • doNotLog (Boolean True or False) - Indicates that output should not be logged.
    • outputFile (File object as returned from open() or file().) - File object that all output should be written to.
    Returns:
    Tuple of (result, output) as described above.
    Notes:
    • I know that it's a bit confusing that the command and the arguments are both lists. I could have just required the caller to pass in one big list. However, I think it makes some sense to keep the command (the constant part of what we're executing, i.e. "scp -B") separate from its arguments, even if they both end up looking kind of similar.
    • You cannot redirect output via shell constructs (i.e. >file, 2>/dev/null, etc.) using this function. The redirection string would be passed to the command just like any other argument. However, you can implement the equivalent to redirection using ignoreStderr and outputFile, as discussed above.
    • The operating system environment is partially sanitized before the command is invoked. See sanitizeEnvironment for details.
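The command/args split described in the notes can be sketched with the standard subprocess module, which the Pipe class wraps when available; this is a rough modern equivalent, not the actual Cedar Backup implementation:

```python
import subprocess

# executeCommand(["scp", "-B"], [source, target]) keeps the constant part of
# the command separate from its variable arguments before joining them.
command = ["echo", "-n"]           # constant part of the command
args = ["hello world"]             # variable arguments
result = subprocess.run(command + args, capture_output=True, text=True)
print(result.returncode)           # wait-status analogue; 0 on success
for line in result.stdout.splitlines():
    print(line)                    # one entry per line of output
```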

    calculateFileAge(path)

    source code 

    Calculates the age (in days) of a file.

    The "age" of a file is the amount of time since the file was last used, per the most recent of the file's st_atime and st_mtime values.

    Technically, we only intend this function to work with files, but it will probably work with anything on the filesystem.

    Parameters:
    • path - Path to a file on disk.
    Returns:
    Age of the file in days (possibly fractional).
    Raises:
    • OSError - If the file doesn't exist.
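The age calculation described above amounts to a few lines; this sketch (names are illustrative) uses the most recent of st_atime and st_mtime:

```python
import os
import time

def calculate_file_age(path):
    """Age of a file in (possibly fractional) days, per the most recent of
    st_atime and st_mtime; os.stat() raises OSError if the path is missing."""
    stats = os.stat(path)
    last_used = max(stats.st_atime, stats.st_mtime)
    return (time.time() - last_used) / (24.0 * 60.0 * 60.0)
```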

    encodePath(path)

    source code 

    Safely encodes a filesystem path.

    Many Python filesystem functions, such as os.listdir, behave differently if they are passed unicode arguments versus simple string arguments. For instance, os.listdir generally returns unicode path names if it is passed a unicode argument, and string pathnames if it is passed a string argument.

    However, this behavior often isn't as consistent as we might like. As an example, os.listdir "gives up" if it finds a filename that it can't properly encode given the current locale settings. This means that the returned list is a mixed set of unicode and simple string paths. This has consequences later, because other filesystem functions like os.path.join will blow up if they are given one string path and one unicode path.

    On comp.lang.python, Martin v. Löwis explained the os.listdir behavior like this:

      The operating system (POSIX) does not have the inherent notion that file
      names are character strings. Instead, in POSIX, file names are primarily
      byte strings. There are some bytes which are interpreted as characters
      (e.g. '\x2e', which is '.', or '\x2f', which is '/'), but apart from
      that, most OS layers think these are just bytes.
    
      Now, most *people* think that file names are character strings.  To
      interpret a file name as a character string, you need to know what the
      encoding is to interpret the file names (which are byte strings) as
      character strings.
    
      There is, unfortunately, no operating system API to carry the notion of a
      file system encoding. By convention, the locale settings should be used
      to establish this encoding, in particular the LC_CTYPE facet of the
      locale. This is defined in the environment variables LC_CTYPE, LC_ALL,
      and LANG (searched in this order).
    
      If LANG is not set, the "C" locale is assumed, which uses ASCII as its
      file system encoding. In this locale, '\xe2\x99\xaa\xe2\x99\xac' is not a
      valid file name (at least it cannot be interpreted as characters, and
      hence not be converted to Unicode).
    
      Now, your Python script has requested that all file names *should* be
      returned as character (ie. Unicode) strings, but Python cannot comply,
      since there is no way to find out what this byte string means, in terms
      of characters.
    
      So we have three options:
    
      1. Skip this string, only return the ones that can be converted to Unicode.
         Give the user the impression the file does not exist.
      2. Return the string as a byte string
      3. Refuse to listdir altogether, raising an exception (i.e. return nothing)
    
      Python has chosen alternative 2, allowing the application to implement 1
      or 3 on top of that if it wants to (or come up with other strategies,
      such as user feedback).
    

    As a solution, he suggests that rather than passing unicode paths into the filesystem functions, I should sensibly encode the path first. That is what this function accomplishes. Any function which takes a filesystem path as an argument should encode it first, before using it for any other purpose.

    I confess I still don't completely understand how this works. On a system with filesystem encoding "ISO-8859-1", a path u"\xe2\x99\xaa\xe2\x99\xac" is converted into the string "\xe2\x99\xaa\xe2\x99\xac". However, on a system with a "utf-8" encoding, the result is a completely different string: "\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac". A quick test where I write to the first filename and open the second proves that the two strings represent the same file on disk, which is all I really care about.

    Parameters:
    • path - Path to encode
    Returns:
    Path, as a string, encoded appropriately
    Raises:
    • ValueError - If the path cannot be encoded properly.
    Notes:
    • As a special case, if path is None, then this function will return None.
    • To provide several examples of encoding values, my Debian sarge box with an ext3 filesystem has Python filesystem encoding ISO-8859-1. User Anarcat's Debian box with an xfs filesystem has filesystem encoding ANSI_X3.4-1968. Both my iBook G4 running Mac OS X 10.4 and user Dag Rende's SuSE 9.3 box have filesystem encoding UTF-8.
    • Just because a filesystem has UTF-8 encoding doesn't mean that it will be able to handle all extended-character filenames. For instance, certain extended-character (but not UTF-8) filenames -- like the ones in the regression test tar file test/data/tree13.tar.gz -- are not valid under Mac OS X, and it's not even possible to extract them from the tarfile on that platform.
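A minimal Python 3 sketch of the encoding step described above (in Python 3, paths are text by default and this dance is largely obsolete, so this only approximates the Python 2 original):

```python
import sys

def encode_path(path):
    """Encode a text path to bytes using the filesystem encoding."""
    if path is None:
        return None                        # special case per the docs
    encoding = sys.getfilesystemencoding() or "utf-8"
    try:
        return path.encode(encoding)
    except UnicodeError:
        raise ValueError("Path could not be encoded as %s." % encoding)
```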

    nullDevice()

    source code 

    Attempts to portably return the null device on this system.

    The null device is something like /dev/null on a UNIX system. The name varies on other platforms.
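Modern Python exposes this lookup directly as a standard-library constant:

```python
import os

# The portable null-device name described above is built into the os module.
print(os.devnull)  # '/dev/null' on UNIX-like systems, 'nul' on Windows
```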

    deriveDayOfWeek(dayName)

    source code 

    Converts English day name to numeric day of week as from time.localtime.

    For instance, the day monday would be converted to the number 0.

    Parameters:
    • dayName (string, i.e. "monday", "tuesday", etc.) - Day of week to convert
    Returns:
    Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible.
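The conversion can be sketched as a simple list lookup matching time.localtime()'s numbering (names are illustrative, not the actual implementation):

```python
def derive_day_of_week(day_name):
    """Convert an English day name to a number, where Monday is 0."""
    names = ["monday", "tuesday", "wednesday", "thursday",
             "friday", "saturday", "sunday"]
    try:
        return names.index(day_name.lower())
    except ValueError:
        return -1                          # no conversion possible

print(derive_day_of_week("monday"))   # 0
print(derive_day_of_week("Sunday"))   # 6
print(derive_day_of_week("someday"))  # -1
```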

    isStartOfWeek(startingDay)

    source code 

    Indicates whether "today" is the backup starting day per configuration.

    If the current day's English name matches the indicated starting day, then today is a starting day.

    Parameters:
    • startingDay (string, i.e. "monday", "tuesday", etc.) - Configured starting day.
    Returns:
    Boolean indicating whether today is the starting day.
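A sketch of the comparison described above, assuming the same English day names (illustrative, not the actual implementation):

```python
import time

def is_start_of_week(starting_day):
    """True if today's English day name matches the configured starting day."""
    names = ["monday", "tuesday", "wednesday", "thursday",
             "friday", "saturday", "sunday"]
    today = names[time.localtime().tm_wday]   # tm_wday: Monday is 0
    return today == starting_day.lower()
```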

    buildNormalizedPath(path)

    source code 

    Returns a "normalized" path based on a path name.

    A normalized path is a representation of a path that is also a valid file name. To make a valid file name out of a complete path, we have to convert or remove some characters that are significant to the filesystem -- in particular, the path separator and any leading '.' character (which would cause the file to be hidden in a file listing).

    Note that this is a one-way transformation -- you can't safely derive the original path from the normalized path.

    To normalize a path, we begin by looking at the first character. If the first character is '/' or '\', it gets removed. If the first character is '.', it gets converted to '_'. Then, we look through the rest of the path and convert all remaining '/' or '\' characters to '-', and all remaining whitespace characters to '_'.

    As a special case, a path consisting only of a single '/' or '\' character will be converted to '-'.

    Parameters:
    • path - Path to normalize
    Returns:
    Normalized path as described above.
    Raises:
    • ValueError - If the path is None
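The rules above can be sketched directly (names are illustrative, not the actual implementation):

```python
import re

def build_normalized_path(path):
    """Turn a path into a valid file name per the rules described above."""
    if path is None:
        raise ValueError("Cannot normalize path None.")
    if path in ("/", "\\"):
        return "-"                          # special case: lone separator
    if path[:1] in ("/", "\\"):
        path = path[1:]                     # strip the leading separator
    elif path[:1] == ".":
        path = "_" + path[1:]               # avoid a hidden file name
    path = re.sub(r"[/\\]", "-", path)      # remaining separators become '-'
    path = re.sub(r"\s", "_", path)         # whitespace becomes '_'
    return path

print(build_normalized_path("/var/log/syslog"))  # var-log-syslog
print(build_normalized_path(".hidden file"))     # _hidden_file
```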

    removeKeys(d, keys)

    source code 

    Removes the indicated keys from the dictionary. The dictionary is altered in-place. Each key must exist in the dictionary.

    Parameters:
    • d - Dictionary to operate on
    • keys - List of keys to remove
    Raises:
    • KeyError - If one of the keys does not exist

    displayBytes(bytes, digits=2)

    source code 

    Format a byte quantity so it can be sensibly displayed.

    It's rather difficult to look at a number like "72372224 bytes" and get any meaningful information out of it. It would be more useful to see something like "69.02 MB". That's what this function does. Any time you want to display a byte value, i.e.:

      print "Size: %s bytes" % bytes
    

    Call this function instead:

      print "Size: %s" % displayBytes(bytes)
    

    What comes out will be sensibly formatted. The indicated number of digits will be listed after the decimal point, rounded based on whatever rules are used by Python's standard %f string format specifier. (Values less than 1 kB will be listed in bytes and will not have a decimal point, since the concept of a fractional byte is nonsensical.)

    Parameters:
    • bytes (Integer number of bytes.) - Byte quantity.
    • digits (Integer value, typically 2-5.) - Number of digits to display after the decimal point.
    Returns:
    String, formatted for sensible display.
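A sketch of the formatting logic described above, reproducing the docs' own "69.02 MB" example (illustrative, not the actual implementation):

```python
def display_bytes(num_bytes, digits=2):
    """Format a byte quantity for sensible display."""
    if num_bytes < 1024:
        return "%d bytes" % num_bytes       # no fractional bytes
    for suffix in ("kB", "MB", "GB", "TB"):
        num_bytes /= 1024.0
        if num_bytes < 1024 or suffix == "TB":
            return "%.*f %s" % (digits, num_bytes, suffix)

print(display_bytes(72372224))  # 69.02 MB
print(display_bytes(512))       # 512 bytes
```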

    getFunctionReference(module, function)

    source code 

    Gets a reference to a named function.

    This does some hokey-pokey to get back a reference to a dynamically named function. For instance, say you wanted to get a reference to the os.path.isdir function. You could use:

      myfunc = getFunctionReference("os.path", "isdir")
    

    Although we won't bomb out directly, behavior is pretty much undefined if you pass in None or "" for either module or function.

    The only validation we enforce is that whatever we get back must be callable.

    I derived this code based on the internals of the Python unittest implementation. I don't claim to completely understand how it works.

    Parameters:
    • module (Something like "os.path" or "CedarBackup2.util") - Name of module associated with function.
    • function (Something like "isdir" or "getUidGid") - Name of function
    Returns:
    Reference to function associated with name.
    Raises:
    • ImportError - If the function cannot be found.
    • ValueError - If the resulting reference is not callable.

    Copyright: Some of this code, prior to customization, was originally part of the Python 2.3 codebase. Python code is copyright (c) 2001, 2002 Python Software Foundation; All Rights Reserved.
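The same lookup can be done with importlib in modern Python; this sketch raises AttributeError rather than ImportError when the function name is missing, so it only approximates the original's contract:

```python
import importlib

def get_function_reference(module, function):
    """Return a reference to a dynamically named function."""
    obj = getattr(importlib.import_module(module), function)
    if not callable(obj):                   # enforce the only validation
        raise ValueError("%s.%s is not callable." % (module, function))
    return obj

myfunc = get_function_reference("os.path", "isdir")
print(myfunc("/"))  # True on POSIX systems
```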

    mount(devicePath, mountPoint, fsType)

    source code 

    Mounts the indicated device at the indicated mount point.

    For instance, to mount a CD, you might use device path /dev/cdrw, mount point /media/cdrw and filesystem type iso9660. You can safely use any filesystem type that is supported by mount on your platform. If the type is None, we'll attempt to let mount auto-detect it. This may or may not work on all systems.

    Parameters:
    • devicePath - Path of device to be mounted.
    • mountPoint - Path that device should be mounted at.
    • fsType - Type of the filesystem assumed to be available via the device.
    Raises:
    • IOError - If the device cannot be mounted.

    Note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line "mount" command, like UNIXes. It won't work on Windows.

    unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0)

    source code 

    Unmounts whatever device is mounted at the indicated mount point.

    Sometimes, it might not be possible to unmount the mount point immediately, if there are still files open there. Use the attempts and waitSeconds arguments to indicate how many unmount attempts to make and how many seconds to wait between attempts. If you pass in zero attempts, no attempts will be made.

    If the indicated mount point is not really a mount point per os.path.ismount(), then it will be ignored. This seems to be a safer check than looking through /etc/mtab, since ismount() is already in the Python standard library and is documented as working on all POSIX systems.

    If removeAfter is True, then the mount point will be removed using os.rmdir() after the unmount action succeeds. If for some reason the mount point is not a directory, then it will not be removed.

    Parameters:
    • mountPoint - Mount point to be unmounted.
    • removeAfter - Remove the mount point after unmounting it.
    • attempts - Number of times to attempt the unmount.
    • waitSeconds - Number of seconds to wait between repeated attempts.
    Raises:
    • IOError - If the mount point is still mounted after attempts are exhausted.

    Note: This only works on platforms that have a concept of "mounting" a filesystem through a command-line "mount" command, like UNIXes. It won't work on Windows.
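The retry behavior described above can be sketched as a loop around the command-line umount; the "umount" command name and the exact checks are assumptions for illustration:

```python
import os
import subprocess
import time

def unmount(mount_point, remove_after=False, attempts=1, wait_seconds=0):
    """Unmount a mount point, retrying a limited number of times (UNIX-only)."""
    if not os.path.ismount(mount_point):
        return                               # not a mount point: ignore it
    for attempt in range(attempts):
        subprocess.call(["umount", mount_point])
        if not os.path.ismount(mount_point):
            break
        if attempt < attempts - 1:
            time.sleep(wait_seconds)         # wait before the next attempt
    if os.path.ismount(mount_point):
        raise IOError("Unable to unmount %s." % mount_point)
    if remove_after and os.path.isdir(mount_point):
        os.rmdir(mount_point)                # only remove real directories
```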

    deviceMounted(devicePath)

    source code 

    Indicates whether a specific filesystem device is currently mounted.

    We determine whether the device is mounted by looking through the system's mtab file. This file shows every currently-mounted filesystem, ordered by device. We only do the check if the mtab file exists and is readable. Otherwise, we assume that the device is not mounted.

    Parameters:
    • devicePath - Path of device to be checked
    Returns:
    True if device is mounted, false otherwise.

    Note: This only works on platforms that have a concept of an mtab file to show mounted volumes, like UNIXes. It won't work on Windows.

    sanitizeEnvironment()

    source code 

    Sanitizes the operating system environment.

    The operating system environment is contained in os.environ. This method sanitizes the contents of that dictionary.

    Currently, all it does is reset the locale (removing $LC_*) and set the default language ($LANG) to DEFAULT_LANGUAGE. This way, we can count on consistent localization regardless of what the end-user has configured. This is important for code that needs to parse program output.

    The os.environ dictionary is modified in-place. If $LANG is already set to the proper value, it is not re-set, so we can avoid the memory leaks that are documented to occur on BSD-based systems.

    Returns:
    Copy of the sanitized environment.
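A sketch of the locale reset described above; the value of the DEFAULT_LANGUAGE constant here ("C") is an assumption for illustration:

```python
import os

def sanitize_environment():
    """Reset locale settings in os.environ and return a copy."""
    DEFAULT_LANGUAGE = "C"                     # assumed value, see the docs
    for var in list(os.environ):
        if var.startswith("LC_"):
            del os.environ[var]                # remove all $LC_* settings
    if os.environ.get("LANG") != DEFAULT_LANGUAGE:
        os.environ["LANG"] = DEFAULT_LANGUAGE  # only re-set when needed
    return dict(os.environ)
```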

    dereferenceLink(path, absolute=True)

    source code 

    Dereferences a soft link, optionally normalizing it to an absolute path.

    Parameters:
    • path - Path of link to dereference
    • absolute - Whether to normalize the result to an absolute path
    Returns:
    Dereferenced path, or original path if original is not a link.

    checkUnique(prefix, values)

    source code 

    Checks that all values are unique.

    The values list is checked for duplicate values. If there are duplicates, an exception is thrown. All duplicate values are listed in the exception.

    Parameters:
    • prefix - Prefix to use in the thrown exception
    • values - List of values to check
    Raises:
    • ValueError - If there are duplicates in the list
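A sketch of the duplicate check described above (names are illustrative, not the actual implementation):

```python
def check_unique(prefix, values):
    """Raise ValueError listing all duplicated values, if any."""
    duplicates = sorted(set(v for v in values if values.count(v) > 1))
    if duplicates:
        raise ValueError("%s duplicate values: %s" % (prefix, duplicates))
```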

    parseCommaSeparatedString(commaString)

    source code 

    Parses a list of values out of a comma-separated string.

    The items in the list are split by comma, and then have whitespace stripped. As a special case, if commaString is None, then None will be returned.

    Parameters:
    • commaString - List of values in comma-separated string format.
    Returns:
    Values from commaString split into a list, or None.
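The split-and-strip behavior can be sketched in a couple of lines; the actual implementation may differ in edge cases such as empty items:

```python
def parse_comma_separated_string(comma_string):
    """Split on commas and strip whitespace; None passes through unchanged."""
    if comma_string is None:
        return None                        # special case per the docs
    return [value.strip() for value in comma_string.split(",")]

print(parse_comma_separated_string("a, b ,c"))  # ['a', 'b', 'c']
```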

    Variables Details

    LOCALE_VARS

    Value:
    ['LC_ADDRESS',
     'LC_ALL',
     'LC_COLLATE',
     'LC_CTYPE',
     'LC_IDENTIFICATION',
     'LC_MEASUREMENT',
     'LC_MESSAGES',
     'LC_MONETARY',
    ...
    

    CedarBackup2.actions.stage
    Package CedarBackup2 :: Package actions :: Module stage

    Module stage

    source code

    Implements the standard 'stage' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeStage(configPath, options, config)
    Executes the stage backup action.
    source code
     
    _createStagingDirs(config, dailyDir, peers)
    Creates staging directories as required.
    source code
     
    _getIgnoreFailuresFlag(options, config, peer)
    Gets the ignore failures flag based on options, configuration, and peer.
    source code
     
    _getDailyDir(config)
    Gets the daily staging directory.
    source code
     
    _getLocalPeers(config)
    Return a list of LocalPeer objects based on configuration.
    source code
     
    _getRemotePeers(config)
    Return a list of RemotePeer objects based on configuration.
    source code
     
    _getRemoteUser(config, remotePeer)
    Gets the remote user associated with a remote peer.
    source code
     
    _getLocalUser(config)
    Gets the local user that should be used for the backup.
    source code
     
    _getRcpCommand(config, remotePeer)
    Gets the RCP command associated with a remote peer.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.stage")
      __package__ = 'CedarBackup2.actions'
    Function Details

    executeStage(configPath, options, config)

    source code 

    Executes the stage backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.
    Notes:
    • The daily directory is derived once and then we stick with it, just in case a backup happens to span midnight.
    • As portions of the stage action are completed, we will write various indicator files so that it's obvious which actions have been completed. Each peer gets a stage indicator in its collect directory, and then the master gets a stage indicator in its daily staging directory. The store process uses the master's stage indicator to decide whether a directory is ready to be stored. Currently, nothing uses the indicator at each peer, and it exists for reference only.

    _createStagingDirs(config, dailyDir, peers)

    source code 

    Creates staging directories as required.

    The main staging directory is the passed in daily directory, something like staging/2002/05/23. Then, individual peers get their own directories, i.e. staging/2002/05/23/host.

    Parameters:
    • config - Config object.
    • dailyDir - Daily staging directory.
    • peers - List of all configured peers.
    Returns:
    Dictionary mapping peer name to staging directory.

    _getIgnoreFailuresFlag(options, config, peer)

    source code 

    Gets the ignore failures flag based on options, configuration, and peer.

    Parameters:
    • options - Options object
    • config - Configuration object
    • peer - Peer to check
    Returns:
    Whether to ignore stage failures for this peer

    _getDailyDir(config)

    source code 

    Gets the daily staging directory.

    This is just a directory in the form staging/YYYY/MM/DD, i.e. staging/2000/10/07, except it will be an absolute path based on config.stage.targetDir.

    Parameters:
    • config - Config object
    Returns:
    Path of daily staging directory.
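The date-based path construction can be sketched in one line; `target_dir` stands in for config.stage.targetDir and the names are illustrative:

```python
import os
import time

def get_daily_dir(target_dir):
    """Absolute staging/YYYY/MM/DD path under the configured target."""
    return os.path.join(target_dir, time.strftime("%Y/%m/%d"))
```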

    _getLocalPeers(config)

    source code 

    Return a list of LocalPeer objects based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    List of LocalPeer objects.

    _getRemotePeers(config)

    source code 

    Return a list of RemotePeer objects based on configuration.

    Parameters:
    • config - Config object.
    Returns:
    List of RemotePeer objects.

    _getRemoteUser(config, remotePeer)

    source code 

    Gets the remote user associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • config - Config object.
    • remotePeer - Configuration-style remote peer object.
    Returns:
    Name of remote user associated with remote peer.

    _getLocalUser(config)

    source code 

    Gets the local user that should be used for the backup.

    Parameters:
    • config - Config object.
    Returns:
    Name of local user that should be used

    _getRcpCommand(config, remotePeer)

    source code 

    Gets the RCP command associated with a remote peer. Use peer's if possible, otherwise take from options section.

    Parameters:
    • config - Config object.
    • remotePeer - Configuration-style remote peer object.
    Returns:
    RCP command associated with remote peer.


    Module split


    Classes

    LocalConfig
    SplitConfig

    Functions

    executeAction

    Variables

    SPLIT_COMMAND
    SPLIT_INDICATOR
    __package__
    logger

    CedarBackup2.extend
    Package CedarBackup2 :: Package extend

    Source Code for Package CedarBackup2.extend

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 2 (>= 2.7) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Official Cedar Backup Extensions 
    24   
    25  This package provides official Cedar Backup extensions.  These are Cedar Backup 
    26  actions that are not part of the "standard" set of Cedar Backup actions, but 
    27  are officially supported along with Cedar Backup. 
    28   
    29  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    30  """ 
    31   
    32   
    33  ######################################################################## 
    34  # Package initialization 
    35  ######################################################################## 
    36   
    37  # Using 'from CedarBackup2.extend import *' will just import the modules listed 
    38  # in the __all__ variable. 
    39   
    40  __all__ = [ 'amazons3', 'encrypt', 'mbox', 'mysql', 'postgresql', 'split', 'subversion', 'sysinfo', ] 
    41   
    

    CedarBackup2.util.DirectedGraph
    Package CedarBackup2 :: Module util :: Class DirectedGraph

    Class DirectedGraph

    source code

    object --+
             |
            DirectedGraph
    

    Represents a directed graph.

    A graph G=(V,E) consists of a set of vertices V together with a set E of vertex pairs or edges. In a directed graph, each edge also has an associated direction (from vertex v1 to vertex v2). A DirectedGraph object provides a way to construct a directed graph and execute a depth-first search.

    This data structure was designed based on the graphing chapter in The Algorithm Design Manual, by Steven S. Skiena.

    This class is intended to be used by Cedar Backup for dependency ordering. Because of this, it's not quite general-purpose. Unlike a "general" graph, every vertex in this graph has at least one edge pointing to it, from a special "start" vertex. This is so no vertices get "lost" either because they have no dependencies or because nothing depends on them.

    Instance Methods
     
    __init__(self, name)
    Directed graph constructor.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _getName(self)
    Property target used to get the graph name.
    source code
     
    createVertex(self, name)
    Creates a named vertex.
    source code
     
    createEdge(self, start, finish)
    Adds an edge with an associated direction, from start vertex to finish vertex.
    source code
     
    topologicalSort(self)
    Implements a topological sort of the graph.
    source code
     
    _topologicalSort(self, vertex, ordering)
    Recursive depth first search function implementing topological sort.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Class Variables
      _UNDISCOVERED = 0
      _DISCOVERED = 1
      _EXPLORED = 2
    Properties
      name
    Name of the graph.

    Inherited from object: __class__

    Method Details

    __init__(self, name)
    (Constructor)

    source code 

    Directed graph constructor.

    Parameters:
    • name (String value.) - Name of this graph.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    createVertex(self, name)

    source code 

    Creates a named vertex.

    Parameters:
    • name - vertex name
    Raises:
    • ValueError - If the vertex name is None or empty.

    createEdge(self, start, finish)

    source code 

    Adds an edge with an associated direction, from start vertex to finish vertex.

    Parameters:
    • start - Name of start vertex.
    • finish - Name of finish vertex.
    Raises:
    • ValueError - If one of the named vertices is unknown.

    topologicalSort(self)

    source code 

    Implements a topological sort of the graph.

    This method also enforces that the graph is a directed acyclic graph, which is a requirement of a topological sort.

    A directed acyclic graph (or "DAG") is a directed graph with no directed cycles. A topological sort of a DAG is an ordering on the vertices such that all edges go from left to right. Only an acyclic graph can have a topological sort, but any DAG has at least one topological sort.

    Since a topological sort only makes sense for an acyclic graph, this method throws an exception if a cycle is found.

    If the graph contains any cycles, it is not possible to determine a consistent ordering for the vertices, so no topological sort exists.

    Returns:
    Ordering on the vertices so that all edges go from left to right.
    Raises:
    • ValueError - If a cycle is found in the graph.

    Note: If a particular vertex has no edges, then its position in the final list depends on the order in which the vertices were created in the graph. If you're using this method to determine a dependency order, this makes sense: a vertex with no dependencies can go anywhere (and will).
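A minimal sketch of a depth-first topological sort with cycle detection, using the same three vertex states as the class variables above; the adjacency-dict interface and names are illustrative, not the class's actual internals:

```python
UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topological_sort(edges):
    """Order vertices so all edges go left to right; edges maps each
    vertex to the list of vertices it points at."""
    state = {vertex: UNDISCOVERED for vertex in edges}
    ordering = []

    def visit(vertex):
        state[vertex] = DISCOVERED
        for successor in edges[vertex]:
            if state[successor] == DISCOVERED:
                raise ValueError("Graph contains a cycle.")
            if state[successor] == UNDISCOVERED:
                visit(successor)
        state[vertex] = EXPLORED
        ordering.insert(0, vertex)     # prepend so edges go left to right

    for vertex in edges:
        if state[vertex] == UNDISCOVERED:
            visit(vertex)
    return ordering

print(topological_sort({"collect": ["stage"], "stage": ["store"], "store": []}))
```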

    _topologicalSort(self, vertex, ordering)

    source code 

    Recursive depth first search function implementing topological sort.

    Parameters:
    • vertex - Vertex to search
    • ordering - List of vertices in proper order

    Property Details

    name

    Name of the graph.

    Get Method:
    _getName(self) - Property target used to get the graph name.


    Table of Contents


    Everything

    Modules

    CedarBackup2
    CedarBackup2.action
    CedarBackup2.actions
    CedarBackup2.actions.collect
    CedarBackup2.actions.constants
    CedarBackup2.actions.initialize
    CedarBackup2.actions.purge
    CedarBackup2.actions.rebuild
    CedarBackup2.actions.stage
    CedarBackup2.actions.store
    CedarBackup2.actions.util
    CedarBackup2.actions.validate
    CedarBackup2.cli
    CedarBackup2.config
    CedarBackup2.customize
    CedarBackup2.extend
    CedarBackup2.extend.amazons3
    CedarBackup2.extend.capacity
    CedarBackup2.extend.encrypt
    CedarBackup2.extend.mbox
    CedarBackup2.extend.mysql
    CedarBackup2.extend.postgresql
    CedarBackup2.extend.split
    CedarBackup2.extend.subversion
    CedarBackup2.extend.sysinfo
    CedarBackup2.filesystem
    CedarBackup2.image
    CedarBackup2.knapsack
    CedarBackup2.peer
    CedarBackup2.release
    CedarBackup2.testutil
    CedarBackup2.tools
    CedarBackup2.tools.amazons3
    CedarBackup2.tools.span
    CedarBackup2.util
    CedarBackup2.writer
    CedarBackup2.writers
    CedarBackup2.writers.cdwriter
    CedarBackup2.writers.dvdwriter
    CedarBackup2.writers.util
    CedarBackup2.xmlutil

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.encrypt-module.html: CedarBackup2.extend.encrypt

    Module encrypt


    Provides an extension to encrypt staging directories.

    When this extension is executed, all backed-up files in the configured Cedar Backup staging directory will be encrypted using gpg. Any directory which has already been encrypted (as indicated by the cback.encrypt file) will be ignored.

    This extension requires a new configuration section <encrypt> and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file.
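As a sketch, an <encrypt> section combining the two documented fields (encrypt_mode and encrypt_target, described under addConfig below) might look like this; the recipient value is a placeholder, not a real key:

```xml
<cb_config>
   <!-- ...options, staging, and other standard sections... -->
   <encrypt>
      <encrypt_mode>gpg</encrypt_mode>
      <encrypt_target>backup@example.com</encrypt_target>
   </encrypt>
</cb_config>
```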


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes

    EncryptConfig
        Class representing encrypt configuration.
    LocalConfig
        Class representing this extension's configuration document.

Functions

    executeAction(configPath, options, config)
        Executes the encrypt backup action.
    _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup)
        Encrypts the contents of a daily staging directory.
    _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False)
        Encrypts the source file using the indicated mode.
    _encryptFileWithGpg(sourcePath, recipient)
        Encrypts the indicated source file using GPG.
    _confirmGpgRecipient(recipient)
        Confirms that a recipient's public key is known to GPG.

Variables

    logger = logging.getLogger("CedarBackup2.log.extend.encrypt")
    GPG_COMMAND = ['gpg']
    VALID_ENCRYPT_MODES = ['gpg']
    ENCRYPT_INDICATOR = 'cback.encrypt'
    __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)


    Executes the encrypt backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup)


    Encrypts the contents of a daily staging directory.

    Indicator files are ignored. All other files are encrypted. The only valid encrypt mode is "gpg".

    Parameters:
    • dailyDir - Daily directory to encrypt
    • encryptMode - Encryption mode (only "gpg" is allowed)
    • encryptTarget - Encryption target (GPG recipient for "gpg" mode)
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    Raises:
    • ValueError - If the encrypt mode is not supported.
    • ValueError - If the daily staging directory does not exist.

    _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False)


    Encrypts the source file using the indicated mode.

    The encrypted file will be owned by the indicated backup user and group. If removeSource is True, then the source file will be removed after it is successfully encrypted.

    Currently, only the "gpg" encrypt mode is supported.

    Parameters:
    • sourcePath - Absolute path of the source file to encrypt
    • encryptMode - Encryption mode (only "gpg" is allowed)
    • encryptTarget - Encryption target (GPG recipient)
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    • removeSource - Indicates whether to remove the source file
    Returns:
    Path to the newly-created encrypted file.
    Raises:
    • ValueError - If an invalid encrypt mode is passed in.
    • IOError - If there is a problem accessing, encrypting or removing the source file.

    _encryptFileWithGpg(sourcePath, recipient)


    Encrypts the indicated source file using GPG.

    The encrypted file will be in GPG's binary output format and will have the same name as the source file plus a ".gpg" extension. The source file will not be modified or removed by this function call.

    Parameters:
    • sourcePath - Absolute path of file to be encrypted.
    • recipient - Recipient name to be passed to GPG's "-r" option
    Returns:
    Path to the newly-created encrypted file.
    Raises:
    • IOError - If there is a problem encrypting the file.
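Conceptually the GPG invocation looks something like the following sketch, which only builds the command line. The real function also runs the command and checks its status; the flags below are standard gpg options, but the helper name and exact argument order are illustrative assumptions.

```python
def build_gpg_encrypt_command(source_path, recipient):
    """Build a gpg command that encrypts source_path for recipient.

    The output path uses the same ".gpg" extension described above.
    Returns the command (as an argument list) and the output path.
    """
    encrypted_path = source_path + ".gpg"     # matches the documented naming
    command = ["gpg", "--batch", "--yes",
               "-e",                          # encrypt
               "-r", recipient,               # recipient of the public key
               "-o", encrypted_path,          # explicit output file
               source_path]
    return command, encrypted_path
```

A caller would pass the command list to something like subprocess and raise IOError on a nonzero exit status.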

    _confirmGpgRecipient(recipient)


    Confirms that a recipient's public key is known to GPG. Throws an exception if there is a problem, or returns normally otherwise.

    Parameters:
    • recipient - Recipient name
    Raises:
    • IOError - If the recipient's public key is not known to GPG.

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.collect-module.html: collect

    Module collect


    Functions

    executeCollect

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.PurgeConfig-class.html: CedarBackup2.config.PurgeConfig

    Class PurgeConfig


    object --+
             |
            PurgeConfig
    

    Class representing a Cedar Backup purge configuration.

    The following restrictions exist on data in this class:

    • The purge directory list must be a list of PurgeDir objects.

    For the purgeDirs list, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element is a PurgeDir.
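The ObjectTypeList idea can be illustrated with a minimal stand-in that checks element types on append. The real util.ObjectTypeList also overrides insert, extend, and the other list methods; the names TypedList and the PurgeDir stub here are illustrative only.

```python
class TypedList(list):
    """List that only accepts elements of a fixed type (simplified sketch)."""

    def __init__(self, objectType, objectName):
        super(TypedList, self).__init__()
        self.objectType = objectType
        self.objectName = objectName

    def append(self, item):
        # Reject anything that is not an instance of the expected type.
        if not isinstance(item, self.objectType):
            raise ValueError("Item must be a %s." % self.objectName)
        super(TypedList, self).append(item)


class PurgeDir(object):
    """Stand-in for config.PurgeDir, for illustration only."""
    def __init__(self, absolutePath=None):
        self.absolutePath = absolutePath


dirs = TypedList(PurgeDir, "PurgeDir")
dirs.append(PurgeDir("/var/log"))   # accepted
```

Appending anything other than a PurgeDir raises ValueError, which is how assignment to purgeDirs stays transparently type-safe.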


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods

    __init__(self, purgeDirs=None)
        Constructor for the PurgeConfig class.
    __repr__(self)
        Official string representation for class instance.
    __str__(self)
        Informal string representation for class instance.
    __cmp__(self, other)
        Definition of equals operator for this class.
    _setPurgeDirs(self, value)
        Property target used to set the purge dirs list.
    _getPurgeDirs(self)
        Property target used to get the purge dirs list.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      purgeDirs
    List of directories to purge.

    Inherited from object: __class__

Method Details

    __init__(self, purgeDirs=None)
    (Constructor)


Constructor for the PurgeConfig class.

    Parameters:
    • purgeDirs - List of purge directories.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setPurgeDirs(self, value)


    Property target used to set the purge dirs list. Either the value must be None or each element must be a PurgeDir.

    Raises:
    • ValueError - If the value is not a PurgeDir

Property Details

    purgeDirs

    List of directories to purge.

    Get Method:
    _getPurgeDirs(self) - Property target used to get the purge dirs list.
    Set Method:
    _setPurgeDirs(self, value) - Property target used to set the purge dirs list.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.encrypt.LocalConfig-class.html: CedarBackup2.extend.encrypt.LocalConfig

    Class LocalConfig


    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit encrypt-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods

    __init__(self, xmlData=None, xmlPath=None, validate=True)
        Initializes a configuration object.
    __repr__(self)
        Official string representation for class instance.
    __str__(self)
        Informal string representation for class instance.
    __cmp__(self, other)
        Definition of equals operator for this class.
    validate(self)
        Validates configuration represented by the object.
    addConfig(self, xmlDom, parentNode)
        Adds an <encrypt> configuration section as the next child of a parent.
    _setEncrypt(self, value)
        Property target used to set the encrypt configuration value.
    _getEncrypt(self)
        Property target used to get the encrypt configuration value.
    _parseXmlData(self, xmlData)
        Internal method to parse an XML string into the object.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods

    _parseEncrypt(parent)
        Parses an encrypt configuration section.
Properties

    encrypt
        Encrypt configuration in terms of an EncryptConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)


    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)


    Validates configuration represented by the object.

    Encrypt configuration must be filled in. Within that, both the encrypt mode and encrypt target must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)


    Adds an <encrypt> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      encryptMode    //cb_config/encrypt/encrypt_mode
      encryptTarget  //cb_config/encrypt/encrypt_target
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
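A hedged sketch of what emitting those two fields looks like, using only the standard library DOM. The real addConfig goes through Cedar Backup's own XML utilities, so the helper name and structure here are illustrative assumptions; only the field paths match the documentation above.

```python
from xml.dom.minidom import getDOMImplementation

def add_encrypt_section(xmlDom, parentNode, encryptMode, encryptTarget):
    """Append an <encrypt> section with mode and target children (sketch)."""
    section = xmlDom.createElement("encrypt")
    parentNode.appendChild(section)
    for tag, value in [("encrypt_mode", encryptMode),
                       ("encrypt_target", encryptTarget)]:
        node = xmlDom.createElement(tag)
        node.appendChild(xmlDom.createTextNode(value))
        section.appendChild(node)
    return section

# Build //cb_config/encrypt/encrypt_mode and .../encrypt_target.
impl = getDOMImplementation()
doc = impl.createDocument(None, "cb_config", None)
add_encrypt_section(doc, doc.documentElement, "gpg", "backup@example.com")
```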

    _setEncrypt(self, value)


Property target used to set the encrypt configuration value. If not None, the value must be an EncryptConfig object.

Raises:
• ValueError - If the value is not an EncryptConfig

    _parseXmlData(self, xmlData)


    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the encrypt configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseEncrypt(parent)
    Static Method


    Parses an encrypt configuration section.

    We read the following individual fields:

      encryptMode    //cb_config/encrypt/encrypt_mode
      encryptTarget  //cb_config/encrypt/encrypt_target
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    EncryptConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

Property Details

    encrypt

Encrypt configuration in terms of an EncryptConfig object.

    Get Method:
    _getEncrypt(self) - Property target used to get the encrypt configuration value.
    Set Method:
    _setEncrypt(self, value) - Property target used to set the encrypt configuration value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.encrypt.EncryptConfig-class.html: CedarBackup2.extend.encrypt.EncryptConfig

    Class EncryptConfig


    object --+
             |
            EncryptConfig
    

    Class representing encrypt configuration.

    Encrypt configuration is used for encrypting staging directories.

    The following restrictions exist on data in this class:

    • The encrypt mode must be one of the values in VALID_ENCRYPT_MODES
    • The encrypt target value must be a non-empty string
Instance Methods

    __init__(self, encryptMode=None, encryptTarget=None)
        Constructor for the EncryptConfig class.
    __repr__(self)
        Official string representation for class instance.
    __str__(self)
        Informal string representation for class instance.
    __cmp__(self, other)
        Definition of equals operator for this class.
    _setEncryptMode(self, value)
        Property target used to set the encrypt mode.
    _getEncryptMode(self)
        Property target used to get the encrypt mode.
    _setEncryptTarget(self, value)
        Property target used to set the encrypt target.
    _getEncryptTarget(self)
        Property target used to get the encrypt target.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

    encryptMode
        Encrypt mode.
    encryptTarget
        Encrypt target (i.e. GPG recipient).

    Inherited from object: __class__

Method Details

    __init__(self, encryptMode=None, encryptTarget=None)
    (Constructor)


    Constructor for the EncryptConfig class.

    Parameters:
    • encryptMode - Encryption mode
    • encryptTarget - Encryption target (for instance, GPG recipient)
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setEncryptMode(self, value)


    Property target used to set the encrypt mode. If not None, the mode must be one of the values in VALID_ENCRYPT_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    encryptMode

    Encrypt mode.

    Get Method:
    _getEncryptMode(self) - Property target used to get the encrypt mode.
    Set Method:
    _setEncryptMode(self, value) - Property target used to set the encrypt mode.

    encryptTarget

    Encrypt target (i.e. GPG recipient).

    Get Method:
    _getEncryptTarget(self) - Property target used to get the encrypt target.
    Set Method:
    _setEncryptTarget(self, value) - Property target used to set the encrypt target.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.CollectFile-class.html: CedarBackup2.config.CollectFile

    Class CollectFile


    object --+
             |
            CollectFile
    

    Class representing a Cedar Backup collect file.

The following restrictions exist on data in this class:

• The absolute path must be an absolute path
• The collect mode must be one of the values in VALID_COLLECT_MODES
• The archive mode must be one of the values in VALID_ARCHIVE_MODES

Instance Methods

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None)
        Constructor for the CollectFile class.
    __repr__(self)
        Official string representation for class instance.
    __str__(self)
        Informal string representation for class instance.
    __cmp__(self, other)
        Definition of equals operator for this class.
    _setAbsolutePath(self, value)
        Property target used to set the absolute path.
    _getAbsolutePath(self)
        Property target used to get the absolute path.
    _setCollectMode(self, value)
        Property target used to set the collect mode.
    _getCollectMode(self)
        Property target used to get the collect mode.
    _setArchiveMode(self, value)
        Property target used to set the archive mode.
    _getArchiveMode(self)
        Property target used to get the archive mode.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path of the file to collect.
      collectMode
    Overridden collect mode for this file.
      archiveMode
    Overridden archive mode for this file.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, collectMode=None, archiveMode=None)
    (Constructor)


    Constructor for the CollectFile class.

    Parameters:
    • absolutePath - Absolute path of the file to collect.
    • collectMode - Overridden collect mode for this file.
    • archiveMode - Overridden archive mode for this file.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)


    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)


    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)


    Property target used to set the archive mode. If not None, the mode must be one of the values in VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path of the file to collect.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this file.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Overridden archive mode for this file.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

CedarBackup2-2.26.5/doc/interface/identifier-index.html: Identifier Index
     

    Identifier Index

CedarBackup2-2.26.5/doc/interface/CedarBackup2.customize-pysrc.html: CedarBackup2.customize

    Source Code for Module CedarBackup2.customize

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Copyright (c) 2010 Kenneth J. Pronovici.
    # All rights reserved.
    #
    # This program is free software; you can redistribute it and/or
    # modify it under the terms of the GNU General Public License,
    # Version 2, as published by the Free Software Foundation.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    #
    # Copies of the GNU General Public License are available from
    # the Free Software Foundation website, http://www.gnu.org/.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python 2 (>= 2.7)
    # Project  : Cedar Backup, release 2
    # Purpose  : Implements customized behavior.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Implements customized behavior.

    Some behaviors need to vary when packaged for certain platforms.  For
    instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian
    ships compatible utilities called wodim and genisoimage.  I want there to
    be one single place where Cedar Backup is patched for Debian, rather than
    having to maintain a variety of patches in different places.

    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """

    ########################################################################
    # Imported modules
    ########################################################################

    # System modules
    import logging


    ########################################################################
    # Module-wide constants and variables
    ########################################################################

    logger = logging.getLogger("CedarBackup2.log.customize")

    PLATFORM = "standard"
    #PLATFORM = "debian"

    DEBIAN_CDRECORD = "/usr/bin/wodim"
    DEBIAN_MKISOFS = "/usr/bin/genisoimage"


    #######################################################################
    # Public functions
    #######################################################################

    ################################
    # customizeOverrides() function
    ################################

    def customizeOverrides(config, platform=PLATFORM):
       """
       Modify command overrides based on the configured platform.

       On some platforms, we want to add command overrides to configuration.
       Each override will only be added if the configuration does not already
       contain an override with the same name.  That way, the user still has
       a way to choose their own version of the command if they want.

       @param config: Configuration to modify
       @param platform: Platform that is in use
       """
       if platform == "debian":
          logger.info("Overriding cdrecord for Debian platform: %s", DEBIAN_CDRECORD)
          config.options.addOverride("cdrecord", DEBIAN_CDRECORD)
          logger.info("Overriding mkisofs for Debian platform: %s", DEBIAN_MKISOFS)
          config.options.addOverride("mkisofs", DEBIAN_MKISOFS)

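A usage sketch for the function above, with a condensed stand-in for the function and a stub configuration object. The stubs mimic only the behavior customizeOverrides relies on; the real Config and options classes live in CedarBackup2.config, and placing the duplicate-skipping check inside addOverride is an assumption based on the docstring.

```python
DEBIAN_CDRECORD = "/usr/bin/wodim"
DEBIAN_MKISOFS = "/usr/bin/genisoimage"

def customizeOverrides(config, platform="standard"):
    # Same shape as the module code above, condensed for illustration.
    if platform == "debian":
        config.options.addOverride("cdrecord", DEBIAN_CDRECORD)
        config.options.addOverride("mkisofs", DEBIAN_MKISOFS)

class StubOptions(object):
    """Stub that records overrides, keeping any override the user already set."""
    def __init__(self):
        self.overrides = {}
    def addOverride(self, command, absolutePath):
        if command not in self.overrides:   # user-provided overrides win
            self.overrides[command] = absolutePath

class StubConfig(object):
    def __init__(self):
        self.options = StubOptions()

config = StubConfig()
customizeOverrides(config, platform="debian")
# config.options.overrides now maps cdrecord and mkisofs to the Debian tools.
```

A user-supplied override survives: if cdrecord was already overridden in configuration, the Debian default is not applied on top of it.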
CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.writers-module.html: writers

    Module writers


    Variables


CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.ReferenceConfig-class.html: CedarBackup2.config.ReferenceConfig

    Class ReferenceConfig


    object --+
             |
            ReferenceConfig
    

    Class representing a Cedar Backup reference configuration.

    The reference information is just used for saving off metadata about configuration and exists mostly for backwards-compatibility with Cedar Backup 1.x.

Instance Methods

    __init__(self, author=None, revision=None, description=None, generator=None)
        Constructor for the ReferenceConfig class.
    __repr__(self)
        Official string representation for class instance.
    __str__(self)
        Informal string representation for class instance.
    __cmp__(self, other)
        Definition of equals operator for this class.
    _setAuthor(self, value)
        Property target used to set the author value.
    _getAuthor(self)
        Property target used to get the author value.
    _setRevision(self, value)
        Property target used to set the revision value.
    _getRevision(self)
        Property target used to get the revision value.
    _setDescription(self, value)
        Property target used to set the description value.
    _getDescription(self)
        Property target used to get the description value.
    _setGenerator(self, value)
        Property target used to set the generator value.
    _getGenerator(self)
        Property target used to get the generator value.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      author
    Author of the configuration file.
      revision
    Revision of the configuration file.
      description
    Description of the configuration file.
      generator
    Tool that generated the configuration file.

    Inherited from object: __class__

Method Details

    __init__(self, author=None, revision=None, description=None, generator=None)
    (Constructor)


    Constructor for the ReferenceConfig class.

    Parameters:
    • author - Author of the configuration file.
    • revision - Revision of the configuration file.
    • description - Description of the configuration file.
    • generator - Tool that generated the configuration file.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
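    The -1/0/1 return value follows Python 2's __cmp__ protocol, through which both equality and ordering flow. The following is a self-contained sketch of the field-by-field comparison pattern this class documents; the helper name and field names are illustrative, not taken from the real ReferenceConfig implementation:

    ```python
    def cmp_values(a, b):
        """Python 2 cmp() convention: -1, 0, or 1; None sorts before other values."""
        if a == b:
            return 0
        if a is None:
            return -1
        if b is None:
            return 1
        return -1 if a < b else 1

    class CmpSketch(object):
        """Illustrative stand-in for the field-by-field __cmp__ pattern."""

        def __init__(self, author=None, revision=None):
            self.author = author
            self.revision = revision

        def __cmp__(self, other):
            if other is None:
                return 1  # any instance sorts after None, as documented above
            for field in ("author", "revision"):
                result = cmp_values(getattr(self, field), getattr(other, field))
                if result != 0:
                    return result
            return 0
    ```

    Comparing field by field and returning on the first difference is what makes the single __cmp__ method serve <, ==, and > at once under Python 2.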

    _setAuthor(self, value)

    source code 

    Property target used to set the author value. No validations.

    _setRevision(self, value)

    source code 

    Property target used to set the revision value. No validations.

    _setDescription(self, value)

    source code 

    Property target used to set the description value. No validations.

    _setGenerator(self, value)

    source code 

    Property target used to set the generator value. No validations.


    Property Details

    author

    Author of the configuration file.

    Get Method:
    _getAuthor(self) - Property target used to get the author value.
    Set Method:
    _setAuthor(self, value) - Property target used to set the author value.

    revision

    Revision of the configuration file.

    Get Method:
    _getRevision(self) - Property target used to get the revision value.
    Set Method:
    _setRevision(self, value) - Property target used to set the revision value.

    description

    Description of the configuration file.

    Get Method:
    _getDescription(self) - Property target used to get the description value.
    Set Method:
    _setDescription(self, value) - Property target used to set the description value.

    generator

    Tool that generated the configuration file.

    Get Method:
    _getGenerator(self) - Property target used to get the generator value.
    Set Method:
    _setGenerator(self, value) - Property target used to set the generator value.
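    Each property above is a thin wrapper pairing a private field with _get*/_set* targets. A minimal sketch of that pattern for a single field (the class name is illustrative; this is not the actual implementation):

    ```python
    class ReferenceLikeConfig(object):
        """Illustrative sketch of the property get/set-target pattern."""

        def __init__(self, author=None):
            self._author = None
            self.author = author  # assign through the property, so the setter runs

        def _setAuthor(self, value):
            # The real setter documents "No validations": the value is stored as-is.
            self._author = value

        def _getAuthor(self):
            return self._author

        author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.")
    ```

    The payoff of routing even unvalidated fields through a setter is uniformity: classes that do need checks (compare _setMaxPercentage and _setMinBytes in CapacityConfig below) add them in the _set* target without changing the public interface.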

    CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.sysinfo-module.html

    Module sysinfo


    Functions

    executeAction

    Variables

    DPKG_COMMAND
    DPKG_PATH
    FDISK_COMMAND
    FDISK_PATH
    LS_COMMAND
    __package__
    logger

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers-pysrc.html
    Package CedarBackup2 :: Package writers

    Source Code for Package CedarBackup2.writers

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 2 (>= 2.7) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Cedar Backup writers. 
    24   
     25  This package consolidates all of the modules that implement "image writer" 
    26  functionality, including utilities and specific writer implementations. 
    27   
    28  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    29  """ 
    30   
    31   
    32  ######################################################################## 
    33  # Package initialization 
    34  ######################################################################## 
    35   
    36  # Using 'from CedarBackup2.writers import *' will just import the modules listed 
    37  # in the __all__ variable. 
    38   
    39  __all__ = [ 'util', 'cdwriter', 'dvdwriter', ] 
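    As the comment in the listing notes, a star-import only pulls in the names listed in __all__. The throwaway module below demonstrates that behavior; the module and names are hypothetical examples, not the real CedarBackup2 writers modules:

    ```python
    import os
    import sys
    import tempfile
    import textwrap

    # Write a hypothetical module with three names but only two exported via __all__.
    pkg_dir = tempfile.mkdtemp()
    with open(os.path.join(pkg_dir, "demo_writers.py"), "w") as f:
        f.write(textwrap.dedent("""\
            __all__ = ['util', 'cdwriter']
            util = 'util module'
            cdwriter = 'cdwriter module'
            dvdwriter = 'dvdwriter module'  # omitted from __all__, hidden from *
        """))

    sys.path.insert(0, pkg_dir)
    namespace = {}
    exec("from demo_writers import *", namespace)

    assert "util" in namespace and "cdwriter" in namespace
    assert "dvdwriter" not in namespace  # the star-import honors __all__
    ```

    A plain `import demo_writers` would still expose `demo_writers.dvdwriter`; __all__ only filters the `import *` form.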
    40   
    

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.capacity.CapacityConfig-class.html
    Package CedarBackup2 :: Package extend :: Module capacity :: Class CapacityConfig

    Class CapacityConfig

    source code

    object --+
             |
            CapacityConfig
    

    Class representing capacity configuration.

    The following restrictions exist on data in this class:

    • The maximum percentage utilized must be a PercentageQuantity
    • The minimum bytes remaining must be a ByteQuantity
    Instance Methods
     
    __init__(self, maxPercentage=None, minBytes=None)
    Constructor for the CapacityConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setMaxPercentage(self, value)
    Property target used to set the maxPercentage value.
    source code
     
    _getMaxPercentage(self)
    Property target used to get the maxPercentage value
    source code
     
    _setMinBytes(self, value)
    Property target used to set the bytes remaining value.
    source code
     
    _getMinBytes(self)
    Property target used to get the bytes remaining value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      maxPercentage
    Maximum percentage of the media that may be utilized.
      minBytes
    Minimum number of free bytes that must be available.

    Inherited from object: __class__

    Method Details

    __init__(self, maxPercentage=None, minBytes=None)
    (Constructor)

    source code 

    Constructor for the CapacityConfig class.

    Parameters:
    • maxPercentage - Maximum percentage of the media that may be utilized
    • minBytes - Minimum number of free bytes that must be available
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setMaxPercentage(self, value)

    source code 

    Property target used to set the maxPercentage value. If not None, the value must be a PercentageQuantity object.

    Raises:
    • ValueError - If the value is not a PercentageQuantity

    _setMinBytes(self, value)

    source code 

    Property target used to set the bytes remaining value. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

    Property Details

    maxPercentage

    Maximum percentage of the media that may be utilized.

    Get Method:
    _getMaxPercentage(self) - Property target used to get the maxPercentage value
    Set Method:
    _setMaxPercentage(self, value) - Property target used to set the maxPercentage value.

    minBytes

    Minimum number of free bytes that must be available.

    Get Method:
    _getMinBytes(self) - Property target used to get the bytes remaining value.
    Set Method:
    _setMinBytes(self, value) - Property target used to set the bytes remaining value.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.initialize-module.html
    Package CedarBackup2 :: Package actions :: Module initialize

    Module initialize

    source code

    Implements the standard 'initialize' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeInitialize(configPath, options, config)
    Executes the initialize action.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.initialize")
      __package__ = 'CedarBackup2.actions'
    Function Details

    executeInitialize(configPath, options, config)

    source code 

    Executes the initialize action.

    The initialize action initializes the media currently in the writer device so that Cedar Backup can recognize it later. This is an optional step; it's only required if checkMedia is set on the store configuration.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
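    Every action function in this package shares the (configPath, options, config) signature, which lets a driver dispatch actions uniformly by name. The registry below is an illustrative sketch of that idea, not the real cback cli implementation:

    ```python
    def _fake_initialize(configPath, options, config):
        # Stand-in for executeInitialize(); the real action works on the writer device.
        return "initialized %s" % configPath

    # Hypothetical registry: action name -> callable with the shared signature.
    ACTIONS = {"initialize": _fake_initialize}

    def run_action(name, configPath, options=None, config=None):
        """Look up an action by name and invoke it with the common argument list."""
        try:
            action = ACTIONS[name]
        except KeyError:
            raise ValueError("Unknown action: %s" % name)
        return action(configPath, options, config)
    ```

    Because extensions such as mysql expose an executeAction() with the same three parameters, they can be slotted into such a registry without special-casing.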

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mysql-pysrc.html
    Package CedarBackup2 :: Package extend :: Module mysql

    Source Code for Module CedarBackup2.extend.mysql

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2005,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : Provides an extension to back up MySQL databases. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to back up MySQL databases. 
     40   
     41  This is a Cedar Backup extension used to back up MySQL databases via the Cedar 
     42  Backup command line.  It requires a new configuration section <mysql> and is 
     43  intended to be run either immediately before or immediately after the standard 
     44  collect action.  Aside from its own configuration, it requires the options and 
     45  collect configuration sections in the standard Cedar Backup configuration file. 
     46   
     47  The backup is done via the C{mysqldump} command included with the MySQL 
     48  product.  Output can be compressed using C{gzip} or C{bzip2}.  Administrators 
     49  can configure the extension either to back up all databases or to back up only 
     50  specific databases.  Note that this code always produces a full backup.  There 
     51  is currently no facility for making incremental backups.  If/when someone has a 
     52  need for this and can describe how to do it, I'll update this extension or 
     53  provide another. 
     54   
     55  The extension assumes that all configured databases can be backed up by a 
     56  single user.  Often, the "root" database user will be used.  An alternative is 
     57  to create a separate MySQL "backup" user and grant that user rights to read 
     58  (but not write) various databases as needed.  This second option is probably 
     59  the best choice. 
     60   
     61  The extension accepts a username and password in configuration.  However, you 
     62  probably do not want to provide those values in Cedar Backup configuration. 
     63  This is because Cedar Backup will provide these values to C{mysqldump} via the 
     64  command-line C{--user} and C{--password} switches, which will be visible to 
     65  other users in the process listing. 
     66   
     67  Instead, you should configure the username and password in one of MySQL's 
     68  configuration files.  Typically, that would be done by putting a stanza like 
     69  this in C{/root/.my.cnf}:: 
     70   
     71     [mysqldump] 
     72     user     = root 
     73     password = <secret> 
     74   
     75  Regardless of whether you are using C{~/.my.cnf} or C{/etc/cback.conf} to store 
     76  database login and password information, you should be careful about who is 
     77  allowed to view that information.  Typically, this means locking down 
     78  permissions so that only the file owner can read the file contents (i.e. use 
     79  mode C{0600}). 
     80   
     81  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     82  """ 
     83   
     84  ######################################################################## 
     85  # Imported modules 
     86  ######################################################################## 
     87   
     88  # System modules 
     89  import os 
     90  import logging 
     91  from gzip import GzipFile 
     92  from bz2 import BZ2File 
     93   
     94  # Cedar Backup modules 
     95  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode 
     96  from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean 
     97  from CedarBackup2.config import VALID_COMPRESS_MODES 
     98  from CedarBackup2.util import resolveCommand, executeCommand 
     99  from CedarBackup2.util import ObjectTypeList, changeOwnership 
    100   
    101   
    102  ######################################################################## 
    103  # Module-wide constants and variables 
    104  ######################################################################## 
    105   
    106  logger = logging.getLogger("CedarBackup2.log.extend.mysql") 
    107  MYSQLDUMP_COMMAND = [ "mysqldump", ] 
    
    108 109 110 ######################################################################## 111 # MysqlConfig class definition 112 ######################################################################## 113 114 -class MysqlConfig(object):
    115 116 """ 117 Class representing MySQL configuration. 118 119 The MySQL configuration information is used for backing up MySQL databases. 120 121 The following restrictions exist on data in this class: 122 123 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 124 - The 'all' flag must be 'Y' if no databases are defined. 125 - The 'all' flag must be 'N' if any databases are defined. 126 - Any values in the databases list must be strings. 127 128 @sort: __init__, __repr__, __str__, __cmp__, user, password, all, databases 129 """ 130
    131 - def __init__(self, user=None, password=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622
    132 """ 133 Constructor for the C{MysqlConfig} class. 134 135 @param user: User to execute backup as. 136 @param password: Password associated with user. 137 @param compressMode: Compress mode for backed-up files. 138 @param all: Indicates whether to back up all databases. 139 @param databases: List of databases to back up. 140 """ 141 self._user = None 142 self._password = None 143 self._compressMode = None 144 self._all = None 145 self._databases = None 146 self.user = user 147 self.password = password 148 self.compressMode = compressMode 149 self.all = all 150 self.databases = databases
    151
    152 - def __repr__(self):
    153 """ 154 Official string representation for class instance. 155 """ 156 return "MysqlConfig(%s, %s, %s, %s)" % (self.user, self.password, self.all, self.databases)
    157
    158 - def __str__(self):
    159 """ 160 Informal string representation for class instance. 161 """ 162 return self.__repr__()
    163
    164 - def __cmp__(self, other):
    165 """ 166 Definition of equals operator for this class. 167 @param other: Other object to compare to. 168 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 169 """ 170 if other is None: 171 return 1 172 if self.user != other.user: 173 if self.user < other.user: 174 return -1 175 else: 176 return 1 177 if self.password != other.password: 178 if self.password < other.password: 179 return -1 180 else: 181 return 1 182 if self.compressMode != other.compressMode: 183 if self.compressMode < other.compressMode: 184 return -1 185 else: 186 return 1 187 if self.all != other.all: 188 if self.all < other.all: 189 return -1 190 else: 191 return 1 192 if self.databases != other.databases: 193 if self.databases < other.databases: 194 return -1 195 else: 196 return 1 197 return 0
    198
    199 - def _setUser(self, value):
    200 """ 201 Property target used to set the user value. 202 """ 203 if value is not None: 204 if len(value) < 1: 205 raise ValueError("User must be non-empty string.") 206 self._user = value
    207
    208 - def _getUser(self):
    209 """ 210 Property target used to get the user value. 211 """ 212 return self._user
    213
    214 - def _setPassword(self, value):
    215 """ 216 Property target used to set the password value. 217 """ 218 if value is not None: 219 if len(value) < 1: 220 raise ValueError("Password must be non-empty string.") 221 self._password = value
    222
    223 - def _getPassword(self):
    224 """ 225 Property target used to get the password value. 226 """ 227 return self._password
    228
    229 - def _setCompressMode(self, value):
    230 """ 231 Property target used to set the compress mode. 232 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 233 @raise ValueError: If the value is not valid. 234 """ 235 if value is not None: 236 if value not in VALID_COMPRESS_MODES: 237 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 238 self._compressMode = value
    239
    240 - def _getCompressMode(self):
    241 """ 242 Property target used to get the compress mode. 243 """ 244 return self._compressMode
    245
    246 - def _setAll(self, value):
    247 """ 248 Property target used to set the 'all' flag. 249 No validations, but we normalize the value to C{True} or C{False}. 250 """ 251 if value: 252 self._all = True 253 else: 254 self._all = False
    255
    256 - def _getAll(self):
    257 """ 258 Property target used to get the 'all' flag. 259 """ 260 return self._all
    261
    262 - def _setDatabases(self, value):
    263 """ 264 Property target used to set the databases list. 265 Either the value must be C{None} or each element must be a string. 266 @raise ValueError: If the value is not a string. 267 """ 268 if value is None: 269 self._databases = None 270 else: 271 for database in value: 272 if len(database) < 1: 273 raise ValueError("Each database must be a non-empty string.") 274 try: 275 saved = self._databases 276 self._databases = ObjectTypeList(basestring, "string") 277 self._databases.extend(value) 278 except Exception, e: 279 self._databases = saved 280 raise e
    281
    282 - def _getDatabases(self):
    283 """ 284 Property target used to get the databases list. 285 """ 286 return self._databases
    287 288 user = property(_getUser, _setUser, None, "User to execute backup as.") 289 password = property(_getPassword, _setPassword, None, "Password associated with user.") 290 compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") 291 all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") 292 databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") 293
    294 295 ######################################################################## 296 # LocalConfig class definition 297 ######################################################################## 298 299 -class LocalConfig(object):
    300 301 """ 302 Class representing this extension's configuration document. 303 304 This is not a general-purpose configuration object like the main Cedar 305 Backup configuration object. Instead, it just knows how to parse and emit 306 MySQL-specific configuration values. Third parties who need to read and 307 write configuration related to this extension should access it through the 308 constructor, C{validate} and C{addConfig} methods. 309 310 @note: Lists within this class are "unordered" for equality comparisons. 311 312 @sort: __init__, __repr__, __str__, __cmp__, mysql, validate, addConfig 313 """ 314
    315 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    316 """ 317 Initializes a configuration object. 318 319 If you initialize the object without passing either C{xmlData} or 320 C{xmlPath} then configuration will be empty and will be invalid until it 321 is filled in properly. 322 323 No reference to the original XML data or original path is saved off by 324 this class. Once the data has been parsed (successfully or not) this 325 original information is discarded. 326 327 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 328 method will be called (with its default arguments) against configuration 329 after successfully parsing any passed-in XML. Keep in mind that even if 330 C{validate} is C{False}, it might not be possible to parse the passed-in 331 XML document if lower-level validations fail. 332 333 @note: It is strongly suggested that the C{validate} option always be set 334 to C{True} (the default) unless there is a specific need to read in 335 invalid configuration from disk. 336 337 @param xmlData: XML data representing configuration. 338 @type xmlData: String data. 339 340 @param xmlPath: Path to an XML file on disk. 341 @type xmlPath: Absolute path to a file on disk. 342 343 @param validate: Validate the document after parsing it. 344 @type validate: Boolean true/false. 345 346 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 347 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 348 @raise ValueError: If the parsed configuration document is not valid. 349 """ 350 self._mysql = None 351 self.mysql = None 352 if xmlData is not None and xmlPath is not None: 353 raise ValueError("Use either xmlData or xmlPath, but not both.") 354 if xmlData is not None: 355 self._parseXmlData(xmlData) 356 if validate: 357 self.validate() 358 elif xmlPath is not None: 359 xmlData = open(xmlPath).read() 360 self._parseXmlData(xmlData) 361 if validate: 362 self.validate()
    363
    364 - def __repr__(self):
    365 """ 366 Official string representation for class instance. 367 """ 368 return "LocalConfig(%s)" % (self.mysql)
    369
    370 - def __str__(self):
    371 """ 372 Informal string representation for class instance. 373 """ 374 return self.__repr__()
    375
    376 - def __cmp__(self, other):
    377 """ 378 Definition of equals operator for this class. 379 Lists within this class are "unordered" for equality comparisons. 380 @param other: Other object to compare to. 381 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 382 """ 383 if other is None: 384 return 1 385 if self.mysql != other.mysql: 386 if self.mysql < other.mysql: 387 return -1 388 else: 389 return 1 390 return 0
    391
    392 - def _setMysql(self, value):
    393 """ 394 Property target used to set the mysql configuration value. 395 If not C{None}, the value must be a C{MysqlConfig} object. 396 @raise ValueError: If the value is not a C{MysqlConfig} 397 """ 398 if value is None: 399 self._mysql = None 400 else: 401 if not isinstance(value, MysqlConfig): 402 raise ValueError("Value must be a C{MysqlConfig} object.") 403 self._mysql = value
    404
    405 - def _getMysql(self):
    406 """ 407 Property target used to get the mysql configuration value. 408 """ 409 return self._mysql
    410 411 mysql = property(_getMysql, _setMysql, None, "Mysql configuration in terms of a C{MysqlConfig} object.") 412
    413 - def validate(self):
    414 """ 415 Validates configuration represented by the object. 416 417 The compress mode must be filled in. Then, if the 'all' flag I{is} set, 418 no databases are allowed, and if the 'all' flag is I{not} set, at least 419 one database is required. 420 421 @raise ValueError: If one of the validations fails. 422 """ 423 if self.mysql is None: 424 raise ValueError("Mysql section is required.") 425 if self.mysql.compressMode is None: 426 raise ValueError("Compress mode value is required.") 427 if self.mysql.all: 428 if self.mysql.databases is not None and self.mysql.databases != []: 429 raise ValueError("Databases cannot be specified if 'all' flag is set.") 430 else: 431 if self.mysql.databases is None or len(self.mysql.databases) < 1: 432 raise ValueError("At least one MySQL database must be indicated if 'all' flag is not set.")
    433
    434 - def addConfig(self, xmlDom, parentNode):
    435 """ 436 Adds a <mysql> configuration section as the next child of a parent. 437 438 Third parties should use this function to write configuration related to 439 this extension. 440 441 We add the following fields to the document:: 442 443 user //cb_config/mysql/user 444 password //cb_config/mysql/password 445 compressMode //cb_config/mysql/compress_mode 446 all //cb_config/mysql/all 447 448 We also add groups of the following items, one list element per 449 item:: 450 451 database //cb_config/mysql/database 452 453 @param xmlDom: DOM tree as from C{impl.createDocument()}. 454 @param parentNode: Parent that the section should be appended to. 455 """ 456 if self.mysql is not None: 457 sectionNode = addContainerNode(xmlDom, parentNode, "mysql") 458 addStringNode(xmlDom, sectionNode, "user", self.mysql.user) 459 addStringNode(xmlDom, sectionNode, "password", self.mysql.password) 460 addStringNode(xmlDom, sectionNode, "compress_mode", self.mysql.compressMode) 461 addBooleanNode(xmlDom, sectionNode, "all", self.mysql.all) 462 if self.mysql.databases is not None: 463 for database in self.mysql.databases: 464 addStringNode(xmlDom, sectionNode, "database", database)
    465
    466 - def _parseXmlData(self, xmlData):
    467 """ 468 Internal method to parse an XML string into the object. 469 470 This method parses the XML document into a DOM tree (C{xmlDom}) and then 471 calls a static method to parse the mysql configuration section. 472 473 @param xmlData: XML data to be parsed 474 @type xmlData: String data 475 476 @raise ValueError: If the XML cannot be successfully parsed. 477 """ 478 (xmlDom, parentNode) = createInputDom(xmlData) 479 self._mysql = LocalConfig._parseMysql(parentNode)
    480 481 @staticmethod
    482 - def _parseMysql(parentNode):
    483 """ 484 Parses a mysql configuration section. 485 486 We read the following fields:: 487 488 user //cb_config/mysql/user 489 password //cb_config/mysql/password 490 compressMode //cb_config/mysql/compress_mode 491 all //cb_config/mysql/all 492 493 We also read groups of the following item, one list element per 494 item:: 495 496 databases //cb_config/mysql/database 497 498 @param parentNode: Parent node to search beneath. 499 500 @return: C{MysqlConfig} object or C{None} if the section does not exist. 501 @raise ValueError: If some filled-in value is invalid. 502 """ 503 mysql = None 504 section = readFirstChild(parentNode, "mysql") 505 if section is not None: 506 mysql = MysqlConfig() 507 mysql.user = readString(section, "user") 508 mysql.password = readString(section, "password") 509 mysql.compressMode = readString(section, "compress_mode") 510 mysql.all = readBoolean(section, "all") 511 mysql.databases = readStringList(section, "database") 512 return mysql
    513
    514 515 ######################################################################## 516 # Public functions 517 ######################################################################## 518 519 ########################### 520 # executeAction() function 521 ########################### 522 523 -def executeAction(configPath, options, config):
    524 """ 525 Executes the MySQL backup action. 526 527 @param configPath: Path to configuration file on disk. 528 @type configPath: String representing a path on disk. 529 530 @param options: Program command-line options. 531 @type options: Options object. 532 533 @param config: Program configuration. 534 @type config: Config object. 535 536 @raise ValueError: Under many generic error conditions 537 @raise IOError: If a backup could not be written for some reason. 538 """ 539 logger.debug("Executing MySQL extended action.") 540 if config.options is None or config.collect is None: 541 raise ValueError("Cedar Backup configuration is not properly filled in.") 542 local = LocalConfig(xmlPath=configPath) 543 if local.mysql.all: 544 logger.info("Backing up all databases.") 545 _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, 546 config.options.backupUser, config.options.backupGroup, None) 547 else: 548 logger.debug("Backing up %d individual databases.", len(local.mysql.databases)) 549 for database in local.mysql.databases: 550 logger.info("Backing up database [%s].", database) 551 _backupDatabase(config.collect.targetDir, local.mysql.compressMode, local.mysql.user, local.mysql.password, 552 config.options.backupUser, config.options.backupGroup, database) 553 logger.info("Executed the MySQL extended action successfully.")
    554
def _backupDatabase(targetDir, compressMode, user, password, backupUser, backupGroup, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This internal method wraps the public method and adds some functionality,
   like figuring out a filename, etc.

   @param targetDir: Directory into which backups should be written.
   @param compressMode: Compress mode to be used for backed-up files.
   @param user: User to use for connecting to the database (if any).
   @param password: Password associated with user (if any).
   @param backupUser: User to own resulting file.
   @param backupGroup: Group to own resulting file.
   @param database: Name of database, or C{None} for all databases.

   @return: Name of the generated backup file.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
   """
   (outputFile, filename) = _getOutputFile(targetDir, database, compressMode)
   try:
      backupDatabase(user, password, outputFile, database)
   finally:
      outputFile.close()
   if not os.path.exists(filename):
      raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename)
   changeOwnership(filename, backupUser, backupGroup)

# pylint: disable=R0204
def _getOutputFile(targetDir, database, compressMode):
   """
   Opens the output file used for saving the MySQL dump.

   The filename is either C{"mysqldump.txt"} or C{"mysqldump-<database>.txt"}.
   The C{".gz"} or C{".bz2"} extension is added when C{compressMode} is
   C{"gzip"} or C{"bzip2"}, respectively.

   @param targetDir: Target directory to write file in.
   @param database: Name of the database (if any)
   @param compressMode: Compress mode to be used for backed-up files.

   @return: Tuple of (output file object, filename)
   """
   if database is None:
      filename = os.path.join(targetDir, "mysqldump.txt")
   else:
      filename = os.path.join(targetDir, "mysqldump-%s.txt" % database)
   if compressMode == "gzip":
      filename = "%s.gz" % filename
      outputFile = GzipFile(filename, "w")
   elif compressMode == "bzip2":
      filename = "%s.bz2" % filename
      outputFile = BZ2File(filename, "w")
   else:
      outputFile = open(filename, "w")
   logger.debug("MySQL dump file will be [%s].", filename)
   return (outputFile, filename)

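The compress-mode dispatch above can be exercised outside of Cedar Backup. The sketch below reimplements the same filename and file-object selection using only the standard library; `open_dump_file` is a hypothetical stand-in for `_getOutputFile`, not part of the module.

```python
import bz2
import gzip
import os

def open_dump_file(target_dir, database=None, compress_mode="none"):
    """Pick a dump filename and open it, mirroring _getOutputFile()'s dispatch.

    Returns a (file object, filename) tuple; the caller must close the file.
    """
    name = "mysqldump.txt" if database is None else "mysqldump-%s.txt" % database
    filename = os.path.join(target_dir, name)
    if compress_mode == "gzip":
        filename += ".gz"
        return gzip.open(filename, "wb"), filename
    elif compress_mode == "bzip2":
        filename += ".bz2"
        return bz2.BZ2File(filename, "w"), filename
    return open(filename, "wb"), filename
```

Writing through the returned file object produces a valid `.gz` or `.bz2` archive that standard tools can decompress, which is why the caller can treat all three modes uniformly.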
############################
# backupDatabase() function
############################

def backupDatabase(user, password, backupFile, database=None):
   """
   Backs up an individual MySQL database, or all databases.

   This function backs up either a named local MySQL database or all local
   MySQL databases, using the passed-in user and password (if provided) for
   connectivity.  This function call I{always} results in a full backup.
   There is no facility for incremental backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller
   is responsible for closing the passed-in backup file.

   Often, the "root" database user will be used when backing up all
   databases.  An alternative is to create a separate MySQL "backup" user
   and grant that user rights to read (but not write) all of the databases
   that will be backed up.

   This function accepts a username and password.  However, you probably do
   not want to pass those values in.  This is because they will be provided
   to C{mysqldump} via the command-line C{--user} and C{--password} switches,
   which will be visible to other users in the process listing.

   Instead, you should configure the username and password in one of MySQL's
   configuration files.  Typically, this would be done by putting a stanza
   like this in C{/root/.my.cnf}, to provide C{mysqldump} with the root
   database username and its password::

      [mysqldump]
      user     = root
      password = <secret>

   If you are executing this function as some system user other than root,
   then the C{.my.cnf} file would be placed in the home directory of that
   user.  In either case, make sure to set restrictive permissions
   (typically, mode C{0600}) on C{.my.cnf} to make sure that other users
   cannot read the file.

   @param user: User to use for connecting to the database (if any)
   @type user: String representing MySQL username, or C{None}

   @param password: Password associated with user (if any)
   @type password: String representing MySQL password, or C{None}

   @param backupFile: File used for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param database: Name of the database to be backed up.
   @type database: String representing database name, or C{None} for all databases.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the MySQL dump.
   """
   args = [ "-all", "--flush-logs", "--opt", ]
   if user is not None:
      logger.warn("Warning: MySQL username will be visible in process listing (consider using ~/.my.cnf).")
      args.append("--user=%s" % user)
   if password is not None:
      logger.warn("Warning: MySQL password will be visible in process listing (consider using ~/.my.cnf).")
      args.append("--password=%s" % password)
   if database is None:
      args.insert(0, "--all-databases")
   else:
      args.insert(0, "--databases")
      args.append(database)
   command = resolveCommand(MYSQLDUMP_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      if database is None:
         raise IOError("Error [%d] executing MySQL database dump for all databases." % result)
      else:
         raise IOError("Error [%d] executing MySQL database dump for database [%s]." % (result, database))
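The argument-assembly logic in `backupDatabase()` can be isolated and tested without touching a real database. This is a minimal sketch; `build_mysqldump_args` is a hypothetical helper that mirrors the list construction shown above, deliberately keeping the historical `"-all"` switch as-is.

```python
def build_mysqldump_args(user=None, password=None, database=None):
    """Rebuild the mysqldump argument list the same way backupDatabase() does."""
    args = ["-all", "--flush-logs", "--opt"]
    if user is not None:
        args.append("--user=%s" % user)
    if password is not None:
        args.append("--password=%s" % password)
    if database is None:
        # Back up everything: the scope switch goes at the front of the list.
        args.insert(0, "--all-databases")
    else:
        # Back up one database: scope switch at the front, name at the end.
        args.insert(0, "--databases")
        args.append(database)
    return args
```

Note the ordering: `--all-databases` or `--databases` is inserted at the front of the list, while the database name (when given) is appended at the end, after any credential switches.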

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.cdwriter._ImageProperties-class.html
CedarBackup2.writers.cdwriter._ImageProperties
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class _ImageProperties

    Class _ImageProperties

    source code

    object --+
             |
            _ImageProperties
    

    Simple value object to hold image properties for DvdWriter.

Instance Methods
     
    __init__(self)
    x.__init__(...) initializes x; see help(type(x)) for signature
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)

    source code 

    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.util-module.html
util

    Module util


    Functions

    buildMediaLabel
    checkMediaState
    createWriter
    findDailyDirs
    getBackupFiles
    initializeMediaState
    writeIndicatorFile

    Variables

    MEDIA_LABEL_PREFIX
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.cli.Options-class.html
CedarBackup2.cli.Options
    Package CedarBackup2 :: Module cli :: Class Options

    Class Options

    source code

    object --+
             |
            Options
    
    Known Subclasses:

    Class representing command-line options for the cback script.

    The Options class is a Python object representation of the command-line options of the cback script.

The object representation is two-way: a command line string or a list of command line arguments can be used to create an Options object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An Options object can even be created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the Options class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to fields if you are programmatically filling an object.

    The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the Options.validate method. This method can be called at any time by a client, and will always be called immediately after creating a Options object from a command line and before exporting a Options object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines.


    Note: Lists within this class are "unordered" for equality comparisons.
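The field-level validation described above is implemented with Python properties. The following self-contained sketch illustrates the normalization pattern; the `Flags` class is hypothetical, and the real Options class applies the same idea to each of its flags.

```python
class Flags(object):
    """Minimal illustration of the property-based normalization Options uses."""

    def __init__(self):
        self._verbose = False

    def _setVerbose(self, value):
        # No validation needed here: any truthy or falsy input is
        # normalized to a strict boolean, so later code can rely on it.
        self._verbose = True if value else False

    def _getVerbose(self):
        return self._verbose

    verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose flag.")
```

Fields that do carry constraints (for example, the owner tuple) raise `ValueError` from their setter instead of silently normalizing, which is why programmatic assignment can fail at the moment of assignment rather than at validate() time.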

Instance Methods
     
    __init__(self, argumentList=None, argumentString=None, validate=True)
    Initializes an options object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setHelp(self, value)
    Property target used to set the help flag.
    source code
     
    _getHelp(self)
    Property target used to get the help flag.
    source code
     
    _setVersion(self, value)
    Property target used to set the version flag.
    source code
     
    _getVersion(self)
    Property target used to get the version flag.
    source code
     
    _setVerbose(self, value)
    Property target used to set the verbose flag.
    source code
     
    _getVerbose(self)
    Property target used to get the verbose flag.
    source code
     
    _setQuiet(self, value)
    Property target used to set the quiet flag.
    source code
     
    _getQuiet(self)
    Property target used to get the quiet flag.
    source code
     
    _setConfig(self, value)
    Property target used to set the config parameter.
    source code
     
    _getConfig(self)
    Property target used to get the config parameter.
    source code
     
    _setFull(self, value)
    Property target used to set the full flag.
    source code
     
    _getFull(self)
    Property target used to get the full flag.
    source code
     
    _setManaged(self, value)
    Property target used to set the managed flag.
    source code
     
    _getManaged(self)
    Property target used to get the managed flag.
    source code
     
    _setManagedOnly(self, value)
    Property target used to set the managedOnly flag.
    source code
     
    _getManagedOnly(self)
    Property target used to get the managedOnly flag.
    source code
     
    _setLogfile(self, value)
    Property target used to set the logfile parameter.
    source code
     
    _getLogfile(self)
    Property target used to get the logfile parameter.
    source code
     
    _setOwner(self, value)
    Property target used to set the owner parameter.
    source code
     
    _getOwner(self)
    Property target used to get the owner parameter.
    source code
     
    _setMode(self, value)
    Property target used to set the mode parameter.
    source code
     
    _getMode(self)
    Property target used to get the mode parameter.
    source code
     
    _setOutput(self, value)
    Property target used to set the output flag.
    source code
     
    _getOutput(self)
    Property target used to get the output flag.
    source code
     
    _setDebug(self, value)
    Property target used to set the debug flag.
    source code
     
    _getDebug(self)
    Property target used to get the debug flag.
    source code
     
    _setStacktrace(self, value)
    Property target used to set the stacktrace flag.
    source code
     
    _getStacktrace(self)
    Property target used to get the stacktrace flag.
    source code
     
    _setDiagnostics(self, value)
    Property target used to set the diagnostics flag.
    source code
     
    _getDiagnostics(self)
    Property target used to get the diagnostics flag.
    source code
     
    _setActions(self, value)
    Property target used to set the actions list.
    source code
     
    _getActions(self)
    Property target used to get the actions list.
    source code
     
    validate(self)
    Validates command-line options represented by the object.
    source code
     
    buildArgumentList(self, validate=True)
    Extracts options into a list of command line arguments.
    source code
     
    buildArgumentString(self, validate=True)
    Extracts options into a string of command-line arguments.
    source code
     
    _parseArgumentList(self, argumentList)
    Internal method to parse a list of command-line arguments.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      help
    Command-line help (-h,--help) flag.
      version
    Command-line version (-V,--version) flag.
      verbose
    Command-line verbose (-b,--verbose) flag.
      quiet
    Command-line quiet (-q,--quiet) flag.
      config
    Command-line configuration file (-c,--config) parameter.
      full
    Command-line full-backup (-f,--full) flag.
      managed
    Command-line managed (-M,--managed) flag.
      managedOnly
    Command-line managed-only (-N,--managed-only) flag.
      logfile
    Command-line logfile (-l,--logfile) parameter.
      owner
    Command-line owner (-o,--owner) parameter, as tuple (user,group).
      mode
    Command-line mode (-m,--mode) parameter.
      output
    Command-line output (-O,--output) flag.
      debug
    Command-line debug (-d,--debug) flag.
      stacktrace
    Command-line stacktrace (-s,--stack) flag.
      diagnostics
    Command-line diagnostics (-D,--diagnostics) flag.
      actions
    Command-line actions list.

    Inherited from object: __class__

Method Details

    __init__(self, argumentList=None, argumentString=None, validate=True)
    (Constructor)

    source code 

    Initializes an options object.

    If you initialize the object without passing either argumentList or argumentString, the object will be empty and will be invalid until it is filled in properly.

    No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    The argument list is assumed to be a list of arguments, not including the name of the command, something like sys.argv[1:]. If you pass sys.argv instead, things are not going to work.

    The argument string will be parsed into an argument list by the util.splitCommandLine function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to sys.argv[1:], just like argumentList.

    Unless the validate argument is False, the Options.validate method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if validate is False, it might not be possible to parse the passed-in command line, so an exception might still be raised.

    Parameters:
    • argumentList (List of arguments, i.e. sys.argv) - Command line for a program.
    • argumentString (String, i.e. "cback --verbose stage store") - Command line for a program.
    • validate (Boolean true/false.) - Validate the command line after parsing it.
    Raises:
    • getopt.GetoptError - If the command-line arguments could not be parsed.
    • ValueError - If the command-line arguments are invalid.
    Overrides: object.__init__
    Notes:
    • The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback script.
    • It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid command line arguments.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setHelp(self, value)

    source code 

    Property target used to set the help flag. No validations, but we normalize the value to True or False.

    _setVersion(self, value)

    source code 

    Property target used to set the version flag. No validations, but we normalize the value to True or False.

    _setVerbose(self, value)

    source code 

    Property target used to set the verbose flag. No validations, but we normalize the value to True or False.

    _setQuiet(self, value)

    source code 

    Property target used to set the quiet flag. No validations, but we normalize the value to True or False.

    _setFull(self, value)

    source code 

    Property target used to set the full flag. No validations, but we normalize the value to True or False.

    _setManaged(self, value)

    source code 

    Property target used to set the managed flag. No validations, but we normalize the value to True or False.

    _setManagedOnly(self, value)

    source code 

    Property target used to set the managedOnly flag. No validations, but we normalize the value to True or False.

    _setLogfile(self, value)

    source code 

    Property target used to set the logfile parameter.

    Raises:
    • ValueError - If the value cannot be encoded properly.

    _setOwner(self, value)

    source code 

    Property target used to set the owner parameter. If not None, the owner must be a (user,group) tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple.

    Raises:
    • ValueError - If the value is not valid.

    _getOwner(self)

    source code 

    Property target used to get the owner parameter. The parameter is a tuple of (user, group).

    _setOutput(self, value)

    source code 

    Property target used to set the output flag. No validations, but we normalize the value to True or False.

    _setDebug(self, value)

    source code 

    Property target used to set the debug flag. No validations, but we normalize the value to True or False.

    _setStacktrace(self, value)

    source code 

    Property target used to set the stacktrace flag. No validations, but we normalize the value to True or False.

    _setDiagnostics(self, value)

    source code 

    Property target used to set the diagnostics flag. No validations, but we normalize the value to True or False.

    _setActions(self, value)

    source code 

    Property target used to set the actions list. We don't restrict the contents of actions. They're validated somewhere else.

    Raises:
    • ValueError - If the value is not valid.

    validate(self)

    source code 

    Validates command-line options represented by the object.

    Unless --help or --version are supplied, at least one action must be specified. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality.

    Raises:
    • ValueError - If one of the validations fails.

    Note: The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback script.

    buildArgumentList(self, validate=True)

    source code 

    Extracts options into a list of command line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the argumentList parameter. Unlike buildArgumentString, string arguments are not quoted here, because there is no need for it.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    List representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    buildArgumentString(self, validate=True)

    source code 

    Extracts options into a string of command-line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes ("). The resulting string will be suitable for passing back to the constructor in the argumentString parameter.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    String representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    _parseArgumentList(self, argumentList)

    source code 

    Internal method to parse a list of command-line arguments.

    Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the validate method).

    For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. -l and a --logfile) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used.

    Parameters:
    • argumentList (List of arguments to a command, i.e. sys.argv[1:]) - List of arguments to a command.
    Raises:
    • ValueError - If the argument list cannot be successfully parsed.
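The "last entry wins" rule for a repeated switch falls out naturally from iterating getopt's results in order, as this self-contained sketch shows. `parse_logfile` is a hypothetical reduction of the real parser; the long-over-short preference for mixed `-l`/`--logfile` duplicates requires an extra pass and is omitted here.

```python
import getopt

def parse_logfile(argv):
    """Show how repeated switches resolve: iterate in order, last assignment wins."""
    logfile = None
    # getopt preserves command-line order, so later occurrences
    # of the same switch overwrite earlier assignments below.
    opts, _remaining = getopt.getopt(argv, "l:", ["logfile="])
    for switch, value in opts:
        if switch in ("-l", "--logfile"):
            logfile = value
    return logfile
```

For example, `parse_logfile(["-l", "a.log", "-l", "b.log"])` keeps the second value, matching the duplicated-switch behavior documented above.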

Property Details

    help

    Command-line help (-h,--help) flag.

    Get Method:
    _getHelp(self) - Property target used to get the help flag.
    Set Method:
    _setHelp(self, value) - Property target used to set the help flag.

    version

    Command-line version (-V,--version) flag.

    Get Method:
    _getVersion(self) - Property target used to get the version flag.
    Set Method:
    _setVersion(self, value) - Property target used to set the version flag.

    verbose

    Command-line verbose (-b,--verbose) flag.

    Get Method:
    _getVerbose(self) - Property target used to get the verbose flag.
    Set Method:
    _setVerbose(self, value) - Property target used to set the verbose flag.

    quiet

    Command-line quiet (-q,--quiet) flag.

    Get Method:
    _getQuiet(self) - Property target used to get the quiet flag.
    Set Method:
    _setQuiet(self, value) - Property target used to set the quiet flag.

    config

    Command-line configuration file (-c,--config) parameter.

    Get Method:
    _getConfig(self) - Property target used to get the config parameter.
    Set Method:
    _setConfig(self, value) - Property target used to set the config parameter.

    full

    Command-line full-backup (-f,--full) flag.

    Get Method:
    _getFull(self) - Property target used to get the full flag.
    Set Method:
    _setFull(self, value) - Property target used to set the full flag.

    managed

    Command-line managed (-M,--managed) flag.

    Get Method:
    _getManaged(self) - Property target used to get the managed flag.
    Set Method:
    _setManaged(self, value) - Property target used to set the managed flag.

    managedOnly

    Command-line managed-only (-N,--managed-only) flag.

    Get Method:
    _getManagedOnly(self) - Property target used to get the managedOnly flag.
    Set Method:
    _setManagedOnly(self, value) - Property target used to set the managedOnly flag.

    logfile

    Command-line logfile (-l,--logfile) parameter.

    Get Method:
    _getLogfile(self) - Property target used to get the logfile parameter.
    Set Method:
    _setLogfile(self, value) - Property target used to set the logfile parameter.

    owner

    Command-line owner (-o,--owner) parameter, as tuple (user,group).

    Get Method:
    _getOwner(self) - Property target used to get the owner parameter.
    Set Method:
    _setOwner(self, value) - Property target used to set the owner parameter.

    mode

    Command-line mode (-m,--mode) parameter.

    Get Method:
    _getMode(self) - Property target used to get the mode parameter.
    Set Method:
    _setMode(self, value) - Property target used to set the mode parameter.

    output

    Command-line output (-O,--output) flag.

    Get Method:
    _getOutput(self) - Property target used to get the output flag.
    Set Method:
    _setOutput(self, value) - Property target used to set the output flag.

    debug

    Command-line debug (-d,--debug) flag.

    Get Method:
    _getDebug(self) - Property target used to get the debug flag.
    Set Method:
    _setDebug(self, value) - Property target used to set the debug flag.

    stacktrace

    Command-line stacktrace (-s,--stack) flag.

    Get Method:
    _getStacktrace(self) - Property target used to get the stacktrace flag.
    Set Method:
    _setStacktrace(self, value) - Property target used to set the stacktrace flag.

    diagnostics

    Command-line diagnostics (-D,--diagnostics) flag.

    Get Method:
    _getDiagnostics(self) - Property target used to get the diagnostics flag.
    Set Method:
    _setDiagnostics(self, value) - Property target used to set the diagnostics flag.

    actions

    Command-line actions list.

    Get Method:
    _getActions(self) - Property target used to get the actions list.
    Set Method:
    _setActions(self, value) - Property target used to set the actions list.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.tools.amazons3-module.html
CedarBackup2.tools.amazons3
    Package CedarBackup2 :: Package tools :: Module amazons3

    Module amazons3

    source code

Synchronizes a local directory with an Amazon S3 bucket.

    No configuration is required; all necessary information is taken from the command-line. The only thing configuration would help with is the path resolver interface, and it doesn't seem worth it to require configuration just to get that.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      Options
    Class representing command-line options for the cback-amazons3-sync script.
Functions
     
    cli()
    Implements the command-line interface for the cback-amazons3-sync script.
    source code
     
    _usage(fd=sys.stdout)
    Prints usage information for the cback-amazons3-sync script.
    source code
     
    _version(fd=sys.stdout)
    Prints version information for the cback script.
    source code
     
    _diagnostics(fd=sys.stdout)
    Prints runtime diagnostics information.
    source code
     
    _executeAction(options)
    Implements the guts of the cback-amazons3-sync tool.
    source code
     
    _buildSourceFiles(sourceDir)
    Build a list of files in a source directory
    source code
     
    _checkSourceFiles(sourceDir, sourceFiles)
    Check source files, trying to guess which ones will have encoding problems.
    source code
     
    _synchronizeBucket(sourceDir, s3BucketUrl)
    Synchronize a local directory to an Amazon S3 bucket.
    source code
     
    _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl)
    Verify that a source directory is equivalent to an Amazon S3 bucket.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.tools.amazons3")
      AWS_COMMAND = ['aws']
      SHORT_SWITCHES = 'hVbql:o:m:OdsDvw'
      LONG_SWITCHES = ['help', 'version', 'verbose', 'quiet', 'logfi...
      __package__ = 'CedarBackup2.tools'
Function Details

    cli()

    source code 

    Implements the command-line interface for the cback-amazons3-sync script.

    Essentially, this is the "main routine" for the cback-amazons3-sync script. It does all of the argument processing for the script, and then also implements the tool functionality.

This function looks pretty similar to CedarBackup2.cli.cli(). It's not easy to refactor this code to make it reusable and also readable, so I've decided to just live with the duplication.

    A different error code is returned for each type of failure:

    • 1: The Python interpreter version is < 2.7
    • 2: Error processing command-line arguments
    • 3: Error configuring logging
    • 5: Backup was interrupted with a CTRL-C or similar
    • 6: Error executing other parts of the script
    Returns:
    Error code as described above.

    Note: This script uses print rather than logging to the INFO level, because it is interactive. Underlying Cedar Backup functionality uses the logging mechanism exclusively.

    _usage(fd=sys.stdout)

    source code 

    Prints usage information for the cback-amazons3-sync script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _version(fd=sys.stdout)

    source code 

    Prints version information for the cback script.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _diagnostics(fd=sys.stdout)

    source code 

    Prints runtime diagnostics information.

    Parameters:
    • fd - File descriptor used to print information.

    Note: The fd is used rather than print to facilitate unit testing.

    _executeAction(options)

    source code 

    Implements the guts of the cback-amazons3-sync tool.

    Parameters:
    • options (Options object.) - Program command-line options.
    Raises:
    • Exception - Under many generic error conditions

    _buildSourceFiles(sourceDir)

    source code 

    Build a list of files in a source directory

    Parameters:
    • sourceDir - Local source directory
    Returns:
    FilesystemList with contents of source directory

    _checkSourceFiles(sourceDir, sourceFiles)

    source code 

    Check source files, trying to guess which ones will have encoding problems.

    Parameters:
• sourceDir - Local source directory
• sourceFiles - Filesystem list containing contents of source directory
    Raises:

    _synchronizeBucket(sourceDir, s3BucketUrl)

    source code 

    Synchronize a local directory to an Amazon S3 bucket.

    Parameters:
    • sourceDir - Local source directory
    • s3BucketUrl - Target S3 bucket URL
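    Assuming the synchronization shells out to the AWS CLI, the command could be assembled along these lines; the exact flags shown are illustrative, not necessarily what the tool passes.

    ```python
    def build_sync_command(source_dir, s3_bucket_url, delete=True):
        # Illustrative "aws s3 sync" command construction; the option
        # set here is a guess, not the tool's actual invocation.
        command = ["aws", "s3", "sync", source_dir, s3_bucket_url]
        if delete:
            command.append("--delete")
        return command
    ```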

    _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl)

    source code 

    Verify that a source directory is equivalent to an Amazon S3 bucket.

    Parameters:
    • sourceDir - Local source directory
    • sourceFiles - Filesystem list containing contents of source directory
    • s3BucketUrl - Target S3 bucket URL

    Variables Details

    LONG_SWITCHES

    Value:
    ['help',
     'version',
     'verbose',
     'quiet',
     'logfile=',
     'owner=',
     'mode=',
     'output',
    ...
    

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.amazons3.LocalConfig-class.html
CedarBackup2.extend.amazons3.LocalConfig
    Package CedarBackup2 :: Package extend :: Module amazons3 :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit amazons3-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <amazons3> configuration section as the next child of a parent.
    source code
     
    _setAmazonS3(self, value)
    Property target used to set the amazons3 configuration value.
    source code
     
    _getAmazonS3(self)
    Property target used to get the amazons3 configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Static Methods
     
    _parseAmazonS3(parent)
    Parses an amazons3 configuration section.
    source code
    Properties
      amazons3
    AmazonS3 configuration in terms of an AmazonS3Config object.

    Inherited from object: __class__

    Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    AmazonS3 configuration must be filled in. Within that, the s3Bucket target must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <amazons3> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      warnMidnite                 //cb_config/amazons3/warn_midnite
      s3Bucket                    //cb_config/amazons3/s3_bucket
      encryptCommand              //cb_config/amazons3/encrypt
      fullBackupSizeLimit         //cb_config/amazons3/full_size_limit
      incrementalBackupSizeLimit  //cb_config/amazons3/incr_size_limit
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
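    The field-to-XML mapping above can be illustrated with a small sketch that builds an equivalent section using xml.dom.minidom. The helper name and the Y/N value formatting are hypothetical; only the element names come from the documented paths.

    ```python
    from xml.dom import minidom

    def add_amazons3_section(doc, parent, s3_bucket, warn_midnite=False):
        # Hypothetical helper mirroring the documented XML layout:
        # an <amazons3> section appended as the next child of parent.
        section = doc.createElement("amazons3")
        parent.appendChild(section)
        for tag, value in [("warn_midnite", "Y" if warn_midnite else "N"),
                           ("s3_bucket", s3_bucket)]:
            node = doc.createElement(tag)
            node.appendChild(doc.createTextNode(value))
            section.appendChild(node)
        return section

    impl = minidom.getDOMImplementation()
    doc = impl.createDocument(None, "cb_config", None)
    add_amazons3_section(doc, doc.documentElement, "s3://example-bucket/backup")
    ```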

    _setAmazonS3(self, value)

    source code 

    Property target used to set the amazons3 configuration value. If not None, the value must be an AmazonS3Config object.

    Raises:
    • ValueError - If the value is not an AmazonS3Config

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the amazons3 configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseAmazonS3(parent)
    Static Method

    source code 

    Parses an amazons3 configuration section.

    We read the following individual fields:

      warnMidnite                 //cb_config/amazons3/warn_midnite
      s3Bucket                    //cb_config/amazons3/s3_bucket
      encryptCommand              //cb_config/amazons3/encrypt
      fullBackupSizeLimit         //cb_config/amazons3/full_size_limit
      incrementalBackupSizeLimit  //cb_config/amazons3/incr_size_limit
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    AmazonS3Config object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    Property Details

    amazons3

    AmazonS3 configuration in terms of an AmazonS3Config object.

    Get Method:
    _getAmazonS3(self) - Property target used to get the amazons3 configuration value.
    Set Method:
    _setAmazonS3(self, value) - Property target used to set the amazons3 configuration value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.tools.amazons3.Options-class.html
CedarBackup2.tools.amazons3.Options
    Package CedarBackup2 :: Package tools :: Module amazons3 :: Class Options

    Class Options

    source code

    object --+
             |
            Options
    

    Class representing command-line options for the cback-amazons3-sync script.

    The Options class is a Python object representation of the command-line options of the cback-amazons3-sync script.

    The object representation is two-way: a command line string or a list of command line arguments can be used to create an Options object, and then changes to the object can be propagated back to a list of command-line arguments or to a command-line string. An Options object can even be created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the Options class. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to fields if you are programmatically filling an object.

    The second level of validation is post-completion validation. Certain validations don't make sense until an object representation of options is fully "complete". We don't want these validations to apply all of the time, because it would make building up a valid object from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the Options.validate method. This method can be called at any time by a client, and will always be called immediately after creating an Options object from a command line and before exporting an Options object back to a command line. This way, we get acceptable ease-of-use but we also don't accept or emit invalid command lines.


    Note: Lists within this class are "unordered" for equality comparisons.
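    The property-based field-level validation described above can be sketched with a minimal, hypothetical options class (SyncOptions is not part of CedarBackup2; it only illustrates the normalize-to-boolean pattern the setters document):

    ```python
    class SyncOptions(object):
        # Minimal sketch of field-level validation via Python
        # properties, mirroring the documented setter behavior.
        def __init__(self):
            self._verbose = False

        def _set_verbose(self, value):
            # No validations, but normalize the value to True or False.
            self._verbose = bool(value)

        def _get_verbose(self):
            return self._verbose

        verbose = property(_get_verbose, _set_verbose)
    ```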

    Instance Methods
     
    __init__(self, argumentList=None, argumentString=None, validate=True)
    Initializes an options object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setHelp(self, value)
    Property target used to set the help flag.
    source code
     
    _getHelp(self)
    Property target used to get the help flag.
    source code
     
    _setVersion(self, value)
    Property target used to set the version flag.
    source code
     
    _getVersion(self)
    Property target used to get the version flag.
    source code
     
    _setVerbose(self, value)
    Property target used to set the verbose flag.
    source code
     
    _getVerbose(self)
    Property target used to get the verbose flag.
    source code
     
    _setQuiet(self, value)
    Property target used to set the quiet flag.
    source code
     
    _getQuiet(self)
    Property target used to get the quiet flag.
    source code
     
    _setLogfile(self, value)
    Property target used to set the logfile parameter.
    source code
     
    _getLogfile(self)
    Property target used to get the logfile parameter.
    source code
     
    _setOwner(self, value)
    Property target used to set the owner parameter.
    source code
     
    _getOwner(self)
    Property target used to get the owner parameter.
    source code
     
    _setMode(self, value)
    Property target used to set the mode parameter.
    source code
     
    _getMode(self)
    Property target used to get the mode parameter.
    source code
     
    _setOutput(self, value)
    Property target used to set the output flag.
    source code
     
    _getOutput(self)
    Property target used to get the output flag.
    source code
     
    _setDebug(self, value)
    Property target used to set the debug flag.
    source code
     
    _getDebug(self)
    Property target used to get the debug flag.
    source code
     
    _setStacktrace(self, value)
    Property target used to set the stacktrace flag.
    source code
     
    _getStacktrace(self)
    Property target used to get the stacktrace flag.
    source code
     
    _setDiagnostics(self, value)
    Property target used to set the diagnostics flag.
    source code
     
    _getDiagnostics(self)
    Property target used to get the diagnostics flag.
    source code
     
    _setVerifyOnly(self, value)
    Property target used to set the verifyOnly flag.
    source code
     
    _getVerifyOnly(self)
    Property target used to get the verifyOnly flag.
    source code
     
    _setIgnoreWarnings(self, value)
    Property target used to set the ignoreWarnings flag.
    source code
     
    _getIgnoreWarnings(self)
    Property target used to get the ignoreWarnings flag.
    source code
     
    _setSourceDir(self, value)
    Property target used to set the sourceDir parameter.
    source code
     
    _getSourceDir(self)
    Property target used to get the sourceDir parameter.
    source code
     
    _setS3BucketUrl(self, value)
    Property target used to set the s3BucketUrl parameter.
    source code
     
    _getS3BucketUrl(self)
    Property target used to get the s3BucketUrl parameter.
    source code
     
    validate(self)
    Validates command-line options represented by the object.
    source code
     
    buildArgumentList(self, validate=True)
    Extracts options into a list of command line arguments.
    source code
     
    buildArgumentString(self, validate=True)
    Extracts options into a string of command-line arguments.
    source code
     
    _parseArgumentList(self, argumentList)
    Internal method to parse a list of command-line arguments.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      help
    Command-line help (-h,--help) flag.
      version
    Command-line version (-V,--version) flag.
      verbose
    Command-line verbose (-b,--verbose) flag.
      quiet
    Command-line quiet (-q,--quiet) flag.
      logfile
    Command-line logfile (-l,--logfile) parameter.
      owner
    Command-line owner (-o,--owner) parameter, as tuple (user,group).
      mode
    Command-line mode (-m,--mode) parameter.
      output
    Command-line output (-O,--output) flag.
      debug
    Command-line debug (-d,--debug) flag.
      stacktrace
    Command-line stacktrace (-s,--stack) flag.
      diagnostics
    Command-line diagnostics (-D,--diagnostics) flag.
      verifyOnly
    Command-line verifyOnly (-v,--verifyOnly) flag.
      ignoreWarnings
    Command-line ignoreWarnings (-w,--ignoreWarnings) flag.
      sourceDir
    Command-line sourceDir, source of sync.
      s3BucketUrl
    Command-line s3BucketUrl, target of sync.

    Inherited from object: __class__

    Method Details

    __init__(self, argumentList=None, argumentString=None, validate=True)
    (Constructor)

    source code 

    Initializes an options object.

    If you initialize the object without passing either argumentList or argumentString, the object will be empty and will be invalid until it is filled in properly.

    No reference to the original arguments is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    The argument list is assumed to be a list of arguments, not including the name of the command, something like sys.argv[1:]. If you pass sys.argv instead, the command name will be misinterpreted as the first argument and parsing will not work as expected.

    The argument string will be parsed into an argument list by the util.splitCommandLine function (see the documentation for that function for some important notes about its limitations). There is an assumption that the resulting list will be equivalent to sys.argv[1:], just like argumentList.

    Unless the validate argument is False, the Options.validate method will be called (with its default arguments) after successfully parsing any passed-in command line. This validation ensures that appropriate actions, etc. have been specified. Keep in mind that even if validate is False, it might not be possible to parse the passed-in command line, so an exception might still be raised.

    Parameters:
    • argumentList (List of arguments, i.e. sys.argv[1:]) - Command line for a program.
    • argumentString (String, i.e. "cback --verbose stage store") - Command line for a program.
    • validate (Boolean true/false.) - Validate the command line after parsing it.
    Raises:
    • getopt.GetoptError - If the command-line arguments could not be parsed.
    • ValueError - If the command-line arguments are invalid.
    Overrides: object.__init__
    Notes:
    • The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback-amazons3-sync script.
    • It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid command line arguments.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setHelp(self, value)

    source code 

    Property target used to set the help flag. No validations, but we normalize the value to True or False.

    _setVersion(self, value)

    source code 

    Property target used to set the version flag. No validations, but we normalize the value to True or False.

    _setVerbose(self, value)

    source code 

    Property target used to set the verbose flag. No validations, but we normalize the value to True or False.

    _setQuiet(self, value)

    source code 

    Property target used to set the quiet flag. No validations, but we normalize the value to True or False.

    _setLogfile(self, value)

    source code 

    Property target used to set the logfile parameter.

    Raises:
    • ValueError - If the value cannot be encoded properly.

    _setOwner(self, value)

    source code 

    Property target used to set the owner parameter. If not None, the owner must be a (user,group) tuple or list. Strings (and inherited children of strings) are explicitly disallowed. The value will be normalized to a tuple.

    Raises:
    • ValueError - If the value is not valid.

    _getOwner(self)

    source code 

    Property target used to get the owner parameter. The parameter is a tuple of (user, group).

    _setOutput(self, value)

    source code 

    Property target used to set the output flag. No validations, but we normalize the value to True or False.

    _setDebug(self, value)

    source code 

    Property target used to set the debug flag. No validations, but we normalize the value to True or False.

    _setStacktrace(self, value)

    source code 

    Property target used to set the stacktrace flag. No validations, but we normalize the value to True or False.

    _setDiagnostics(self, value)

    source code 

    Property target used to set the diagnostics flag. No validations, but we normalize the value to True or False.

    _setVerifyOnly(self, value)

    source code 

    Property target used to set the verifyOnly flag. No validations, but we normalize the value to True or False.

    _setIgnoreWarnings(self, value)

    source code 

    Property target used to set the ignoreWarnings flag. No validations, but we normalize the value to True or False.

    validate(self)

    source code 

    Validates command-line options represented by the object.

    Unless --help or --version are supplied, the sourceDir and s3BucketUrl arguments must be filled in. Other validations (as for allowed values for particular options) will be taken care of at assignment time by the properties functionality.

    Raises:
    • ValueError - If one of the validations fails.

    Note: The command line format is specified by the _usage function. Call _usage to see a usage statement for the cback-amazons3-sync script.

    buildArgumentList(self, validate=True)

    source code 

    Extracts options into a list of command line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument list. Besides that, the argument list is normalized to use the long option names (i.e. --version rather than -V). The resulting list will be suitable for passing back to the constructor in the argumentList parameter. Unlike buildArgumentString, string arguments are not quoted here, because there is no need for it.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument list will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    List representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    buildArgumentString(self, validate=True)

    source code 

    Extracts options into a string of command-line arguments.

    The original order of the various arguments (if, indeed, the object was initialized with a command-line) is not preserved in this generated argument string. Besides that, the argument string is normalized to use the long option names (i.e. --version rather than -V) and to quote all string arguments with double quotes ("). The resulting string will be suitable for passing back to the constructor in the argumentString parameter.

    Unless the validate parameter is False, the Options.validate method will be called (with its default arguments) against the options before extracting the command line. If the options are not valid, then an argument string will not be extracted.

    Parameters:
    • validate (Boolean true/false.) - Validate the options before extracting the command line.
    Returns:
    String representation of command-line arguments.
    Raises:
    • ValueError - If options within the object are invalid.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to extract an invalid command line.

    _parseArgumentList(self, argumentList)

    source code 

    Internal method to parse a list of command-line arguments.

    Most of the validation we do here has to do with whether the arguments can be parsed and whether any values which exist are valid. We don't do any validation as to whether required elements exist or whether elements exist in the proper combination (instead, that's the job of the validate method).

    For any of the options which supply parameters, if the option is duplicated with long and short switches (i.e. -l and a --logfile) then the long switch is used. If the same option is duplicated with the same switch (long or short), then the last entry on the command line is used.

    Parameters:
    • argumentList (List of arguments to a command, i.e. sys.argv[1:]) - List of arguments to a command.
    Raises:
    • ValueError - If the argument list cannot be successfully parsed.
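    The last-entry-wins behavior for duplicated switches can be sketched with Python's standard getopt module; parse_logfile is a hypothetical helper, not part of the Options class:

    ```python
    import getopt

    def parse_logfile(argument_list):
        # getopt returns one (switch, value) pair per occurrence, in
        # command-line order; taking the last matching pair implements
        # "the last entry on the command line is used".
        opts, _remaining = getopt.getopt(argument_list, "l:", ["logfile="])
        logfile = None
        for switch, value in opts:
            if switch in ("-l", "--logfile"):
                logfile = value
        return logfile
    ```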

    Property Details

    help

    Command-line help (-h,--help) flag.

    Get Method:
    _getHelp(self) - Property target used to get the help flag.
    Set Method:
    _setHelp(self, value) - Property target used to set the help flag.

    version

    Command-line version (-V,--version) flag.

    Get Method:
    _getVersion(self) - Property target used to get the version flag.
    Set Method:
    _setVersion(self, value) - Property target used to set the version flag.

    verbose

    Command-line verbose (-b,--verbose) flag.

    Get Method:
    _getVerbose(self) - Property target used to get the verbose flag.
    Set Method:
    _setVerbose(self, value) - Property target used to set the verbose flag.

    quiet

    Command-line quiet (-q,--quiet) flag.

    Get Method:
    _getQuiet(self) - Property target used to get the quiet flag.
    Set Method:
    _setQuiet(self, value) - Property target used to set the quiet flag.

    logfile

    Command-line logfile (-l,--logfile) parameter.

    Get Method:
    _getLogfile(self) - Property target used to get the logfile parameter.
    Set Method:
    _setLogfile(self, value) - Property target used to set the logfile parameter.

    owner

    Command-line owner (-o,--owner) parameter, as tuple (user,group).

    Get Method:
    _getOwner(self) - Property target used to get the owner parameter.
    Set Method:
    _setOwner(self, value) - Property target used to set the owner parameter.

    mode

    Command-line mode (-m,--mode) parameter.

    Get Method:
    _getMode(self) - Property target used to get the mode parameter.
    Set Method:
    _setMode(self, value) - Property target used to set the mode parameter.

    output

    Command-line output (-O,--output) flag.

    Get Method:
    _getOutput(self) - Property target used to get the output flag.
    Set Method:
    _setOutput(self, value) - Property target used to set the output flag.

    debug

    Command-line debug (-d,--debug) flag.

    Get Method:
    _getDebug(self) - Property target used to get the debug flag.
    Set Method:
    _setDebug(self, value) - Property target used to set the debug flag.

    stacktrace

    Command-line stacktrace (-s,--stack) flag.

    Get Method:
    _getStacktrace(self) - Property target used to get the stacktrace flag.
    Set Method:
    _setStacktrace(self, value) - Property target used to set the stacktrace flag.

    diagnostics

    Command-line diagnostics (-D,--diagnostics) flag.

    Get Method:
    _getDiagnostics(self) - Property target used to get the diagnostics flag.
    Set Method:
    _setDiagnostics(self, value) - Property target used to set the diagnostics flag.

    verifyOnly

    Command-line verifyOnly (-v,--verifyOnly) flag.

    Get Method:
    _getVerifyOnly(self) - Property target used to get the verifyOnly flag.
    Set Method:
    _setVerifyOnly(self, value) - Property target used to set the verifyOnly flag.

    ignoreWarnings

    Command-line ignoreWarnings (-w,--ignoreWarnings) flag.

    Get Method:
    _getIgnoreWarnings(self) - Property target used to get the ignoreWarnings flag.
    Set Method:
    _setIgnoreWarnings(self, value) - Property target used to set the ignoreWarnings flag.

    sourceDir

    Command-line sourceDir, source of sync.

    Get Method:
    _getSourceDir(self) - Property target used to get the sourceDir parameter.
    Set Method:
    _setSourceDir(self, value) - Property target used to set the sourceDir parameter.

    s3BucketUrl

    Command-line s3BucketUrl, target of sync.

    Get Method:
    _getS3BucketUrl(self) - Property target used to get the s3BucketUrl parameter.
    Set Method:
    _setS3BucketUrl(self, value) - Property target used to set the s3BucketUrl parameter.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.RestrictedContentList-class.html
CedarBackup2.util.RestrictedContentList
    Package CedarBackup2 :: Module util :: Class RestrictedContentList

    Class RestrictedContentList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RestrictedContentList
    

    Class representing a list containing only objects with certain values.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is among the valid values. We use a standard comparison, so pretty much anything can be in the list of valid values.

    The valuesDescr value will be used in exceptions, e.g. "Item must be one of values in VALID_ACTIONS" if valuesDescr is "VALID_ACTIONS".


    Note: This class doesn't make any attempt to trap for nonsensical arguments. All of the values in the values list should be of the same type (i.e. strings). Then, all list operations also need to be of that type (i.e. you should always insert or append just strings). If you mix types -- for instance lists and strings -- you will likely see AttributeError exceptions or other problems.
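    The documented behavior can be sketched as a minimal re-implementation. This is not the actual CedarBackup2 class, and the exception message format is approximated from the description above.

    ```python
    class RestrictedContentList(list):
        # Minimal sketch: append, insert and extend reject any item
        # that is not among the allowed values.
        def __init__(self, valuesList, valuesDescr, prefix=None):
            super(RestrictedContentList, self).__init__()
            self.valuesList = valuesList
            self.valuesDescr = valuesDescr
            self.prefix = "Item" if prefix is None else prefix

        def _check(self, item):
            if item not in self.valuesList:
                raise ValueError("%s must be one of the values in %s"
                                 % (self.prefix, self.valuesDescr))

        def append(self, item):
            self._check(item)
            super(RestrictedContentList, self).append(item)

        def insert(self, index, item):
            self._check(item)
            super(RestrictedContentList, self).insert(index, item)

        def extend(self, seq):
            for item in seq:
                self._check(item)
            super(RestrictedContentList, self).extend(seq)
    ```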

    Instance Methods
    new empty list
    __init__(self, valuesList, valuesDescr, prefix=None)
    Initializes a list restricted to containing certain values.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, valuesList, valuesDescr, prefix=None)
    (Constructor)

    source code 

    Initializes a list restricted to containing certain values.

    Parameters:
    • valuesList - List of valid values.
    • valuesDescr - Short string describing list of values.
    • prefix - Prefix to use in error messages (None results in prefix "Item")
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If item is not in the values list.
    Overrides: list.extend

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.cdwriter.MediaCapacity-class.html
CedarBackup2.writers.cdwriter.MediaCapacity
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class MediaCapacity

    Class MediaCapacity

    source code

    object --+
             |
            MediaCapacity
    

    Class encapsulating information about CD media capacity.

    Space used includes the required media lead-in (unless the disk is unused). Space available attempts to provide a picture of how many bytes are available for data storage, including any required lead-in.

    The boundaries value is either None (if multisession discs are not supported or if the disc has no boundaries) or in exactly the form provided by cdrecord -msinfo. It can be passed as-is to the IsoImage class.
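The totalCapacity and utilized properties listed below are simple derivations from the two byte counts. A sketch of those derivations (names mirror this class, but the code is illustrative, not the actual implementation):

```python
class MediaCapacitySketch:
    """Illustrative capacity object with the derived properties described below."""

    def __init__(self, bytesUsed, bytesAvailable, boundaries=None):
        self.bytesUsed = float(bytesUsed)            # space used on disc, in bytes
        self.bytesAvailable = float(bytesAvailable)  # space available on disc, in bytes
        self.boundaries = boundaries                 # tuple as from cdrecord -msinfo, or None

    @property
    def totalCapacity(self):
        # Total capacity is simply used plus available space.
        return self.bytesUsed + self.bytesAvailable

    @property
    def utilized(self):
        # Percentage of the total capacity which is in use; zero for unused media.
        if self.totalCapacity == 0:
            return 0.0
        return (self.bytesUsed / self.totalCapacity) * 100.0
```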

    Instance Methods [hide private]
     
    __init__(self, bytesUsed, bytesAvailable, boundaries)
    Initializes a capacity object.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    _getBytesUsed(self)
    Property target to get the bytes-used value.
    source code
     
    _getBytesAvailable(self)
    Property target to get the bytes-available value.
    source code
     
    _getBoundaries(self)
    Property target to get the boundaries tuple.
    source code
     
    _getTotalCapacity(self)
    Property target to get the total capacity (used + available).
    source code
     
    _getUtilized(self)
    Property target to get the percent of capacity which is utilized.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      bytesUsed
    Space used on disc, in bytes.
      bytesAvailable
    Space available on disc, in bytes.
      boundaries
    Session disc boundaries, in terms of ISO sectors.
      totalCapacity
    Total capacity of the disc, in bytes.
      utilized
    Percentage of the total capacity which is utilized.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, bytesUsed, bytesAvailable, boundaries)
    (Constructor)

    source code 

    Initializes a capacity object.

    Raises:
    • IndexError - If the boundaries tuple does not have enough elements.
    • ValueError - If the boundaries values are not integers.
    • ValueError - If the bytes used and available values are not floats.
    Overrides: object.__init__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    Property Details [hide private]

    bytesUsed

    Space used on disc, in bytes.

    Get Method:
    _getBytesUsed(self) - Property target to get the bytes-used value.

    bytesAvailable

    Space available on disc, in bytes.

    Get Method:
    _getBytesAvailable(self) - Property target to get the bytes-available value.

    boundaries

    Session disc boundaries, in terms of ISO sectors.

    Get Method:
    _getBoundaries(self) - Property target to get the boundaries tuple.

    totalCapacity

    Total capacity of the disc, in bytes.

    Get Method:
    _getTotalCapacity(self) - Property target to get the total capacity (used + available).

    utilized

    Percentage of the total capacity which is utilized.

    Get Method:
    _getUtilized(self) - Property target to get the percent of capacity which is utilized.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.config-module.html0000664000175000017500000012422012642035643026134 0ustar pronovicpronovic00000000000000 CedarBackup2.config
    Package CedarBackup2 :: Module config

    Module config

    source code

    Provides configuration-related objects.

    Summary

    Cedar Backup stores all of its configuration in an XML document typically called cback.conf. The standard location for this document is in /etc, but users can specify a different location if they want to.

The Config class is a Python object representation of a Cedar Backup XML configuration file. The representation is two-way: XML data can be used to create a Config object, and then changes to the object can be propagated back to disk. A Config object can even be used to create a configuration file from scratch programmatically.

    The Config class is intended to be the only Python-language interface to Cedar Backup configuration on disk. Cedar Backup will use the class as its internal representation of configuration, and applications external to Cedar Backup itself (such as a hypothetical third-party configuration tool written in Python or a third party extension module) should also use the class when they need to read and write configuration files.

    Backwards Compatibility

The configuration file format has changed between Cedar Backup 1.x and Cedar Backup 2.x. Any Cedar Backup 1.x configuration file is also a valid Cedar Backup 2.x configuration file. However, the reverse is not true: 2.x configuration files contain additional configuration that is not accepted by older versions of the software.

    XML Configuration Structure

    A Config object can either be created "empty", or can be created based on XML input (either in the form of a string or read in from a file on disk). Generally speaking, the XML input must result in a Config object which passes the validations laid out below in the Validation section.

An XML configuration file is composed of eight sections:

    • reference: specifies reference information about the file (author, revision, etc)
    • extensions: specifies mappings to Cedar Backup extensions (external code)
    • options: specifies global configuration options
    • peers: specifies the set of peers in a master's backup pool
    • collect: specifies configuration related to the collect action
    • stage: specifies configuration related to the stage action
    • store: specifies configuration related to the store action
    • purge: specifies configuration related to the purge action

Each section is represented by a class in this module, and the overall Config class is a composition of the various other classes.

    Any configuration section that is missing in the XML document (or has not been filled into an "empty" document) will just be set to None in the object representation. The same goes for individual fields within each configuration section. Keep in mind that the document might not be completely valid if some sections or fields aren't filled in - but that won't matter until validation takes place (see the Validation section below).

    Unicode vs. String Data

By default, all string data that comes out of XML documents in Python is unicode data (i.e. u"whatever"). This is fine for many things, but when it comes to filesystem paths, it can cause us some problems. We really want strings to be encoded in the filesystem encoding rather than being unicode. So, most elements in configuration which represent filesystem paths are converted to plain strings using util.encodePath. The main exception is the various absoluteExcludePath and relativeExcludePath lists. These are not converted, because they are generally only used for filtering, not for filesystem operations.

    Validation

    There are two main levels of validation in the Config class and its children. The first is field-level validation. Field-level validation comes into play when a given field in an object is assigned to or updated. We use Python's property functionality to enforce specific validations on field values, and in some places we even use customized list classes to enforce validations on list members. You should expect to catch a ValueError exception when making assignments to configuration class fields.

    The second level of validation is post-completion validation. Certain validations don't make sense until a document is fully "complete". We don't want these validations to apply all of the time, because it would make building up a document from scratch a real pain. For instance, we might have to do things in the right order to keep from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the Config.validate method. This method can be called at any time by a client, and will always be called immediately after creating a Config object from XML data and before exporting a Config object to XML. This way, we get decent ease-of-use but we also don't accept or emit invalid configuration files.

    The Config.validate implementation actually takes two passes to completely validate a configuration document. The first pass at validation is to ensure that the proper sections are filled into the document. There are default requirements, but the caller has the opportunity to override these defaults.

    The second pass at validation ensures that any filled-in section contains valid data. Any section which is not set to None is validated according to the rules for that section (see below).
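The property-based field-level validation described above can be sketched as follows. The class and field names here are chosen for illustration; the real configuration classes follow the same pattern with their own fields:

```python
class CollectDirSketch:
    """Illustrates assignment-time validation via a Python property."""

    VALID_COLLECT_MODES = ["daily", "weekly", "incr"]

    def __init__(self, collectMode=None):
        self._collectMode = None
        self.collectMode = collectMode  # routed through the property setter below

    @property
    def collectMode(self):
        return self._collectMode

    @collectMode.setter
    def collectMode(self, value):
        # Field-level validation: an invalid assignment raises ValueError
        # immediately, rather than waiting for post-completion validation.
        if value is not None and value not in self.VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % self.VALID_COLLECT_MODES)
        self._collectMode = value
```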

    Reference Validations

    No validations.

    Extensions Validations

    The list of actions may be either None or an empty list [] if desired. Each extended action must include a name, a module and a function. Then, an extended action must include either an index or dependency information. Which one is required depends on which order mode is configured.

    Options Validations

    All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose.

    Peers Validations

    Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.

    Collect Validations

    The target directory must be filled in. The collect mode, archive mode and ignore file are all optional. The list of absolute paths to exclude and patterns to exclude may be either None or an empty list [] if desired.

    Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent CollectConfig object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either None or an empty list [] if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the CollectConfig object to make the complete list for a given directory.

    Stage Validations

    The target directory must be filled in. There must be at least one peer (remote or local) between the two lists of peers. A list with no entries can be either None or an empty list [] if desired.

    If a set of peers is provided, this configuration completely overrides configuration in the peers configuration section, and the same validations apply.

    Store Validations

    The device type and drive speed are optional, and all other values are required (missing booleans will be set to defaults, which is OK).

    The image writer functionality in the writer module is supposed to be able to handle a device speed of None. Any caller which needs a "real" (non-None) value for the device type can use DEFAULT_DEVICE_TYPE, which is guaranteed to be sensible.

    Purge Validations

    The list of purge directories may be either None or an empty list [] if desired. All purge directories must contain a path and a retain days value.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      ActionDependencies
    Class representing dependencies associated with an extended action.
      ActionHook
    Class representing a hook associated with an action.
      PreActionHook
    Class representing a pre-action hook associated with an action.
      PostActionHook
Class representing a post-action hook associated with an action.
      ExtendedAction
    Class representing an extended action.
      CommandOverride
    Class representing a piece of Cedar Backup command override configuration.
      CollectFile
    Class representing a Cedar Backup collect file.
      CollectDir
    Class representing a Cedar Backup collect directory.
      PurgeDir
    Class representing a Cedar Backup purge directory.
      LocalPeer
    Class representing a Cedar Backup peer.
      RemotePeer
    Class representing a Cedar Backup peer.
      ReferenceConfig
    Class representing a Cedar Backup reference configuration.
      ExtensionsConfig
    Class representing Cedar Backup extensions configuration.
      OptionsConfig
    Class representing a Cedar Backup global options configuration.
      PeersConfig
    Class representing Cedar Backup global peer configuration.
      CollectConfig
    Class representing a Cedar Backup collect configuration.
      StageConfig
    Class representing a Cedar Backup stage configuration.
      StoreConfig
    Class representing a Cedar Backup store configuration.
      PurgeConfig
    Class representing a Cedar Backup purge configuration.
      Config
    Class representing a Cedar Backup XML configuration document.
      ByteQuantity
    Class representing a byte quantity.
      BlankBehavior
    Class representing optimized store-action media blanking behavior.
    Functions [hide private]
     
    readByteQuantity(parent, name)
    Read a byte size value from an XML document.
    source code
     
    addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity)
    Adds a text node as the next child of a parent, to contain a byte size.
    source code
    Variables [hide private]
      DEFAULT_DEVICE_TYPE = 'cdwriter'
    The default device type.
      DEFAULT_MEDIA_TYPE = 'cdrw-74'
    The default media type.
      VALID_DEVICE_TYPES = ['cdwriter', 'dvdwriter']
    List of valid device types.
      VALID_MEDIA_TYPES = ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80',...
    List of valid media types.
      VALID_COLLECT_MODES = ['daily', 'weekly', 'incr']
    List of valid collect modes.
      VALID_ARCHIVE_MODES = ['tar', 'targz', 'tarbz2']
    List of valid archive modes.
      VALID_ORDER_MODES = ['index', 'dependency']
    List of valid extension order modes.
      logger = logging.getLogger("CedarBackup2.log.config")
      VALID_CD_MEDIA_TYPES = ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80']
      VALID_DVD_MEDIA_TYPES = ['dvd+r', 'dvd+rw']
      VALID_COMPRESS_MODES = ['none', 'gzip', 'bzip2']
    List of valid compress modes.
      VALID_BLANK_MODES = ['daily', 'weekly']
      VALID_BYTE_UNITS = [0, 1, 2, 4]
      VALID_FAILURE_MODES = ['none', 'all', 'daily', 'weekly']
      REWRITABLE_MEDIA_TYPES = ['cdrw-74', 'cdrw-80', 'dvd+rw']
      ACTION_NAME_REGEX = '^[a-z0-9]*$'
      __package__ = 'CedarBackup2'
    Function Details [hide private]

    readByteQuantity(parent, name)

    source code 

    Read a byte size value from an XML document.

A byte size value is an interpreted string value. If the string value ends with "MB" or "GB", then the string before that is interpreted as megabytes or gigabytes. Otherwise, it is interpreted as bytes.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    ByteQuantity parsed from XML document
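The interpretation rule above can be sketched as a standalone parser. This is illustrative only: the real function reads an XML node and returns a ByteQuantity object, and the 1024-based multipliers here are an assumption:

```python
def parseByteQuantity(value):
    """Interpret a byte-size string: a trailing "MB" or "GB" scales the
    number; anything else is taken as a plain count of bytes."""
    value = value.strip()
    if value.upper().endswith("GB"):
        return float(value[:-2]) * 1024 * 1024 * 1024
    if value.upper().endswith("MB"):
        return float(value[:-2]) * 1024 * 1024
    return float(value)
```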

    addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity)

    source code 

    Adds a text node as the next child of a parent, to contain a byte size.

    If the byteQuantity is None, then the node will be created, but will be empty (i.e. will contain no text node child).

The size in bytes will be normalized. If it is larger than 1.0 GB, it will be shown in GB ("1.0 GB"). If it is larger than 1.0 MB, it will be shown in MB ("1.0 MB"). Otherwise, it will be shown in bytes ("423413").

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • byteQuantity - ByteQuantity object to put into the XML document
    Returns:
    Reference to the newly-created node.
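The normalization rules above can be sketched as a formatting helper (hypothetical; the exact output formatting and 1024-based units are assumptions here):

```python
def displayByteQuantity(byteQuantity):
    """Render a byte count per the rules above: >= 1.0 GB shown in GB,
    >= 1.0 MB shown in MB, otherwise shown as a raw byte count."""
    GB = 1024.0 ** 3
    MB = 1024.0 ** 2
    if byteQuantity >= GB:
        return "%.1f GB" % (byteQuantity / GB)
    if byteQuantity >= MB:
        return "%.1f MB" % (byteQuantity / MB)
    return "%d" % byteQuantity
```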

    Variables Details [hide private]

    VALID_MEDIA_TYPES

    List of valid media types.
    Value:
    ['cdr-74', 'cdrw-74', 'cdr-80', 'cdrw-80', 'dvd+r', 'dvd+rw']
    

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.cdwriter-module.html

    CedarBackup2.writers.cdwriter
    Package CedarBackup2 :: Package writers :: Module cdwriter

    Module cdwriter

    source code

    Provides functionality related to CD writer devices.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      MediaDefinition
    Class encapsulating information about CD media definitions.
      MediaCapacity
    Class encapsulating information about CD media capacity.
      CdWriter
    Class representing a device that knows how to write CD media.
      _ImageProperties
Simple value object to hold image properties for CdWriter.
    Variables [hide private]
      MEDIA_CDRW_74 = 1
    Constant representing 74-minute CD-RW media.
      MEDIA_CDR_74 = 2
    Constant representing 74-minute CD-R media.
      MEDIA_CDRW_80 = 3
    Constant representing 80-minute CD-RW media.
      MEDIA_CDR_80 = 4
    Constant representing 80-minute CD-R media.
      logger = logging.getLogger("CedarBackup2.log.writers.cdwriter")
      CDRECORD_COMMAND = ['cdrecord']
      EJECT_COMMAND = ['eject']
      MKISOFS_COMMAND = ['mkisofs']
      __package__ = 'CedarBackup2.writers'
    CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.capacity-module.html0000664000175000017500000000333312642035643030536 0ustar pronovicpronovic00000000000000 capacity

    Module capacity


    Classes

    CapacityConfig
    LocalConfig
    PercentageQuantity

    Functions

    executeAction

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.release-module.html

    release

    Module release


    Variables

    AUTHOR
    COPYRIGHT
    DATE
    EMAIL
    URL
    VERSION
    __package__

CedarBackup2-2.26.5/doc/interface/CedarBackup2.knapsack-module.html

    CedarBackup2.knapsack
    Package CedarBackup2 :: Module knapsack

    Module knapsack

    source code

    Provides the implementation for various knapsack algorithms.

    Knapsack algorithms are "fit" algorithms, used to take a set of "things" and decide on the optimal way to fit them into some container. The focus of this code is to fit files onto a disc, although the interface (in terms of item, item size and capacity size, with no units) is generic enough that it can be applied to items other than files.

    All of the algorithms implemented below assume that "optimal" means "use up as much of the disc's capacity as possible", but each produces slightly different results. For instance, the best fit and first fit algorithms tend to include fewer files than the worst fit and alternate fit algorithms, even if they use the disc space more efficiently.

    Usually, for a given set of circumstances, it will be obvious to a human which algorithm is the right one to use, based on trade-offs between number of files included and ideal space utilization. It's a little more difficult to do this programmatically. For Cedar Backup's purposes (i.e. trying to fit a small number of collect-directory tarfiles onto a disc), worst-fit is probably the best choice if the goal is to include as many of the collect directories as possible.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions [hide private]
     
    firstFit(items, capacity)
    Implements the first-fit knapsack algorithm.
    source code
     
    bestFit(items, capacity)
    Implements the best-fit knapsack algorithm.
    source code
     
    worstFit(items, capacity)
    Implements the worst-fit knapsack algorithm.
    source code
     
    alternateFit(items, capacity)
    Implements the alternate-fit knapsack algorithm.
    source code
    Variables [hide private]
      __package__ = None
    Function Details [hide private]

    firstFit(items, capacity)

    source code 

    Implements the first-fit knapsack algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

The items dictionary is indexed by item key, and each value also includes that key. This seems strange at first glance; it works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above
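The first-fit procedure described above can be sketched as follows. This is an illustration of the documented behavior, not the actual implementation; note that zero capacity fits nothing, matching the degenerate case above:

```python
def firstFit(items, capacity):
    """Walk (key, size) entries in arbitrary order, keeping each item
    that still fits, and stop early if capacity is met exactly."""
    if capacity <= 0:
        return ([], 0)  # degenerate case: nothing fits, even zero-sized items
    included = []
    used = 0
    for key, size in items.values():  # dict of (item, size) tuples, keyed on item
        if used + size <= capacity:
            included.append(key)
            used += size
            if used == capacity:
                break  # met capacity exactly; no need to look further
    return (included, used)
```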

    bestFit(items, capacity)

    source code 

    Implements the best-fit knapsack algorithm.

The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

The items dictionary is indexed by item key, and each value also includes that key. This seems strange at first glance; it works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above

    worstFit(items, capacity)

    source code 

    Implements the worst-fit knapsack algorithm.

The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

The items dictionary is indexed by item key, and each value also includes that key. This seems strange at first glance; it works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above

    alternateFit(items, capacity)

    source code 

    Implements the alternate-fit knapsack algorithm.

This algorithm (which I'm calling "alternate-fit" as in "alternate from one to the other") tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

    The "size" values in the items and capacity arguments must be comparable, but they are unitless from the perspective of this function. Zero-sized items and capacity are considered degenerate cases. If capacity is zero, no items fit, period, even if the items list contains zero-sized items.

The items dictionary is indexed by item key, and each value also includes that key. This seems strange at first glance; it works this way to facilitate easy sorting of the list on key if needed.

    The function assumes that the list of items may be used destructively, if needed. This avoids the overhead of having the function make a copy of the list, if this is not required. Callers should pass items.copy() if they do not want their version of the list modified.

    The function returns a list of chosen items and the unitless amount of capacity used by the items.

    Parameters:
    • items (dictionary, keyed on item, of (item, size) tuples, item as string and size as integer) - Items to operate on
    • capacity (integer) - Capacity of container to fit to
    Returns:
    Tuple (items, used) as described above
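The alternate-fit strategy described above can be sketched like this (illustrative only, matching the documented interface of a dictionary of (item, size) tuples):

```python
def alternateFit(items, capacity):
    """Sort entries smallest to largest, then alternately take from the
    small end and the large end, discarding anything that would exceed
    capacity."""
    if capacity <= 0:
        return ([], 0)  # degenerate case: nothing fits
    ordered = sorted(items.values(), key=lambda entry: entry[1])  # smallest first
    included, used = [], 0
    front, back = 0, len(ordered) - 1
    takeSmall = True
    while front <= back:
        if takeSmall:
            key, size = ordered[front]
            front += 1
        else:
            key, size = ordered[back]
            back -= 1
        if used + size <= capacity:
            included.append(key)
            used += size
        takeSmall = not takeSmall  # alternate between the two ends
    return (included, used)
```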

CedarBackup2-2.26.5/doc/interface/CedarBackup2.xmlutil-pysrc.html

    CedarBackup2.xmlutil
    Package CedarBackup2 :: Module xmlutil

    Source Code for Module CedarBackup2.xmlutil

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2006,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # Portions Copyright (c) 2000 Fourthought Inc, USA. 
     15  # All Rights Reserved. 
     16  # 
     17  # This program is free software; you can redistribute it and/or 
     18  # modify it under the terms of the GNU General Public License, 
     19  # Version 2, as published by the Free Software Foundation. 
     20  # 
     21  # This program is distributed in the hope that it will be useful, 
     22  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     23  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     24  # 
     25  # Copies of the GNU General Public License are available from 
     26  # the Free Software Foundation website, http://www.gnu.org/. 
     27  # 
     28  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     29  # 
     30  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     31  # Language : Python 2 (>= 2.7) 
     32  # Project  : Cedar Backup, release 2 
     33  # Purpose  : Provides general XML-related functionality. 
     34  # 
     35  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     36   
     37  ######################################################################## 
     38  # Module documentation 
     39  ######################################################################## 
     40   
     41  """ 
     42  Provides general XML-related functionality. 
     43   
     44  What I'm trying to do here is abstract much of the functionality that directly 
     45  accesses the DOM tree.  This is not so much to "protect" the other code from 
     46  the DOM, but to standardize the way it's used.  It will also help extension 
     47  authors write code that easily looks more like the rest of Cedar Backup. 
     48   
     49  @sort: createInputDom, createOutputDom, serializeDom, isElement, readChildren, 
     50         readFirstChild, readStringList, readString, readInteger, readBoolean, 
     51         addContainerNode, addStringNode, addIntegerNode, addBooleanNode, 
     52         TRUE_BOOLEAN_VALUES, FALSE_BOOLEAN_VALUES, VALID_BOOLEAN_VALUES 
     53   
     54  @var TRUE_BOOLEAN_VALUES: List of boolean values in XML representing C{True}. 
     55  @var FALSE_BOOLEAN_VALUES: List of boolean values in XML representing C{False}. 
     56  @var VALID_BOOLEAN_VALUES: List of valid boolean values in XML. 
     57   
     58  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     59  """ 
     60  # pylint: disable=C0111,C0103,W0511,W0104,W0106 
     61   
     62  ######################################################################## 
     63  # Imported modules 
     64  ######################################################################## 
     65   
     66  # System modules 
     67  import sys 
     68  import re 
     69  import logging 
     70  import codecs 
     71  from types import UnicodeType 
     72  from StringIO import StringIO 
     73   
     74  # XML-related modules 
     75  from xml.parsers.expat import ExpatError 
     76  from xml.dom.minidom import Node 
     77  from xml.dom.minidom import getDOMImplementation 
     78  from xml.dom.minidom import parseString 
     79   
     80   
     81  ######################################################################## 
     82  # Module-wide constants and variables 
     83  ######################################################################## 
     84   
     85  logger = logging.getLogger("CedarBackup2.log.xml") 
     86   
     87  TRUE_BOOLEAN_VALUES   = [ "Y", "y", ] 
     88  FALSE_BOOLEAN_VALUES  = [ "N", "n", ] 
     89  VALID_BOOLEAN_VALUES  = TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES 
     90   
     91   
     92  ######################################################################## 
     93  # Functions for creating and parsing DOM trees 
     94  ######################################################################## 
     95   
    
    96 -def createInputDom(xmlData, name="cb_config"):
    97 """ 98 Creates a DOM tree based on reading an XML string. 99 @param name: Assumed base name of the document (root node name). 100 @return: Tuple (xmlDom, parentNode) for the parsed document 101 @raise ValueError: If the document can't be parsed. 102 """ 103 try: 104 xmlDom = parseString(xmlData) 105 parentNode = readFirstChild(xmlDom, name) 106 return (xmlDom, parentNode) 107 except (IOError, ExpatError), e: 108 raise ValueError("Unable to parse XML document: %s" % e)
    109
    110 -def createOutputDom(name="cb_config"):
    111 """ 112 Creates a DOM tree used for writing an XML document. 113 @param name: Base name of the document (root node name). 114 @return: Tuple (xmlDom, parentNode) for the new document 115 """ 116 impl = getDOMImplementation() 117 xmlDom = impl.createDocument(None, name, None) 118 return (xmlDom, xmlDom.documentElement)
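Both factory functions are thin wrappers around C{xml.dom.minidom}. A minimal standalone sketch of the same round trip, using only the standard library (variable names here are illustrative, not part of the module):

```python
from xml.dom.minidom import getDOMImplementation, parseString

# Build an output document with a "cb_config" root, mirroring createOutputDom()
impl = getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
parentNode = xmlDom.documentElement

# Parse it back, mirroring createInputDom(): the parsed document's root
# element carries the same tag name
parsed = parseString(xmlDom.toxml())
root = parsed.documentElement
print(root.tagName)  # cb_config
```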
    119 120 121 ######################################################################## 122 # Functions for reading values out of XML documents 123 ######################################################################## 124
    125 -def isElement(node):
    126 """ 127 Returns True or False depending on whether the XML node is an element node. 128 """ 129 return node.nodeType == Node.ELEMENT_NODE
    130
    131 -def readChildren(parent, name):
    132 """ 133 Returns a list of nodes with a given name immediately beneath the 134 parent. 135 136 By "immediately beneath" the parent, we mean from among nodes that are 137 direct children of the passed-in parent node. 138 139 Underneath, we use the Python C{getElementsByTagName} method, which is 140 pretty cool, but which (surprisingly?) returns a list of all children 141 with a given name below the parent, at any level. We just prune that 142 list to include only children whose C{parentNode} matches the passed-in 143 parent. 144 145 @param parent: Parent node to search beneath. 146 @param name: Name of nodes to search for. 147 148 @return: List of child nodes with correct parent, or an empty list if 149 no matching nodes are found. 150 """ 151 lst = [] 152 if parent is not None: 153 result = parent.getElementsByTagName(name) 154 for entry in result: 155 if entry.parentNode is parent: 156 lst.append(entry) 157 return lst
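The pruning step matters because C{getElementsByTagName} really does search the whole subtree, not just direct children. A standalone sketch of the same technique, using minidom directly:

```python
from xml.dom.minidom import parseString

doc = parseString("<root><item>top</item><nested><item>deep</item></nested></root>")
root = doc.documentElement

# getElementsByTagName returns BOTH <item> elements, at any depth...
allItems = root.getElementsByTagName("item")

# ...so keep only those whose parentNode is the root itself, as readChildren() does
directItems = [e for e in allItems if e.parentNode is root]

print(len(allItems), len(directItems))  # 2 1
```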
    158
    159 -def readFirstChild(parent, name):
    160 """ 161 Returns the first child with a given name immediately beneath the parent. 162 163 By "immediately beneath" the parent, we mean from among nodes that are 164 direct children of the passed-in parent node. 165 166 @param parent: Parent node to search beneath. 167 @param name: Name of node to search for. 168 169 @return: First properly-named child of parent, or C{None} if no matching nodes are found. 170 """ 171 result = readChildren(parent, name) 172 if result is None or result == []: 173 return None 174 return result[0]
    175
    176 -def readStringList(parent, name):
    177 """ 178 Returns a list of the string contents associated with nodes with a given 179 name immediately beneath the parent. 180 181 By "immediately beneath" the parent, we mean from among nodes that are 182 direct children of the passed-in parent node. 183 184 First, we find all of the nodes using L{readChildren}, and then we 185 retrieve the "string contents" of each of those nodes. The returned list 186 has one entry per matching node. We assume that string contents of a 187 given node belong to the first C{TEXT_NODE} child of that node. Nodes 188 which have no C{TEXT_NODE} children are not represented in the returned 189 list. 190 191 @param parent: Parent node to search beneath. 192 @param name: Name of node to search for. 193 194 @return: List of strings as described above, or C{None} if no matching nodes are found. 195 """ 196 lst = [] 197 result = readChildren(parent, name) 198 for entry in result: 199 if entry.hasChildNodes(): 200 for child in entry.childNodes: 201 if child.nodeType == Node.TEXT_NODE: 202 lst.append(child.nodeValue) 203 break 204 if lst == []: 205 lst = None 206 return lst
    207
    208 -def readString(parent, name):
    209 """ 210 Returns string contents of the first child with a given name immediately 211 beneath the parent. 212 213 By "immediately beneath" the parent, we mean from among nodes that are 214 direct children of the passed-in parent node. We assume that string 215 contents of a given node belong to the first C{TEXT_NODE} child of that 216 node. 217 218 @param parent: Parent node to search beneath. 219 @param name: Name of node to search for. 220 221 @return: String contents of node or C{None} if no matching nodes are found. 222 """ 223 result = readStringList(parent, name) 224 if result is None: 225 return None 226 return result[0]
    227
    228 -def readInteger(parent, name):
    229 """ 230 Returns integer contents of the first child with a given name immediately 231 beneath the parent. 232 233 By "immediately beneath" the parent, we mean from among nodes that are 234 direct children of the passed-in parent node. 235 236 @param parent: Parent node to search beneath. 237 @param name: Name of node to search for. 238 239 @return: Integer contents of node or C{None} if no matching nodes are found. 240 @raise ValueError: If the string at the location can't be converted to an integer. 241 """ 242 result = readString(parent, name) 243 if result is None: 244 return None 245 else: 246 return int(result)
    247
    248 -def readLong(parent, name):
    249 """ 250 Returns long integer contents of the first child with a given name immediately 251 beneath the parent. 252 253 By "immediately beneath" the parent, we mean from among nodes that are 254 direct children of the passed-in parent node. 255 256 @param parent: Parent node to search beneath. 257 @param name: Name of node to search for. 258 259 @return: Long integer contents of node or C{None} if no matching nodes are found. 260 @raise ValueError: If the string at the location can't be converted to an integer. 261 """ 262 result = readString(parent, name) 263 if result is None: 264 return None 265 else: 266 return long(result)
    267
    268 -def readFloat(parent, name):
    269 """ 270 Returns float contents of the first child with a given name immediately 271 beneath the parent. 272 273 By "immediately beneath" the parent, we mean from among nodes that are 274 direct children of the passed-in parent node. 275 276 @param parent: Parent node to search beneath. 277 @param name: Name of node to search for. 278 279 @return: Float contents of node or C{None} if no matching nodes are found. 280 @raise ValueError: If the string at the location can't be converted to a 281 float value. 282 """ 283 result = readString(parent, name) 284 if result is None: 285 return None 286 else: 287 return float(result)
    288
    289 -def readBoolean(parent, name):
    290 """ 291 Returns boolean contents of the first child with a given name immediately 292 beneath the parent. 293 294 By "immediately beneath" the parent, we mean from among nodes that are 295 direct children of the passed-in parent node. 296 297 The string value of the node must be one of the values in L{VALID_BOOLEAN_VALUES}. 298 299 @param parent: Parent node to search beneath. 300 @param name: Name of node to search for. 301 302 @return: Boolean contents of node or C{None} if no matching nodes are found. 303 @raise ValueError: If the string at the location can't be converted to a boolean. 304 """ 305 result = readString(parent, name) 306 if result is None: 307 return None 308 else: 309 if result in TRUE_BOOLEAN_VALUES: 310 return True 311 elif result in FALSE_BOOLEAN_VALUES: 312 return False 313 else: 314 raise ValueError("Boolean values must be one of %s." % VALID_BOOLEAN_VALUES)
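The boolean reader accepts only the four single-letter values from L{VALID_BOOLEAN_VALUES}. A standalone sketch of the same mapping logic (the C{parseBoolean} name is illustrative; the module itself applies this inside C{readBoolean}):

```python
TRUE_BOOLEAN_VALUES = ["Y", "y"]
FALSE_BOOLEAN_VALUES = ["N", "n"]

def parseBoolean(text):
    # Mirrors readBoolean(): None passes through, anything else must be Y/y/N/n
    if text is None:
        return None
    if text in TRUE_BOOLEAN_VALUES:
        return True
    if text in FALSE_BOOLEAN_VALUES:
        return False
    raise ValueError("Boolean values must be one of %s." % (TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES))

print(parseBoolean("Y"), parseBoolean("n"))  # True False
```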
    315 316 317 ######################################################################## 318 # Functions for writing values into XML documents 319 ######################################################################## 320
    321 -def addContainerNode(xmlDom, parentNode, nodeName):
    322 """ 323 Adds a container node as the next child of a parent node. 324 325 @param xmlDom: DOM tree as from C{impl.createDocument()}. 326 @param parentNode: Parent node to create child for. 327 @param nodeName: Name of the new container node. 328 329 @return: Reference to the newly-created node. 330 """ 331 containerNode = xmlDom.createElement(nodeName) 332 parentNode.appendChild(containerNode) 333 return containerNode
    334
    335 -def addStringNode(xmlDom, parentNode, nodeName, nodeValue):
    336 """ 337 Adds a text node as the next child of a parent, to contain a string. 338 339 If the C{nodeValue} is None, then the node will be created, but will be 340 empty (i.e. will contain no text node child). 341 342 @param xmlDom: DOM tree as from C{impl.createDocument()}. 343 @param parentNode: Parent node to create child for. 344 @param nodeName: Name of the new container node. 345 @param nodeValue: The value to put into the node. 346 347 @return: Reference to the newly-created node. 348 """ 349 containerNode = addContainerNode(xmlDom, parentNode, nodeName) 350 if nodeValue is not None: 351 textNode = xmlDom.createTextNode(nodeValue) 352 containerNode.appendChild(textNode) 353 return containerNode
    354
    355 -def addIntegerNode(xmlDom, parentNode, nodeName, nodeValue):
    356 """ 357 Adds a text node as the next child of a parent, to contain an integer. 358 359 If the C{nodeValue} is None, then the node will be created, but will be 360 empty (i.e. will contain no text node child). 361 362 The integer will be converted to a string using "%d". The result will be 363 added to the document via L{addStringNode}. 364 365 @param xmlDom: DOM tree as from C{impl.createDocument()}. 366 @param parentNode: Parent node to create child for. 367 @param nodeName: Name of the new container node. 368 @param nodeValue: The value to put into the node. 369 370 @return: Reference to the newly-created node. 371 """ 372 if nodeValue is None: 373 return addStringNode(xmlDom, parentNode, nodeName, None) 374 else: 375 return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long
    376
    377 -def addLongNode(xmlDom, parentNode, nodeName, nodeValue):
    378 """ 379 Adds a text node as the next child of a parent, to contain a long integer. 380 381 If the C{nodeValue} is None, then the node will be created, but will be 382 empty (i.e. will contain no text node child). 383 384 The integer will be converted to a string using "%d". The result will be 385 added to the document via L{addStringNode}. 386 387 @param xmlDom: DOM tree as from C{impl.createDocument()}. 388 @param parentNode: Parent node to create child for. 389 @param nodeName: Name of the new container node. 390 @param nodeValue: The value to put into the node. 391 392 @return: Reference to the newly-created node. 393 """ 394 if nodeValue is None: 395 return addStringNode(xmlDom, parentNode, nodeName, None) 396 else: 397 return addStringNode(xmlDom, parentNode, nodeName, "%d" % nodeValue) # %d works for both int and long
    398
    399 -def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue):
    400 """ 401 Adds a text node as the next child of a parent, to contain a boolean. 402 403 If the C{nodeValue} is None, then the node will be created, but will be 404 empty (i.e. will contain no text node child). 405 406 Boolean C{True}, or anything else interpreted as C{True} by Python, will 407 be converted to a string "Y". Anything else will be converted to a 408 string "N". The result is added to the document via L{addStringNode}. 409 410 @param xmlDom: DOM tree as from C{impl.createDocument()}. 411 @param parentNode: Parent node to create child for. 412 @param nodeName: Name of the new container node. 413 @param nodeValue: The value to put into the node. 414 415 @return: Reference to the newly-created node. 416 """ 417 if nodeValue is None: 418 return addStringNode(xmlDom, parentNode, nodeName, None) 419 else: 420 if nodeValue: 421 return addStringNode(xmlDom, parentNode, nodeName, "Y") 422 else: 423 return addStringNode(xmlDom, parentNode, nodeName, "N")
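Taken together, the C{add*} helpers boil down to C{createElement}, C{createTextNode}, and C{appendChild}. A minimal standalone sketch of the same pattern, without the CedarBackup2 wrappers:

```python
from xml.dom.minidom import getDOMImplementation

impl = getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
root = xmlDom.documentElement

# Equivalent of addContainerNode() followed by addStringNode()
container = xmlDom.createElement("options")
root.appendChild(container)
node = xmlDom.createElement("working_dir")
node.appendChild(xmlDom.createTextNode("/tmp"))
container.appendChild(node)

print(xmlDom.toxml())
```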
    424 425 426 ######################################################################## 427 # Functions for serializing DOM trees 428 ######################################################################## 429
    430 -def serializeDom(xmlDom, indent=3):
    431 """ 432 Serializes a DOM tree and returns the result in a string. 433 @param xmlDom: XML DOM tree to serialize 434 @param indent: Number of spaces to indent, as an integer 435 @return: String form of DOM tree, pretty-printed. 436 """ 437 xmlBuffer = StringIO() 438 serializer = Serializer(xmlBuffer, "UTF-8", indent=indent) 439 serializer.serialize(xmlDom) 440 xmlData = xmlBuffer.getvalue() 441 xmlBuffer.close() 442 return xmlData
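For comparison, the standard library's own pretty-printer does roughly what C{serializeDom} does, though with different whitespace handling; the custom C{Serializer} below exists to avoid the old PyXML dependency, not because minidom lacks the feature. A sketch using only minidom:

```python
from xml.dom.minidom import parseString

doc = parseString("<cb_config><options><working_dir>/tmp</working_dir></options></cb_config>")

# Three-space indent, matching this module's default indent=3
pretty = doc.toprettyxml(indent="   ")
print(pretty)
```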
    443
    444 -class Serializer(object):
445 446 """ 447 XML serializer class. 448 449 This is a customized serializer that I hacked together based on what I found 450 in the PyXML distribution. Basically, around release 2.7.0, the only reason 451 I still had around a dependency on PyXML was for the PrettyPrint 452 functionality, and that seemed pointless. So, I stripped the PrettyPrint 453 code out of PyXML and hacked bits of it off until it did just what I needed 454 and no more. 455 456 This code started out being called PrintVisitor, but I decided it makes more 457 sense just calling it a serializer. I've made nearly all of the methods 458 private, and I've added a new high-level serialize() method rather than 459 having clients call C{visit()}. 460 461 Anyway, as a consequence of my hacking with it, this can't quite be called a 462 complete XML serializer any more. I ripped out support for HTML and XHTML, 463 and there is also no longer any support for namespaces (which I took out 464 because this dragged along a lot of extra code, and Cedar Backup doesn't use 465 namespaces). However, everything else should pretty much work as expected. 466 467 @copyright: This code, prior to customization, was part of the PyXML 468 codebase, and before that was part of the 4DOM suite developed by 469 Fourthought, Inc. In its original form, it was Copyright (c) 2000 470 Fourthought Inc, USA; All Rights Reserved. 471 """ 472
    473 - def __init__(self, stream=sys.stdout, encoding="UTF-8", indent=3):
    474 """ 475 Initialize a serializer. 476 @param stream: Stream to write output to. 477 @param encoding: Output encoding. 478 @param indent: Number of spaces to indent, as an integer 479 """ 480 self.stream = stream 481 self.encoding = encoding 482 self._indent = indent * " " 483 self._depth = 0 484 self._inText = 0
    485
    486 - def serialize(self, xmlDom):
    487 """ 488 Serialize the passed-in XML document. 489 @param xmlDom: XML DOM tree to serialize 490 @raise ValueError: If there's an unknown node type in the document. 491 """ 492 self._visit(xmlDom) 493 self.stream.write("\n")
    494
    495 - def _write(self, text):
    496 obj = _encodeText(text, self.encoding) 497 self.stream.write(obj) 498 return
    499
    500 - def _tryIndent(self):
    501 if not self._inText and self._indent: 502 self._write('\n' + self._indent*self._depth) 503 return
    504
    505 - def _visit(self, node):
    506 """ 507 @raise ValueError: If there's an unknown node type in the document. 508 """ 509 if node.nodeType == Node.ELEMENT_NODE: 510 return self._visitElement(node) 511 512 elif node.nodeType == Node.ATTRIBUTE_NODE: 513 return self._visitAttr(node) 514 515 elif node.nodeType == Node.TEXT_NODE: 516 return self._visitText(node) 517 518 elif node.nodeType == Node.CDATA_SECTION_NODE: 519 return self._visitCDATASection(node) 520 521 elif node.nodeType == Node.ENTITY_REFERENCE_NODE: 522 return self._visitEntityReference(node) 523 524 elif node.nodeType == Node.ENTITY_NODE: 525 return self._visitEntity(node) 526 527 elif node.nodeType == Node.PROCESSING_INSTRUCTION_NODE: 528 return self._visitProcessingInstruction(node) 529 530 elif node.nodeType == Node.COMMENT_NODE: 531 return self._visitComment(node) 532 533 elif node.nodeType == Node.DOCUMENT_NODE: 534 return self._visitDocument(node) 535 536 elif node.nodeType == Node.DOCUMENT_TYPE_NODE: 537 return self._visitDocumentType(node) 538 539 elif node.nodeType == Node.DOCUMENT_FRAGMENT_NODE: 540 return self._visitDocumentFragment(node) 541 542 elif node.nodeType == Node.NOTATION_NODE: 543 return self._visitNotation(node) 544 545 # It has a node type, but we don't know how to handle it 546 raise ValueError("Unknown node type: %s" % repr(node))
    547
    548 - def _visitNodeList(self, node, exclude=None):
    549 for curr in node: 550 curr is not exclude and self._visit(curr) 551 return
    552
    553 - def _visitNamedNodeMap(self, node):
    554 for item in node.values(): 555 self._visit(item) 556 return
    557
    558 - def _visitAttr(self, node):
    559 self._write(' ' + node.name) 560 value = node.value 561 text = _translateCDATA(value, self.encoding) 562 text, delimiter = _translateCDATAAttr(text) 563 self.stream.write("=%s%s%s" % (delimiter, text, delimiter)) 564 return
    565
    566 - def _visitProlog(self):
    567 self._write("<?xml version='1.0' encoding='%s'?>" % (self.encoding or 'utf-8')) 568 self._inText = 0 569 return
    570
    571 - def _visitDocument(self, node):
    572 self._visitProlog() 573 node.doctype and self._visitDocumentType(node.doctype) 574 self._visitNodeList(node.childNodes, exclude=node.doctype) 575 return
    576
    577 - def _visitDocumentFragment(self, node):
    578 self._visitNodeList(node.childNodes) 579 return
    580
    581 - def _visitElement(self, node):
    582 self._tryIndent() 583 self._write('<%s' % node.tagName) 584 for attr in node.attributes.values(): 585 self._visitAttr(attr) 586 if len(node.childNodes): 587 self._write('>') 588 self._depth = self._depth + 1 589 self._visitNodeList(node.childNodes) 590 self._depth = self._depth - 1 591 not (self._inText) and self._tryIndent() 592 self._write('</%s>' % node.tagName) 593 else: 594 self._write('/>') 595 self._inText = 0 596 return
    597
    598 - def _visitText(self, node):
599 text = node.data 600 if self._indent: 601 text = text.strip() 602 if text: 603 text = _translateCDATA(text, self.encoding) 604 self.stream.write(text) 605 self._inText = 1 606 return
    607
    608 - def _visitDocumentType(self, doctype):
    609 if not doctype.systemId and not doctype.publicId: return 610 self._tryIndent() 611 self._write('<!DOCTYPE %s' % doctype.name) 612 if doctype.systemId and '"' in doctype.systemId: 613 system = "'%s'" % doctype.systemId 614 else: 615 system = '"%s"' % doctype.systemId 616 if doctype.publicId and '"' in doctype.publicId: 617 # We should probably throw an error 618 # Valid characters: <space> | <newline> | <linefeed> | 619 # [a-zA-Z0-9] | [-'()+,./:=?;!*#@$_%] 620 public = "'%s'" % doctype.publicId 621 else: 622 public = '"%s"' % doctype.publicId 623 if doctype.publicId and doctype.systemId: 624 self._write(' PUBLIC %s %s' % (public, system)) 625 elif doctype.systemId: 626 self._write(' SYSTEM %s' % system) 627 if doctype.entities or doctype.notations: 628 self._write(' [') 629 self._depth = self._depth + 1 630 self._visitNamedNodeMap(doctype.entities) 631 self._visitNamedNodeMap(doctype.notations) 632 self._depth = self._depth - 1 633 self._tryIndent() 634 self._write(']>') 635 else: 636 self._write('>') 637 self._inText = 0 638 return
    639
    640 - def _visitEntity(self, node):
    641 """Visited from a NamedNodeMap in DocumentType""" 642 self._tryIndent() 643 self._write('<!ENTITY %s' % (node.nodeName)) 644 node.publicId and self._write(' PUBLIC %s' % node.publicId) 645 node.systemId and self._write(' SYSTEM %s' % node.systemId) 646 node.notationName and self._write(' NDATA %s' % node.notationName) 647 self._write('>') 648 return
    649
    650 - def _visitNotation(self, node):
    651 """Visited from a NamedNodeMap in DocumentType""" 652 self._tryIndent() 653 self._write('<!NOTATION %s' % node.nodeName) 654 node.publicId and self._write(' PUBLIC %s' % node.publicId) 655 node.systemId and self._write(' SYSTEM %s' % node.systemId) 656 self._write('>') 657 return
    658
    659 - def _visitCDATASection(self, node):
    660 self._tryIndent() 661 self._write('<![CDATA[%s]]>' % (node.data)) 662 self._inText = 0 663 return
    664
    665 - def _visitComment(self, node):
    666 self._tryIndent() 667 self._write('<!--%s-->' % (node.data)) 668 self._inText = 0 669 return
    670
    671 - def _visitEntityReference(self, node):
    672 self._write('&%s;' % node.nodeName) 673 self._inText = 1 674 return
    675
    676 - def _visitProcessingInstruction(self, node):
    677 self._tryIndent() 678 self._write('<?%s %s?>' % (node.target, node.data)) 679 self._inText = 0 680 return
    681
    682 -def _encodeText(text, encoding):
683 """ 684 @copyright: This code, prior to customization, was part of the PyXML 685 codebase, and before that was part of the 4DOM suite developed by 686 Fourthought, Inc. In its original form, it was attributed to Martin v. 687 Löwis and was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved. 688 """ 689 encoder = codecs.lookup(encoding)[0] # encode,decode,reader,writer 690 if not isinstance(text, UnicodeType): 691 text = unicode(text, "utf-8") 692 return encoder(text)[0] # result,size
    693
    694 -def _translateCDATAAttr(characters):
695 """ 696 Handles normalization and some intelligence about quoting. 697 698 @copyright: This code, prior to customization, was part of the PyXML 699 codebase, and before that was part of the 4DOM suite developed by 700 Fourthought, Inc. In its original form, it was Copyright (c) 2000 701 Fourthought Inc, USA; All Rights Reserved. 702 """ 703 if not characters: 704 return '', "'" 705 if "'" in characters: 706 delimiter = '"' 707 new_chars = re.sub('"', '&quot;', characters) 708 else: 709 delimiter = "'" 710 new_chars = re.sub("'", '&apos;', characters) 711 #FIXME: There's more to normalization 712 #Convert attribute new-lines to character entity 713 # characters is possibly shorter than new_chars (no entities) 714 if "\n" in characters: 715 new_chars = re.sub('\n', '&#10;', new_chars) 716 return new_chars, delimiter
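The quoting rule is: prefer single-quote delimiters, switch to double quotes when the value itself contains a single quote, escape double quotes in that case, and convert embedded newlines to character entities. A standalone sketch of the same logic (the C{quoteAttr} name is illustrative):

```python
import re

def quoteAttr(characters):
    # Mirrors _translateCDATAAttr(): pick a delimiter the value doesn't use
    if not characters:
        return '', "'"
    if "'" in characters:
        delimiter = '"'
        new_chars = re.sub('"', '&quot;', characters)
    else:
        delimiter = "'"
        new_chars = re.sub("'", '&apos;', characters)
    # Attribute newlines become character entities
    if "\n" in characters:
        new_chars = re.sub('\n', '&#10;', new_chars)
    return new_chars, delimiter

print(quoteAttr("it's"))  # ("it's", '"')
```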
    717 718 #Note: Unicode object only for now
    719 -def _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0):
720 """ 721 @copyright: This code, prior to customization, was part of the PyXML 722 codebase, and before that was part of the 4DOM suite developed by 723 Fourthought, Inc. In its original form, it was Copyright (c) 2000 724 Fourthought Inc, USA; All Rights Reserved. 725 """ 726 CDATA_CHAR_PATTERN = re.compile('[&<]|]]>') 727 CHAR_TO_ENTITY = { '&': '&amp;', '<': '&lt;', ']]>': ']]&gt;', } 728 ILLEGAL_LOW_CHARS = '[\x01-\x08\x0B-\x0C\x0E-\x1F]' 729 ILLEGAL_HIGH_CHARS = '\xEF\xBF[\xBE\xBF]' 730 XML_ILLEGAL_CHAR_PATTERN = re.compile('%s|%s'%(ILLEGAL_LOW_CHARS, ILLEGAL_HIGH_CHARS)) 731 if not characters: 732 return '' 733 if not markupSafe: 734 if CDATA_CHAR_PATTERN.search(characters): 735 new_string = CDATA_CHAR_PATTERN.subn(lambda m, d=CHAR_TO_ENTITY: d[m.group()], characters)[0] 736 else: 737 new_string = characters 738 if prev_chars[-2:] == ']]' and characters[0] == '>': 739 new_string = '&gt;' + new_string[1:] 740 else: 741 new_string = characters 742 #Note: use decimal char entity rep because some browsers are broken 743 #FIXME: This will bomb for high characters. Should, for instance, detect 744 #The UTF-8 for 0xFFFE and put out &#xFFFE; 745 if XML_ILLEGAL_CHAR_PATTERN.search(new_string): 746 new_string = XML_ILLEGAL_CHAR_PATTERN.subn(lambda m: '&#%i;' % ord(m.group()), new_string)[0] 747 new_string = _encodeText(new_string, encoding) 748 return new_string
    749


    Source Code for Module CedarBackup2.testutil

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2006,2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Provides unit-testing utilities. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides unit-testing utilities. 
     40   
     41  These utilities are kept here, separate from util.py, because they provide 
     42  common functionality that I do not want exported "publicly" once Cedar Backup 
     43  is installed on a system.  They are only used for unit testing, and are only 
     44  useful within the source tree. 
     45   
     46  Many of these functions are in here because they are "good enough" for unit 
     47  test work but are not robust enough to be real public functions.  Others (like 
     48  L{removedir}) do what they are supposed to, but I don't want responsibility for 
     49  making them available to others. 
     50   
     51  @sort: findResources, commandAvailable, 
     52         buildPath, removedir, extractTar, changeFileAge, 
     53         getMaskAsMode, getLogin, failUnlessAssignRaises, runningAsRoot, 
     54         platformDebian, platformMacOsX, platformCygwin, platformWindows, 
     55         platformHasEcho, platformSupportsLinks, platformSupportsPermissions, 
     56         platformRequiresBinaryRead 
     57   
     58  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     59  """ 
     60   
     61   
     62  ######################################################################## 
     63  # Imported modules 
     64  ######################################################################## 
     65   
     66  import sys 
     67  import os 
     68  import tarfile 
     69  import time 
     70  import getpass 
     71  import random 
     72  import string # pylint: disable=W0402 
     73  import platform 
     74  import logging 
     75  from StringIO import StringIO 
     76   
     77  from CedarBackup2.util import encodePath, executeCommand 
     78  from CedarBackup2.config import Config, OptionsConfig 
     79  from CedarBackup2.customize import customizeOverrides 
     80  from CedarBackup2.cli import setupPathResolver 
     81   
     82   
     83  ######################################################################## 
     84  # Public functions 
     85  ######################################################################## 
     86   
     87  ############################## 
     88  # setupDebugLogger() function 
     89  ############################## 
     90   
    
    91 -def setupDebugLogger():
    92 """ 93 Sets up a screen logger for debugging purposes. 94 95 Normally, the CLI functionality configures the logger so that 96 things get written to the right place. However, for debugging 97 it's sometimes nice to just get everything -- debug information 98 and output -- dumped to the screen. This function takes care 99 of that. 100 """ 101 logger = logging.getLogger("CedarBackup2") 102 logger.setLevel(logging.DEBUG) # let the logger see all messages 103 formatter = logging.Formatter(fmt="%(message)s") 104 handler = logging.StreamHandler(stream=sys.stdout) 105 handler.setFormatter(formatter) 106 handler.setLevel(logging.DEBUG) 107 logger.addHandler(handler)
    108 109 110 ################# 111 # setupOverrides 112 ################# 113
    114 -def setupOverrides():
115 """ 116 Set up any platform-specific overrides that might be required. 117 118 When packages are built, this is done manually (hardcoded) in customize.py 119 and the overrides are set up in cli.cli(). This way, no runtime checks need 120 to be done. This is safe, because the package maintainer knows exactly 121 which platform (Debian or not) the package is being built for. 122 123 Unit tests are different, because they might be run anywhere. So, we 124 attempt to make a guess about platform using platformDebian(), and use that 125 to set up the custom overrides so that platform-specific unit tests continue 126 to work. 127 """ 128 config = Config() 129 config.options = OptionsConfig() 130 if platformDebian(): 131 customizeOverrides(config, platform="debian") 132 else: 133 customizeOverrides(config, platform="standard") 134 setupPathResolver(config)
    135 136 137 ########################### 138 # findResources() function 139 ########################### 140
    141 -def findResources(resources, dataDirs):
    142 """ 143 Returns a dictionary of locations for various resources. 144 @param resources: List of required resources. 145 @param dataDirs: List of data directories to search within for resources. 146 @return: Dictionary mapping resource name to resource path. 147 @raise Exception: If some resource cannot be found. 148 """ 149 mapping = { } 150 for resource in resources: 151 for resourceDir in dataDirs: 152 path = os.path.join(resourceDir, resource) 153 if os.path.exists(path): 154 mapping[resource] = path 155 break 156 else: 157 raise Exception("Unable to find resource [%s]." % resource) 158 return mapping
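The loop above relies on Python's C{for}/C{else}: the C{else} clause runs only when the inner loop finishes without hitting C{break}, i.e. when no directory contained the resource. A standalone sketch of the same lookup pattern (directory and file names here are illustrative):

```python
import os
import shutil
import tempfile

# Two "data directories", with the resource present only in the second
dir1 = tempfile.mkdtemp()
dir2 = tempfile.mkdtemp()
open(os.path.join(dir2, "cback.conf"), "w").close()

mapping = {}
for resource in ["cback.conf"]:
    for resourceDir in [dir1, dir2]:
        path = os.path.join(resourceDir, resource)
        if os.path.exists(path):
            mapping[resource] = path
            break
    else:
        # No break happened: the resource was not found in any directory
        raise Exception("Unable to find resource [%s]." % resource)

print(sorted(mapping))  # ['cback.conf']

# Clean up the temporary directories
shutil.rmtree(dir1)
shutil.rmtree(dir2)
```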
    159 160 161 ############################## 162 # commandAvailable() function 163 ############################## 164
    165 -def commandAvailable(command):
166 """ 167 Indicates whether a command is available on $PATH somewhere. 168 This should work on both Windows and UNIX platforms. 169 @param command: Command to search for 170 @return: Boolean true/false depending on whether command is available. 171 """ 172 if os.environ.has_key("PATH"): 173 for path in os.environ["PATH"].split(os.pathsep): 174 if os.path.exists(os.path.join(path, command)): 175 return True 176 return False
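Note that C{$PATH} must be split on C{os.pathsep} (C{":"} on UNIX, C{";"} on Windows), not C{os.sep}. On modern Python the same check is available in the standard library as C{shutil.which}; a sketch of the equivalent:

```python
import shutil

# shutil.which() (Python 3.3+) performs the same $PATH scan that
# commandAvailable() does, splitting the variable on os.pathsep
def commandAvailable(command):
    return shutil.which(command) is not None

print(commandAvailable("this-command-does-not-exist-xyz"))  # False
```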
    177 178 179 ####################### 180 # buildPath() function 181 ####################### 182
    183 -def buildPath(components):
    184 """ 185 Builds a complete path from a list of components. 186 For instance, constructs C{"/a/b/c"} from C{["/a", "b", "c",]}. 187 @param components: List of components. 188 @returns: String path constructed from components. 189 @raise ValueError: If a path cannot be encoded properly. 190 """ 191 path = components[0] 192 for component in components[1:]: 193 path = os.path.join(path, component) 194 return encodePath(path)
    195 196 197 ####################### 198 # removedir() function 199 ####################### 200
    201 -def removedir(tree):
    202 """ 203 Recursively removes an entire directory. 204 This is basically taken from an example on python.org. 205 @param tree: Directory tree to remove. 206 @raise ValueError: If a path cannot be encoded properly. 207 """ 208 tree = encodePath(tree) 209 for root, dirs, files in os.walk(tree, topdown=False): 210 for name in files: 211 path = os.path.join(root, name) 212 if os.path.islink(path): 213 os.remove(path) 214 elif os.path.isfile(path): 215 os.remove(path) 216 for name in dirs: 217 path = os.path.join(root, name) 218 if os.path.islink(path): 219 os.remove(path) 220 elif os.path.isdir(path): 221 os.rmdir(path) 222 os.rmdir(tree)
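The key detail above is C{topdown=False}: walking bottom-up guarantees every directory is empty by the time C{os.rmdir()} reaches it. A trimmed Python 3 sketch of the same walk (shutil.rmtree would also do the job, but hides the idiom):

```python
import os
import tempfile

def removedir(tree):
    """Recursively remove `tree`, walking bottom-up so every directory
    is already empty when os.rmdir() reaches it."""
    for root, dirs, files in os.walk(tree, topdown=False):
        for name in files:
            os.remove(os.path.join(root, name))
        for name in dirs:
            path = os.path.join(root, name)
            if os.path.islink(path):
                os.remove(path)   # remove the link, not its target
            else:
                os.rmdir(path)
    os.rmdir(tree)

# Build a small tree and destroy it
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "a", "b"))
open(os.path.join(base, "a", "b", "file.txt"), "w").close()
removedir(base)
```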
    223 224 225 ######################## 226 # extractTar() function 227 ######################## 228
    229 -def extractTar(tmpdir, filepath):
    230 """ 231 Extracts the indicated tar file to the indicated tmpdir. 232 @param tmpdir: Temp directory to extract to. 233 @param filepath: Path to tarfile to extract. 234 @raise ValueError: If a path cannot be encoded properly. 235 """ 236 # pylint: disable=E1101 237 tmpdir = encodePath(tmpdir) 238 filepath = encodePath(filepath) 239 tar = tarfile.open(filepath) 240 try: 241 tar.format = tarfile.GNU_FORMAT 242 except AttributeError: 243 tar.posix = False 244 for tarinfo in tar: 245 tar.extract(tarinfo, tmpdir)
    246 247 248 ########################### 249 # changeFileAge() function 250 ########################### 251
    252 -def changeFileAge(filename, subtract=None):
    253 """ 254 Changes a file age using the C{os.utime} function. 255 256 @note: Some platforms don't seem to be able to set an age precisely. As a 257 result, whereas we might have intended to set an age of 86400 seconds, we 258 actually get an age of 86399.375 seconds. When util.calculateFileAge() 259 looks at the file, it calculates an age of 0.999992766204 days, which 260 then gets truncated down to zero whole days. The tests get very confused. 261 To work around this, I always subtract off one additional second as a fudge 262 factor. That way, the file age will be I{at least} as old as requested 263 later on. 264 265 @param filename: File to operate on. 266 @param subtract: Number of seconds to subtract from the current time. 267 @raise ValueError: If a path cannot be encoded properly. 268 """ 269 filename = encodePath(filename) 270 newTime = time.time() - 1 271 if subtract is not None: 272 newTime -= subtract 273 os.utime(filename, (newTime, newTime))
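The one-second fudge factor is worth seeing in action. A Python 3 sketch (minus the encodePath() call, which belongs to Cedar Backup):

```python
import os
import time
import tempfile

def change_file_age(filename, subtract=None):
    """Rewind a file's atime/mtime.  The extra one-second fudge factor
    described above guarantees the file ends up *at least* as old as
    requested, even on platforms that round timestamps."""
    new_time = time.time() - 1
    if subtract is not None:
        new_time -= subtract
    os.utime(filename, (new_time, new_time))

# Make a fresh temp file look one whole day old
handle, path = tempfile.mkstemp()
os.close(handle)
change_file_age(path, subtract=86400)
age_days = (time.time() - os.stat(path).st_mtime) / 86400.0
```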
    274 275 276 ########################### 277 # getMaskAsMode() function 278 ########################### 279
    280 -def getMaskAsMode():
    281 """ 282 Returns the user's current umask inverted to a mode. 283 A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775. 284 @return: Umask converted to a mode, as an integer. 285 """ 286 umask = os.umask(0777) 287 os.umask(umask) 288 return int(~umask & 0777) # invert, then use only lower bytes
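The read-and-restore dance is needed because C{os.umask()} both sets a new mask and returns the old one. A Python 3 sketch (octal literals are spelled C{0o777} there):

```python
import os

def get_mask_as_mode():
    """os.umask() both sets a mask and returns the previous one, so we
    set a throwaway mask, capture the old value, and restore it."""
    umask = os.umask(0o777)    # temporarily set, capturing the old mask
    os.umask(umask)            # put the original mask back immediately
    return ~umask & 0o777      # invert, keep only the permission bits

# The docstring's example: mask 002 corresponds to mode 775
assert ~0o002 & 0o777 == 0o775
```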
    289 290 291 ###################### 292 # getLogin() function 293 ###################### 294
    295 -def getLogin():
    296 """ 297 Returns the name of the currently-logged in user. This might fail under 298 some circumstances - but if it does, our tests would fail anyway. 299 """ 300 return getpass.getuser()
    301 302 303 ############################ 304 # randomFilename() function 305 ############################ 306
    307 -def randomFilename(length, prefix=None, suffix=None):
    308 """ 309 Generates a random filename with the given length. 310 @param length: Length of filename. 311 @return: Random filename. 312 """ 313 characters = [None] * length 314 for i in xrange(length): 315 characters[i] = random.choice(string.ascii_uppercase) 316 if prefix is None: 317 prefix = "" 318 if suffix is None: 319 suffix = "" 320 return "%s%s%s" % (prefix, "".join(characters), suffix)
    321 322 323 #################################### 324 # failUnlessAssignRaises() function 325 #################################### 326
    327 -def failUnlessAssignRaises(testCase, exception, obj, prop, value):
    328 """ 329 Equivalent of C{failUnlessRaises}, but used for property assignments instead. 330 331 It's nice to be able to use C{failUnlessRaises} to check that a method call 332 raises the exception that you expect. Unfortunately, this method can't be 333 used to check Python property assignments, even though these property 334 assignments are actually implemented underneath as methods. 335 336 This function (which can be easily called by unit test classes) provides an 337 easy way to wrap the assignment checks. It's not pretty, or as intuitive as 338 the original check it's modeled on, but it does work. 339 340 Let's assume you make this method call:: 341 342 testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath) 343 344 If you do this, a test case failure will be raised unless the assignment:: 345 346 collectDir.absolutePath = absolutePath 347 348 fails with a C{ValueError} exception. The failure message differentiates 349 between the case where no exception was raised and the case where the wrong 350 exception was raised. 351 352 @note: Internally, the C{missed} and C{instead} variables are used rather 353 than directly calling C{testCase.fail} upon noticing a problem because the 354 act of "failure" itself generates an exception that would be caught by the 355 general C{except} clause. 356 357 @param testCase: PyUnit test case object (i.e. self). 358 @param exception: Exception that is expected to be raised. 359 @param obj: Object whose property is to be assigned to. 360 @param prop: Name of the property, as a string. 361 @param value: Value that is to be assigned to the property. 362 363 @see: C{unittest.TestCase.failUnlessRaises} 364 """ 365 missed = False 366 instead = None 367 try: 368 exec "obj.%s = value" % prop # pylint: disable=W0122 369 missed = True 370 except exception: pass 371 except Exception, e: 372 instead = e 373 if missed: 374 testCase.fail("Expected assignment to raise %s, but got no exception." 
% (exception.__name__)) 375 if instead is not None: 376 testCase.fail("Expected assignment to raise %s, but got %s instead." % (exception.__name__, instead.__class__.__name__))
    377 378 379 ########################### 380 # captureOutput() function 381 ########################### 382
    383 -def captureOutput(c):
    384 """ 385 Captures the output (stdout, stderr) of a function or a method. 386 387 Some of our functions don't do anything other than just print output. We 388 need a way to test these functions (at least nominally) but we don't want 389 any of the output spoiling the test suite output. 390 391 This function just creates a dummy file descriptor that can be used as a 392 target by the callable function, rather than C{stdout} or C{stderr}. 393 394 @note: This method assumes that C{callable} doesn't take any arguments 395 besides keyword argument C{fd} to specify the file descriptor. 396 397 @param c: Callable function or method. 398 399 @return: Output of function, as one big string. 400 """ 401 fd = StringIO() 402 c(fd=fd) 403 result = fd.getvalue() 404 fd.close() 405 return result
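A usage sketch in Python 3, where the C{StringIO} module has moved into C{io}; C{print_banner} is a hypothetical print-only function of the kind this helper targets:

```python
from io import StringIO   # the old StringIO module moved into io

def capture_output(c):
    """Call `c` with a StringIO as its `fd` keyword argument and return
    everything it wrote, as one big string."""
    fd = StringIO()
    c(fd=fd)
    result = fd.getvalue()
    fd.close()
    return result

# Hypothetical print-only function of the kind the helper targets
def print_banner(fd):
    fd.write("Cedar Backup\n")
    fd.write("Software done right.\n")

output = capture_output(print_banner)
```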
    406 407 408 ######################### 409 # _isPlatform() function 410 ######################### 411
    412 -def _isPlatform(name):
    413 """ 414 Returns boolean indicating whether we're running on the indicated platform. 415 @param name: Platform name to check, currently one of "windows" or "macosx" 416 """ 417 if name == "windows": 418 return platform.platform(True, True).startswith("Windows") 419 elif name == "macosx": 420 return sys.platform == "darwin" 421 elif name == "debian": 422 return platform.platform(False, False).find("debian") > 0 423 elif name == "cygwin": 424 return platform.platform(True, True).startswith("CYGWIN") 425 else: 426 raise ValueError("Unknown platform [%s]." % name)
    427 428 429 ############################ 430 # platformDebian() function 431 ############################ 432
    433 -def platformDebian():
    434 """ 435 Returns boolean indicating whether this is the Debian platform. 436 """ 437 return _isPlatform("debian")
    438 439 440 ############################ 441 # platformMacOsX() function 442 ############################ 443
    444 -def platformMacOsX():
    445 """ 446 Returns boolean indicating whether this is the Mac OS X platform. 447 """ 448 return _isPlatform("macosx")
    449 450 451 ############################# 452 # platformWindows() function 453 ############################# 454
    455 -def platformWindows():
    456 """ 457 Returns boolean indicating whether this is the Windows platform. 458 """ 459 return _isPlatform("windows")
    460 461 462 ############################ 463 # platformCygwin() function 464 ############################ 465
    466 -def platformCygwin():
    467 """ 468 Returns boolean indicating whether this is the Cygwin platform. 469 """ 470 return _isPlatform("cygwin")
    471 472 473 ################################### 474 # platformSupportsLinks() function 475 ################################### 476 484 485 486 ######################################### 487 # platformSupportsPermissions() function 488 ######################################### 489
    490 -def platformSupportsPermissions():
    491 """ 492 Returns boolean indicating whether the platform supports UNIX-style file permissions. 493 Some platforms, like Windows, do not support permissions, and tests need to take 494 this into account. 495 """ 496 return not platformWindows()
    497 498 499 ######################################## 500 # platformRequiresBinaryRead() function 501 ######################################## 502
    503 -def platformRequiresBinaryRead():
    504 """ 505 Returns boolean indicating whether the platform requires binary reads. 506 Some platforms, like Windows, require a special flag to read binary data 507 from files. 508 """ 509 return platformWindows()
    510 511 512 ############################# 513 # platformHasEcho() function 514 ############################# 515
    516 -def platformHasEcho():
    517 """ 518 Returns boolean indicating whether the platform has a sensible echo command. 519 On some platforms, like Windows, echo doesn't really work for tests. 520 """ 521 return not platformWindows()
    522 523 524 ########################### 525 # runningAsRoot() function 526 ########################### 527
    528 -def runningAsRoot():
    529 """ 530 Returns boolean indicating whether the effective user id is root. 531 This is always true on platforms that have no concept of root, like Windows. 532 """ 533 if platformWindows(): 534 return True 535 else: 536 return os.geteuid() == 0
    537 538 539 ############################## 540 # availableLocales() function 541 ############################## 542
    543 -def availableLocales():
    544 """ 545 Returns a list of available locales on the system 546 @return: List of string locale names 547 """ 548 locales = [] 549 output = executeCommand(["locale"], [ "-a", ], returnOutput=True, ignoreStderr=True)[1] 550 for line in output: 551 locales.append(line.rstrip()) 552 return locales
    553 554 555 #################################### 556 # hexFloatLiteralAllowed() function 557 #################################### 558
    559 -def hexFloatLiteralAllowed():
    560 """ 561 Indicates whether hex float literals are allowed by the interpreter. 562 563 As far back as 2004, some Python documentation indicated that octal and hex 564 notation applied only to integer literals. However, prior to Python 2.5, it 565 was legal to construct a float with an argument like 0xAC on some platforms. 566 This check provides an indication of whether the current interpreter 567 supports that behavior. 568 569 This check exists so that unit tests can continue to test the same thing as 570 always for pre-2.5 interpreters (i.e. making sure backwards compatibility 571 doesn't break) while still continuing to work for later interpreters. 572 573 The returned value is True if hex float literals are allowed, False otherwise. 574 """ 575 if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 5] and not platformWindows(): 576 return True 577 return False
    578

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.cdwriter-pysrc.html
    CedarBackup2.writers.cdwriter

    Source Code for Module CedarBackup2.writers.cdwriter

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Cedar Backup, release 2 
      30  # Purpose  : Provides functionality related to CD writer devices. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides functionality related to CD writer devices. 
      40   
      41  @sort: MediaDefinition, MediaCapacity, CdWriter, 
      42         MEDIA_CDRW_74, MEDIA_CDR_74, MEDIA_CDRW_80, MEDIA_CDR_80 
      43   
      44  @var MEDIA_CDRW_74: Constant representing 74-minute CD-RW media. 
      45  @var MEDIA_CDR_74: Constant representing 74-minute CD-R media. 
      46  @var MEDIA_CDRW_80: Constant representing 80-minute CD-RW media. 
      47  @var MEDIA_CDR_80: Constant representing 80-minute CD-R media. 
      48   
      49  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      50  """ 
      51   
      52  ######################################################################## 
      53  # Imported modules 
      54  ######################################################################## 
      55   
      56  # System modules 
      57  import os 
      58  import re 
      59  import logging 
      60  import tempfile 
      61  import time 
      62   
      63  # Cedar Backup modules 
      64  from CedarBackup2.util import resolveCommand, executeCommand 
      65  from CedarBackup2.util import convertSize, displayBytes, encodePath 
      66  from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES 
      67  from CedarBackup2.writers.util import validateDevice, validateScsiId, validateDriveSpeed 
      68  from CedarBackup2.writers.util import IsoImage 
      69   
      70   
      71  ######################################################################## 
      72  # Module-wide constants and variables 
      73  ######################################################################## 
      74   
      75  logger = logging.getLogger("CedarBackup2.log.writers.cdwriter") 
      76   
      77  MEDIA_CDRW_74  = 1 
      78  MEDIA_CDR_74   = 2 
      79  MEDIA_CDRW_80  = 3 
      80  MEDIA_CDR_80   = 4 
      81   
      82  CDRECORD_COMMAND = [ "cdrecord", ] 
      83  EJECT_COMMAND    = [ "eject", ] 
      84  MKISOFS_COMMAND  = [ "mkisofs", ] 
    
    85 86 87 ######################################################################## 88 # MediaDefinition class definition 89 ######################################################################## 90 91 -class MediaDefinition(object):
    92 93 """ 94 Class encapsulating information about CD media definitions. 95 96 The following media types are accepted: 97 98 - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) 99 - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) 100 - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) 101 - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) 102 103 Note that all of the capacities associated with a media definition are in 104 terms of ISO sectors (C{util.ISO_SECTOR_SIZE)}. 105 106 @sort: __init__, mediaType, rewritable, initialLeadIn, leadIn, capacity 107 """ 108
    109 - def __init__(self, mediaType):
    110 """ 111 Creates a media definition for the indicated media type. 112 @param mediaType: Type of the media, as discussed above. 113 @raise ValueError: If the media type is unknown or unsupported. 114 """ 115 self._mediaType = None 116 self._rewritable = False 117 self._initialLeadIn = 0. 118 self._leadIn = 0.0 119 self._capacity = 0.0 120 self._setValues(mediaType)
    121
    122 - def _setValues(self, mediaType):
    123 """ 124 Sets values based on media type. 125 @param mediaType: Type of the media, as discussed above. 126 @raise ValueError: If the media type is unknown or unsupported. 127 """ 128 if mediaType not in [MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80, MEDIA_CDRW_80]: 129 raise ValueError("Invalid media type %d." % mediaType) 130 self._mediaType = mediaType 131 self._initialLeadIn = 11400.0 # per cdrecord's documentation 132 self._leadIn = 6900.0 # per cdrecord's documentation 133 if self._mediaType == MEDIA_CDR_74: 134 self._rewritable = False 135 self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) 136 elif self._mediaType == MEDIA_CDRW_74: 137 self._rewritable = True 138 self._capacity = convertSize(650.0, UNIT_MBYTES, UNIT_SECTORS) 139 elif self._mediaType == MEDIA_CDR_80: 140 self._rewritable = False 141 self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS) 142 elif self._mediaType == MEDIA_CDRW_80: 143 self._rewritable = True 144 self._capacity = convertSize(700.0, UNIT_MBYTES, UNIT_SECTORS)
    145
    146 - def _getMediaType(self):
    147 """ 148 Property target used to get the media type value. 149 """ 150 return self._mediaType
    151
    152 - def _getRewritable(self):
    153 """ 154 Property target used to get the rewritable flag value. 155 """ 156 return self._rewritable
    157
    158 - def _getInitialLeadIn(self):
    159 """ 160 Property target used to get the initial lead-in value. 161 """ 162 return self._initialLeadIn
    163
    164 - def _getLeadIn(self):
    165 """ 166 Property target used to get the lead-in value. 167 """ 168 return self._leadIn
    169
    170 - def _getCapacity(self):
    171 """ 172 Property target used to get the capacity value. 173 """ 174 return self._capacity
    175 176 mediaType = property(_getMediaType, None, None, doc="Configured media type.") 177 rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") 178 initialLeadIn = property(_getInitialLeadIn, None, None, doc="Initial lead-in required for first image written to media.") 179 leadIn = property(_getLeadIn, None, None, doc="Lead-in required on successive images written to media.") 180 capacity = property(_getCapacity, None, None, doc="Total capacity of the media before any required lead-in.")
    181
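All of the capacities above are stored in ISO sectors. Assuming the standard 2048-byte ISO-9660 sector size (the C{util.ISO_SECTOR_SIZE} the class docstring refers to), the conversion works out as follows; the helper name is hypothetical:

```python
ISO_SECTOR_SIZE = 2048.0   # bytes per ISO-9660 sector (assumed constant)

def mb_to_sectors(mbytes):
    """Mirror the convertSize(n, UNIT_MBYTES, UNIT_SECTORS) calls above:
    megabytes -> bytes -> 2048-byte sectors."""
    return (mbytes * 1024.0 * 1024.0) / ISO_SECTOR_SIZE

capacity_74 = mb_to_sectors(650.0)   # 650 MB media, in sectors
capacity_80 = mb_to_sectors(700.0)   # 700 MB media, in sectors
```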
    182 183 ######################################################################## 184 # MediaCapacity class definition 185 ######################################################################## 186 187 -class MediaCapacity(object):
    188 189 """ 190 Class encapsulating information about CD media capacity. 191 192 Space used includes the required media lead-in (unless the disk is unused). 193 Space available attempts to provide a picture of how many bytes are 194 available for data storage, including any required lead-in. 195 196 The boundaries value is either C{None} (if multisession discs are not 197 supported or if the disc has no boundaries) or in exactly the form provided 198 by C{cdrecord -msinfo}. It can be passed as-is to the C{IsoImage} class. 199 200 @sort: __init__, bytesUsed, bytesAvailable, boundaries, totalCapacity, utilized 201 """ 202
    203 - def __init__(self, bytesUsed, bytesAvailable, boundaries):
    204 """ 205 Initializes a capacity object. 206 @raise IndexError: If the boundaries tuple does not have enough elements. 207 @raise ValueError: If the boundaries values are not integers. 208 @raise ValueError: If the bytes used and available values are not floats. 209 """ 210 self._bytesUsed = float(bytesUsed) 211 self._bytesAvailable = float(bytesAvailable) 212 if boundaries is None: 213 self._boundaries = None 214 else: 215 self._boundaries = (int(boundaries[0]), int(boundaries[1]))
    216
    217 - def __str__(self):
    218 """ 219 Informal string representation for class instance. 220 """ 221 return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized)
    222
    223 - def _getBytesUsed(self):
    224 """ 225 Property target to get the bytes-used value. 226 """ 227 return self._bytesUsed
    228
    229 - def _getBytesAvailable(self):
    230 """ 231 Property target to get the bytes-available value. 232 """ 233 return self._bytesAvailable
    234
    235 - def _getBoundaries(self):
    236 """ 237 Property target to get the boundaries tuple. 238 """ 239 return self._boundaries
    240
    241 - def _getTotalCapacity(self):
    242 """ 243 Property target to get the total capacity (used + available). 244 """ 245 return self.bytesUsed + self.bytesAvailable
    246
    247 - def _getUtilized(self):
    248 """ 249 Property target to get the percent of capacity which is utilized. 250 """ 251 if self.bytesAvailable <= 0.0: 252 return 100.0 253 elif self.bytesUsed <= 0.0: 254 return 0.0 255 return (self.bytesUsed / self.totalCapacity) * 100.0
    256 257 bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.") 258 bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.") 259 boundaries = property(_getBoundaries, None, None, doc="Session disc boundaries, in terms of ISO sectors.") 260 totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.") 261 utilized = property(_getUtilized, None, None, "Percentage of the total capacity which is utilized.")
    262
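The C{utilized} property's arithmetic, pulled out as a standalone Python 3 sketch with the same edge cases (function name hypothetical):

```python
def utilized(bytes_used, bytes_available):
    """Percentage of total capacity in use, mirroring the property above:
    no space available reads 100%, nothing used reads 0%."""
    if bytes_available <= 0.0:
        return 100.0
    elif bytes_used <= 0.0:
        return 0.0
    # total capacity is simply used + available
    return (bytes_used / (bytes_used + bytes_available)) * 100.0

# e.g. 250 MB used on a 650 MB disc, i.e. 400 MB still available
pct = utilized(250.0 * 1024 * 1024, 400.0 * 1024 * 1024)
```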
    263 264 ######################################################################## 265 # _ImageProperties class definition 266 ######################################################################## 267 268 -class _ImageProperties(object):
    269 """ 270 Simple value object to hold image properties for C{DvdWriter}. 271 """
    272 - def __init__(self):
    273 self.newDisc = False 274 self.tmpdir = None 275 self.mediaLabel = None 276 self.entries = None # dict mapping path to graft point
    277
    278 279 ######################################################################## 280 # CdWriter class definition 281 ######################################################################## 282 283 -class CdWriter(object):
    284 285 ###################### 286 # Class documentation 287 ###################### 288 289 """ 290 Class representing a device that knows how to write CD media. 291 292 Summary 293 ======= 294 295 This is a class representing a device that knows how to write CD media. It 296 provides common operations for the device, such as ejecting the media, 297 writing an ISO image to the media, or checking for the current media 298 capacity. It also provides a place to store device attributes, such as 299 whether the device supports writing multisession discs, etc. 300 301 This class is implemented in terms of the C{eject} and C{cdrecord} 302 programs, both of which should be available on most UN*X platforms. 303 304 Image Writer Interface 305 ====================== 306 307 The following methods make up the "image writer" interface shared 308 with other kinds of writers (such as DVD writers):: 309 310 __init__ 311 initializeImage() 312 addImageEntry() 313 writeImage() 314 setImageNewDisc() 315 retrieveCapacity() 316 getEstimatedImageSize() 317 318 Only these methods will be used by other Cedar Backup functionality 319 that expects a compatible image writer. 320 321 The media attribute is also assumed to be available. 322 323 Media Types 324 =========== 325 326 This class knows how to write to two different kinds of media, represented 327 by the following constants: 328 329 - C{MEDIA_CDR_74}: 74-minute CD-R media (650 MB capacity) 330 - C{MEDIA_CDRW_74}: 74-minute CD-RW media (650 MB capacity) 331 - C{MEDIA_CDR_80}: 80-minute CD-R media (700 MB capacity) 332 - C{MEDIA_CDRW_80}: 80-minute CD-RW media (700 MB capacity) 333 334 Most hardware can read and write both 74-minute and 80-minute CD-R and 335 CD-RW media. Some older drives may only be able to write CD-R media. 336 The difference between the two is that CD-RW media can be rewritten 337 (erased), while CD-R media cannot be. 338 339 I do not support any other configurations for a couple of reasons. 
The 340 first is that I've never tested any other kind of media. The second is 341 that anything other than 74 or 80 minute is apparently non-standard. 342 343 Device Attributes vs. Media Attributes 344 ====================================== 345 346 A given writer instance has two different kinds of attributes associated 347 with it, which I call device attributes and media attributes. Device 348 attributes are things which can be determined without looking at the 349 media, such as whether the drive supports writing multisession disks or 350 has a tray. Media attributes are attributes which vary depending on the 351 state of the media, such as the remaining capacity on a disc. In 352 general, device attributes are available via instance variables and are 353 constant over the life of an object, while media attributes can be 354 retrieved through method calls. 355 356 Talking to Hardware 357 =================== 358 359 This class needs to talk to CD writer hardware in two different ways: 360 through cdrecord to actually write to the media, and through the 361 filesystem to do things like open and close the tray. 362 363 Historically, CdWriter has interacted with cdrecord using the scsiId 364 attribute, and with most other utilities using the device attribute. 365 This changed somewhat in Cedar Backup 2.9.0. 366 367 When Cedar Backup was first written, the only way to interact with 368 cdrecord was by using a SCSI device id. IDE devices were mapped to 369 pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" 370 arrived, and it became common to see C{ATA:1,0,0} or C{ATAPI:0,0,0} as a 371 way to address IDE hardware. By late 2006, C{ATA} and C{ATAPI} had 372 apparently been deprecated in favor of just addressing the IDE device 373 directly by name, i.e. C{/dev/cdrw}. 374 375 Because of this latest development, it no longer makes sense to require a 376 CdWriter to be created with a SCSI id -- there might not be one. 
So, the 377 passed-in SCSI id is now optional. Also, there is now a hardwareId 378 attribute. This attribute is filled in with either the SCSI id (if 379 provided) or the device (otherwise). The hardware id is the value that 380 will be passed to cdrecord in the C{dev=} argument. 381 382 Testing 383 ======= 384 385 It's rather difficult to test this code in an automated fashion, even if 386 you have access to a physical CD writer drive. It's even more difficult 387 to test it if you are running on some build daemon (think of a Debian 388 autobuilder) which can't be expected to have any hardware or any media 389 that you could write to. 390 391 Because of this, much of the implementation below is in terms of static 392 methods that are supposed to take defined actions based on their 393 arguments. Public methods are then implemented in terms of a series of 394 calls to simplistic static methods. This way, we can test as much as 395 possible of the functionality via testing the static methods, while 396 hoping that if the static methods are called appropriately, things will 397 work properly. It's not perfect, but it's much better than no testing at 398 all. 399 400 @sort: __init__, isRewritable, _retrieveProperties, retrieveCapacity, _getBoundaries, 401 _calculateCapacity, openTray, closeTray, refreshMedia, writeImage, 402 _blankMedia, _parsePropertiesOutput, _parseBoundariesOutput, 403 _buildOpenTrayArgs, _buildCloseTrayArgs, _buildPropertiesArgs, 404 _buildBoundariesArgs, _buildBlankArgs, _buildWriteArgs, 405 device, scsiId, hardwareId, driveSpeed, media, deviceType, deviceVendor, 406 deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject, 407 initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize 408 """ 409 410 ############## 411 # Constructor 412 ############## 413
    414 - def __init__(self, device, scsiId=None, driveSpeed=None, 415 mediaType=MEDIA_CDRW_74, noEject=False, 416 refreshMediaDelay=0, ejectDelay=0, unittest=False):
    417 """ 418 Initializes a CD writer object. 419 420 The current user must have write access to the device at the time the 421 object is instantiated, or an exception will be thrown. However, no 422 media-related validation is done, and in fact there is no need for any 423 media to be in the drive until one of the other media attribute-related 424 methods is called. 425 426 The various instance variables such as C{deviceType}, C{deviceVendor}, 427 etc. might be C{None}, if we're unable to parse this specific information 428 from the C{cdrecord} output. This information is just for reference. 429 430 The SCSI id is optional, but the device path is required. If the SCSI id 431 is passed in, then the hardware id attribute will be taken from the SCSI 432 id. Otherwise, the hardware id will be taken from the device. 433 434 If cdrecord improperly detects whether your writer device has a tray and 435 can be safely opened and closed, then pass in C{noEject=True}. This 436 will override the properties and the device will never be ejected. 437 438 @note: The C{unittest} parameter should never be set to C{True} 439 outside of Cedar Backup code. It is intended for use in unit testing 440 Cedar Backup internals and has no other sensible purpose. 441 442 @param device: Filesystem device associated with this writer. 443 @type device: Absolute path to a filesystem device, i.e. C{/dev/cdrw} 444 445 @param scsiId: SCSI id for the device (optional). 446 @type scsiId: If provided, SCSI id in the form C{[<method>:]scsibus,target,lun} 447 448 @param driveSpeed: Speed at which the drive writes. 449 @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default. 450 451 @param mediaType: Type of the media that is assumed to be in the drive. 452 @type mediaType: One of the valid media types as discussed above. 453 454 @param noEject: Overrides properties to indicate that the device does not support eject. 
455 @type noEject: Boolean true/false 456 457 @param refreshMediaDelay: Refresh media delay to use, if any 458 @type refreshMediaDelay: Number of seconds, an integer >= 0 459 460 @param ejectDelay: Eject delay to use, if any 461 @type ejectDelay: Number of seconds, an integer >= 0 462 463 @param unittest: Turns off certain validations, for use in unit testing. 464 @type unittest: Boolean true/false 465 466 @raise ValueError: If the device is not valid for some reason. 467 @raise ValueError: If the SCSI id is not in a valid form. 468 @raise ValueError: If the drive speed is not an integer >= 1. 469 @raise IOError: If device properties could not be read for some reason. 470 """ 471 self._image = None # optionally filled in by initializeImage() 472 self._device = validateDevice(device, unittest) 473 self._scsiId = validateScsiId(scsiId) 474 self._driveSpeed = validateDriveSpeed(driveSpeed) 475 self._media = MediaDefinition(mediaType) 476 self._noEject = noEject 477 self._refreshMediaDelay = refreshMediaDelay 478 self._ejectDelay = ejectDelay 479 if not unittest: 480 (self._deviceType, 481 self._deviceVendor, 482 self._deviceId, 483 self._deviceBufferSize, 484 self._deviceSupportsMulti, 485 self._deviceHasTray, 486 self._deviceCanEject) = self._retrieveProperties()
    487 488 489 ############# 490 # Properties 491 ############# 492
    493 - def _getDevice(self):
    494 """ 495 Property target used to get the device value. 496 """ 497 return self._device
    498
    499 - def _getScsiId(self):
    500 """ 501 Property target used to get the SCSI id value. 502 """ 503 return self._scsiId
    504
    505 - def _getHardwareId(self):
    506 """ 507 Property target used to get the hardware id value. 508 """ 509 if self._scsiId is None: 510 return self._device 511 return self._scsiId
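The fallback rule implemented by `_getHardwareId` (use the SCSI id when one was supplied, otherwise address the writer by its device path) can be sketched as a standalone function; the function name here is hypothetical, not part of the class:

```python
def hardware_id(device, scsi_id=None):
    """Prefer the SCSI id when one was supplied; otherwise the writer is
    addressed by its filesystem device path (mirrors _getHardwareId)."""
    if scsi_id is None:
        return device
    return scsi_id
```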
    512
    513 - def _getDriveSpeed(self):
    514 """ 515 Property target used to get the drive speed. 516 """ 517 return self._driveSpeed
    518
    519 - def _getMedia(self):
    520 """ 521 Property target used to get the media description. 522 """ 523 return self._media
    524
    525 - def _getDeviceType(self):
    526 """ 527 Property target used to get the device type. 528 """ 529 return self._deviceType
    530
    531 - def _getDeviceVendor(self):
    532 """ 533 Property target used to get the device vendor. 534 """ 535 return self._deviceVendor
    536
    537 - def _getDeviceId(self):
    538 """ 539 Property target used to get the device id. 540 """ 541 return self._deviceId
    542
    543 - def _getDeviceBufferSize(self):
    544 """ 545 Property target used to get the device buffer size. 546 """ 547 return self._deviceBufferSize
    548
    549 - def _getDeviceSupportsMulti(self):
    550 """ 551 Property target used to get the device-support-multi flag. 552 """ 553 return self._deviceSupportsMulti
    554
    555 - def _getDeviceHasTray(self):
    556 """ 557 Property target used to get the device-has-tray flag. 558 """ 559 return self._deviceHasTray
    560
    561 - def _getDeviceCanEject(self):
    562 """ 563 Property target used to get the device-can-eject flag. 564 """ 565 return self._deviceCanEject
    566
    567 - def _getRefreshMediaDelay(self):
    568 """ 569 Property target used to get the configured refresh media delay, in seconds. 570 """ 571 return self._refreshMediaDelay
    572
    573 - def _getEjectDelay(self):
    574 """ 575 Property target used to get the configured eject delay, in seconds. 576 """ 577 return self._ejectDelay
    578 579 device = property(_getDevice, None, None, doc="Filesystem device name for this writer.") 580 scsiId = property(_getScsiId, None, None, doc="SCSI id for the device, in the form C{[<method>:]scsibus,target,lun}.") 581 hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer, either SCSI id or device path.") 582 driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.") 583 media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.") 584 deviceType = property(_getDeviceType, None, None, doc="Type of the device, as returned from C{cdrecord -prcap}.") 585 deviceVendor = property(_getDeviceVendor, None, None, doc="Vendor of the device, as returned from C{cdrecord -prcap}.") 586 deviceId = property(_getDeviceId, None, None, doc="Device identification, as returned from C{cdrecord -prcap}.") 587 deviceBufferSize = property(_getDeviceBufferSize, None, None, doc="Size of the device's write buffer, in bytes.") 588 deviceSupportsMulti = property(_getDeviceSupportsMulti, None, None, doc="Indicates whether device supports multisession discs.") 589 deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.") 590 deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.") 591 refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.") 592 ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.") 593 594 595 ################################################# 596 # Methods related to device and media attributes 597 ################################################# 598
    599 - def isRewritable(self):
    600 """Indicates whether the media is rewritable per configuration.""" 601 return self._media.rewritable
    602
    603 - def _retrieveProperties(self):
    604 """ 605 Retrieves properties for a device from C{cdrecord}. 606 607 The results are returned as a tuple of the object device attributes as 608 returned from L{_parsePropertiesOutput}: C{(deviceType, deviceVendor, 609 deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, 610 deviceCanEject)}. 611 612 @return: Results tuple as described above. 613 @raise IOError: If there is a problem talking to the device. 614 """ 615 args = CdWriter._buildPropertiesArgs(self.hardwareId) 616 command = resolveCommand(CDRECORD_COMMAND) 617 (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) 618 if result != 0: 619 raise IOError("Error (%d) executing cdrecord command to get properties." % result) 620 return CdWriter._parsePropertiesOutput(output)
    621
    622 - def retrieveCapacity(self, entireDisc=False, useMulti=True):
    623 """ 624 Retrieves capacity for the current media in terms of a C{MediaCapacity} 625 object. 626 627 If C{entireDisc} is passed in as C{True} the capacity will be for the 628 entire disc, as if it were to be rewritten from scratch. If the drive 629 does not support writing multisession discs or if C{useMulti} is passed 630 in as C{False}, the capacity will also be as if the disc were to be 631 rewritten from scratch, but the indicated boundaries value will be 632 C{None}. The same will happen if the disc cannot be read for some 633 reason. Otherwise, the capacity (including the boundaries) will 634 represent whatever space remains on the disc to be filled by future 635 sessions. 636 637 @param entireDisc: Indicates whether to return capacity for entire disc. 638 @type entireDisc: Boolean true/false 639 640 @param useMulti: Indicates whether a multisession disc should be assumed, if possible. 641 @type useMulti: Boolean true/false 642 643 @return: C{MediaCapacity} object describing the capacity of the media. 644 @raise IOError: If the media could not be read for some reason. 645 """ 646 boundaries = self._getBoundaries(entireDisc, useMulti) 647 return CdWriter._calculateCapacity(self._media, boundaries)
    648
    649 - def _getBoundaries(self, entireDisc=False, useMulti=True):
650 """ 651 Gets the ISO boundaries for the media. 652 653 If C{entireDisc} is passed in as C{True} the boundaries will be C{None}, 654 as if the disc were to be rewritten from scratch. If the drive does not 655 support writing multisession discs, the returned value will be C{None}. 656 The same will happen if the disc can't be read for some reason. 657 Otherwise, the returned value will represent the boundaries of the 658 disc's current contents. 659 660 The results are returned as a tuple of (lower, upper) as needed by the 661 C{IsoImage} class. Note that these values are in terms of ISO sectors, 662 not bytes. Clients should generally consider the boundaries value 663 opaque, however. 664 665 @param entireDisc: Indicates whether to return capacity for entire disc. 666 @type entireDisc: Boolean true/false 667 668 @param useMulti: Indicates whether a multisession disc should be assumed, if possible. 669 @type useMulti: Boolean true/false 670 671 @return: Boundaries tuple or C{None}, as described above. 672 @raise IOError: If the media could not be read for some reason.
673 """ 674 if not self._deviceSupportsMulti: 675 logger.debug("Device does not support multisession discs; returning boundaries None.") 676 return None 677 elif not useMulti: 678 logger.debug("Use multisession flag is False; returning boundaries None.") 679 return None 680 elif entireDisc: 681 logger.debug("Entire disc flag is True; returning boundaries None.") 682 return None 683 else: 684 args = CdWriter._buildBoundariesArgs(self.hardwareId) 685 command = resolveCommand(CDRECORD_COMMAND) 686 (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True) 687 if result != 0: 688 logger.debug("Error (%d) executing cdrecord command to get capacity.", result) 689 logger.warn("Unable to read disc (might not be initialized); returning boundaries of None.") 690 return None 691 boundaries = CdWriter._parseBoundariesOutput(output) 692 if boundaries is None: 693 logger.debug("Returning disc boundaries: None") 694 else: 695 logger.debug("Returning disc boundaries: (%d, %d)", boundaries[0], boundaries[1]) 696 return boundaries
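The three early-return checks in `_getBoundaries` reduce to a small pure predicate. A minimal sketch (a standalone function with a hypothetical name, not part of `CdWriter`):

```python
def should_query_boundaries(supports_multi, use_multi, entire_disc):
    """Mirror _getBoundaries' early returns: True means 'run cdrecord -msinfo',
    False means 'treat the disc as a full rewrite' (boundaries of None)."""
    if not supports_multi:
        return False  # drive cannot write multisession discs
    if not use_multi:
        return False  # caller explicitly asked for single-session behavior
    if entire_disc:
        return False  # rewriting from scratch, so existing sessions are irrelevant
    return True
```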
    697 698 @staticmethod
    699 - def _calculateCapacity(media, boundaries):
700 """ 701 Calculates capacity for the media in terms of boundaries. 702 703 If C{boundaries} is C{None} or the upper bound is 0 (zero), then the 704 capacity will be for the entire disc minus the initial lead in. 705 Otherwise, capacity will be as if the caller wanted to add an additional 706 session to the end of the existing data on the disc. 707 708 @param media: MediaDefinition object describing the media capacity. 709 @param boundaries: Session boundaries as returned from L{_getBoundaries}. 710 711 @return: C{MediaCapacity} object describing the capacity of the media. 712 """ 713 if boundaries is None or boundaries[1] == 0: 714 logger.debug("Capacity calculations are based on a complete disc rewrite.") 715 sectorsAvailable = media.capacity - media.initialLeadIn 716 if sectorsAvailable < 0: sectorsAvailable = 0.0 717 bytesUsed = 0.0 718 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) 719 else: 720 logger.debug("Capacity calculations are based on a new ISO session.") 721 sectorsAvailable = media.capacity - boundaries[1] - media.leadIn 722 if sectorsAvailable < 0: sectorsAvailable = 0.0 723 bytesUsed = convertSize(boundaries[1], UNIT_SECTORS, UNIT_BYTES) 724 bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES) 725 logger.debug("Used [%s], available [%s].", displayBytes(bytesUsed), displayBytes(bytesAvailable)) 726 return MediaCapacity(bytesUsed, bytesAvailable, boundaries)
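The capacity arithmetic above can be illustrated in isolation. This sketch assumes 2048 data bytes per ISO-9660 sector (which is what converting from sectors to bytes typically implies); the function and constant names are hypothetical, and all inputs are in sectors:

```python
BYTES_PER_SECTOR = 2048  # data bytes per ISO-9660 sector (assumption for this sketch)

def calculate_capacity(media_sectors, initial_lead_in, lead_in, boundaries=None):
    """Sketch of the _calculateCapacity arithmetic.
    Returns (bytes_used, bytes_available)."""
    if boundaries is None or boundaries[1] == 0:
        # Full rewrite: everything except the initial lead-in is available.
        sectors_available = max(media_sectors - initial_lead_in, 0)
        bytes_used = 0
    else:
        # New session: subtract what is already written plus a per-session lead-in.
        sectors_available = max(media_sectors - boundaries[1] - lead_in, 0)
        bytes_used = boundaries[1] * BYTES_PER_SECTOR
    return (bytes_used, sectors_available * BYTES_PER_SECTOR)
```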
    727 728 729 ####################################################### 730 # Methods used for working with the internal ISO image 731 ####################################################### 732
    733 - def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
    734 """ 735 Initializes the writer's associated ISO image. 736 737 This method initializes the C{image} instance variable so that the caller 738 can use the C{addImageEntry} method. Once entries have been added, the 739 C{writeImage} method can be called with no arguments. 740 741 @param newDisc: Indicates whether the disc should be re-initialized 742 @type newDisc: Boolean true/false. 743 744 @param tmpdir: Temporary directory to use if needed 745 @type tmpdir: String representing a directory path on disk 746 747 @param mediaLabel: Media label to be applied to the image, if any 748 @type mediaLabel: String, no more than 25 characters long 749 """ 750 self._image = _ImageProperties() 751 self._image.newDisc = newDisc 752 self._image.tmpdir = encodePath(tmpdir) 753 self._image.mediaLabel = mediaLabel 754 self._image.entries = {} # mapping from path to graft point (if any)
    755
    756 - def addImageEntry(self, path, graftPoint):
    757 """ 758 Adds a filepath entry to the writer's associated ISO image. 759 760 The contents of the filepath -- but not the path itself -- will be added 761 to the image at the indicated graft point. If you don't want to use a 762 graft point, just pass C{None}. 763 764 @note: Before calling this method, you must call L{initializeImage}. 765 766 @param path: File or directory to be added to the image 767 @type path: String representing a path on disk 768 769 @param graftPoint: Graft point to be used when adding this entry 770 @type graftPoint: String representing a graft point path, as described above 771 772 @raise ValueError: If initializeImage() was not previously called 773 """ 774 if self._image is None: 775 raise ValueError("Must call initializeImage() before using this method.") 776 if not os.path.exists(path): 777 raise ValueError("Path [%s] does not exist." % path) 778 self._image.entries[path] = graftPoint
    779
    780 - def setImageNewDisc(self, newDisc):
    781 """ 782 Resets (overrides) the newDisc flag on the internal image. 783 @param newDisc: New disc flag to set 784 @raise ValueError: If initializeImage() was not previously called 785 """ 786 if self._image is None: 787 raise ValueError("Must call initializeImage() before using this method.") 788 self._image.newDisc = newDisc
    789
    790 - def getEstimatedImageSize(self):
    791 """ 792 Gets the estimated size of the image associated with the writer. 793 @return: Estimated size of the image, in bytes. 794 @raise IOError: If there is a problem calling C{mkisofs}. 795 @raise ValueError: If initializeImage() was not previously called 796 """ 797 if self._image is None: 798 raise ValueError("Must call initializeImage() before using this method.") 799 image = IsoImage() 800 for path in self._image.entries.keys(): 801 image.addEntry(path, self._image.entries[path], override=False, contentsOnly=True) 802 return image.getEstimatedSize()
    803 804 805 ###################################### 806 # Methods which expose device actions 807 ###################################### 808
    809 - def openTray(self):
810 """ 811 Opens the device's tray and leaves it open. 812 813 This only works if the device has a tray and supports ejecting its media. 814 We have no way to know if the tray is currently open or closed, so we 815 just send the appropriate command and hope for the best. If the device 816 does not have a tray or does not support ejecting its media, then we do 817 nothing. 818 819 If the writer was constructed with C{noEject=True}, then this is a no-op. 820 821 Starting with Debian wheezy on my backup hardware, I started seeing 822 consistent problems with the eject command. I couldn't tell whether 823 these problems were due to the device management system or to the new 824 kernel (3.2.0). Initially, I saw simple eject failures, possibly because 825 I was opening and closing the tray too quickly. I worked around that 826 behavior with the new ejectDelay flag. 827 828 Later, I sometimes ran into issues after writing an image to a disc: 829 eject would give errors like "unable to eject, last error: Inappropriate 830 ioctl for device". Various sources online (like Ubuntu bug #875543) 831 suggested that the drive was being locked somehow, and that the 832 workaround was to run 'eject -i off' to unlock it. Sure enough, that 833 fixed the problem for me, so now it's a normal error-handling strategy. 834 835 @raise IOError: If there is an error talking to the device. 836 """ 837 if not self._noEject: 838 if self._deviceHasTray and self._deviceCanEject: 839 args = CdWriter._buildOpenTrayArgs(self._device) 840 result = executeCommand(resolveCommand(EJECT_COMMAND), args)[0] 841 if result != 0: 842 logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.") 843 self.unlockTray() 844 result = executeCommand(resolveCommand(EJECT_COMMAND), args)[0] 845 if result != 0: 846 raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)."
% result) 847 logger.debug("Kludge was apparently successful.") 848 if self.ejectDelay is not None: 849 logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay) 850 time.sleep(self.ejectDelay)
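The unlock-and-retry kludge described above is easiest to see as a small control-flow sketch with the command execution injected. `run_eject` and `unlock` are hypothetical stand-ins for the real `executeCommand` calls:

```python
def open_tray_with_retry(run_eject, unlock):
    """Sketch of openTray's error handling: on a failed eject, unlock the
    tray (the real code runs 'eject -i off') and retry exactly once."""
    if run_eject() == 0:
        return
    unlock()  # workaround for "Inappropriate ioctl for device" lockups
    if run_eject() != 0:
        raise IOError("eject failed even after unlocking tray")
```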
    851
    852 - def unlockTray(self):
    853 """ 854 Unlocks the device's tray. 855 @raise IOError: If there is an error talking to the device. 856 """ 857 args = CdWriter._buildUnlockTrayArgs(self._device) 858 command = resolveCommand(EJECT_COMMAND) 859 result = executeCommand(command, args)[0] 860 if result != 0: 861 raise IOError("Error (%d) executing eject command to unlock tray." % result)
    862
    863 - def closeTray(self):
    864 """ 865 Closes the device's tray. 866 867 This only works if the device has a tray and supports ejecting its media. 868 We have no way to know if the tray is currently open or closed, so we 869 just send the appropriate command and hope for the best. If the device 870 does not have a tray or does not support ejecting its media, then we do 871 nothing. 872 873 If the writer was constructed with C{noEject=True}, then this is a no-op. 874 875 @raise IOError: If there is an error talking to the device. 876 """ 877 if not self._noEject: 878 if self._deviceHasTray and self._deviceCanEject: 879 args = CdWriter._buildCloseTrayArgs(self._device) 880 command = resolveCommand(EJECT_COMMAND) 881 result = executeCommand(command, args)[0] 882 if result != 0: 883 raise IOError("Error (%d) executing eject command to close tray." % result)
    884
    885 - def refreshMedia(self):
    886 """ 887 Opens and then immediately closes the device's tray, to refresh the 888 device's idea of the media. 889 890 Sometimes, a device gets confused about the state of its media. Often, 891 all it takes to solve the problem is to eject the media and then 892 immediately reload it. (There are also configurable eject and refresh 893 media delays which can be applied, for situations where this makes a 894 difference.) 895 896 This only works if the device has a tray and supports ejecting its media. 897 We have no way to know if the tray is currently open or closed, so we 898 just send the appropriate command and hope for the best. If the device 899 does not have a tray or does not support ejecting its media, then we do 900 nothing. The configured delays still apply, though. 901 902 @raise IOError: If there is an error talking to the device. 903 """ 904 self.openTray() 905 self.closeTray() 906 self.unlockTray() # on some systems, writing a disc leaves the tray locked, yikes! 907 if self.refreshMediaDelay is not None: 908 logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay) 909 time.sleep(self.refreshMediaDelay) 910 logger.debug("Media refresh complete; hopefully media state is stable now.")
    911
    912 - def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
913 """ 914 Writes an ISO image to the media in the device. 915 916 If C{newDisc} is passed in as C{True}, we assume that the entire disc 917 will be overwritten, and the media will be blanked before writing it if 918 possible (i.e. if the media is rewritable). 919 920 If C{writeMulti} is passed in as C{True}, then a multisession disc will 921 be written if possible (i.e. if the drive supports writing multisession 922 discs). 923 924 If C{imagePath} is passed in as C{None}, then the existing image 925 configured with C{initializeImage} will be used. Under these 926 circumstances, the passed-in C{newDisc} flag will be ignored. 927 928 By default, we assume that the disc can be written multisession and that 929 we should append to the current contents of the disc. In any case, the 930 ISO image must be generated appropriately (i.e. must take into account 931 any existing session boundaries, etc.) 932 933 @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image 934 @type imagePath: String representing a path on disk 935 936 @param newDisc: Indicates whether the entire disc will be overwritten. 937 @type newDisc: Boolean true/false. 938 939 @param writeMulti: Indicates whether a multisession disc should be written, if possible. 940 @type writeMulti: Boolean true/false 941 942 @raise ValueError: If the image path is not absolute. 943 @raise ValueError: If some path cannot be encoded properly. 944 @raise IOError: If the media could not be written to for some reason.
945 @raise ValueError: If no image is passed in and initializeImage() was not previously called 946 """ 947 if imagePath is None: 948 if self._image is None: 949 raise ValueError("Must call initializeImage() before using this method with no image path.") 950 try: 951 imagePath = self._createImage() 952 self._writeImage(imagePath, writeMulti, self._image.newDisc) 953 finally: 954 if imagePath is not None and os.path.exists(imagePath): 955 try: os.unlink(imagePath) 956 except: pass 957 else: 958 imagePath = encodePath(imagePath) 959 if not os.path.isabs(imagePath): 960 raise ValueError("Image path must be absolute.") 961 self._writeImage(imagePath, writeMulti, newDisc)
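The cleanup discipline in the no-image-path branch (delete the temporary ISO in a C{finally} block, swallowing unlink errors) can be sketched independently. Here `write_func` is a hypothetical stand-in for the image-writing step:

```python
import os
import tempfile

def write_via_temp_image(write_func, tmpdir=None):
    """Create a temp file, hand it to write_func, and always remove it
    afterwards, even when write_func raises (mirrors writeImage's finally)."""
    handle, path = tempfile.mkstemp(dir=tmpdir)
    os.close(handle)
    try:
        write_func(path)
    finally:
        if os.path.exists(path):
            try:
                os.unlink(path)
            except OSError:
                pass
    return path
```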
    962
    963 - def _createImage(self):
    964 """ 965 Creates an ISO image based on configuration in self._image. 966 @return: Path to the newly-created ISO image on disk. 967 @raise IOError: If there is an error writing the image to disk. 968 @raise ValueError: If there are no filesystem entries in the image 969 @raise ValueError: If a path cannot be encoded properly. 970 """ 971 path = None 972 capacity = self.retrieveCapacity(entireDisc=self._image.newDisc) 973 image = IsoImage(self.device, capacity.boundaries) 974 image.volumeId = self._image.mediaLabel # may be None, which is also valid 975 for key in self._image.entries.keys(): 976 image.addEntry(key, self._image.entries[key], override=False, contentsOnly=True) 977 size = image.getEstimatedSize() 978 logger.info("Image size will be %s.", displayBytes(size)) 979 available = capacity.bytesAvailable 980 logger.debug("Media capacity: %s", displayBytes(available)) 981 if size > available: 982 logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available)) 983 raise IOError("Media does not contain enough capacity to store image.") 984 try: 985 (handle, path) = tempfile.mkstemp(dir=self._image.tmpdir) 986 try: os.close(handle) 987 except: pass 988 image.writeImage(path) 989 logger.debug("Completed creating image [%s].", path) 990 return path 991 except Exception, e: 992 if path is not None and os.path.exists(path): 993 try: os.unlink(path) 994 except: pass 995 raise e
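The pre-flight size check in `_createImage` is worth isolating: nothing is written unless the estimated image fits the remaining capacity. A sketch with hypothetical names:

```python
def check_capacity(image_bytes, available_bytes):
    """Raise IOError when the estimated image exceeds remaining media
    capacity (as _createImage does); otherwise return the slack in bytes."""
    if image_bytes > available_bytes:
        raise IOError("Media does not contain enough capacity to store image.")
    return available_bytes - image_bytes
```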
    996
    997 - def _writeImage(self, imagePath, writeMulti, newDisc):
998 """ 999 Writes an ISO image to disc using cdrecord. 1000 The disc is blanked first if C{newDisc} is C{True}. 1001 @param imagePath: Path to an ISO image on disk 1002 @param writeMulti: Indicates whether a multisession disc should be written, if possible. 1003 @param newDisc: Indicates whether the entire disc will be overwritten. 1004 """ 1005 if newDisc: 1006 self._blankMedia() 1007 args = CdWriter._buildWriteArgs(self.hardwareId, imagePath, self._driveSpeed, writeMulti and self._deviceSupportsMulti) 1008 command = resolveCommand(CDRECORD_COMMAND) 1009 result = executeCommand(command, args)[0] 1010 if result != 0: 1011 raise IOError("Error (%d) executing command to write disc." % result) 1012 self.refreshMedia()
    1013
    1014 - def _blankMedia(self):
    1015 """ 1016 Blanks the media in the device, if the media is rewritable. 1017 @raise IOError: If the media could not be written to for some reason. 1018 """ 1019 if self.isRewritable(): 1020 args = CdWriter._buildBlankArgs(self.hardwareId) 1021 command = resolveCommand(CDRECORD_COMMAND) 1022 result = executeCommand(command, args)[0] 1023 if result != 0: 1024 raise IOError("Error (%d) executing command to blank disc." % result) 1025 self.refreshMedia()
    1026 1027 1028 ####################################### 1029 # Methods used to parse command output 1030 ####################################### 1031 1032 @staticmethod
    1033 - def _parsePropertiesOutput(output):
1034 """ 1035 Parses the output from a C{cdrecord} properties command. 1036 1037 The C{output} parameter should be a list of strings as returned from 1038 C{executeCommand} for a C{cdrecord} command with arguments as from 1039 C{_buildPropertiesArgs}. The list of strings will be parsed to yield 1040 information about the properties of the device. 1041 1042 The output is expected to be a long list of strings. Unfortunately, 1043 the strings aren't in a completely regular format. However, the format 1044 of individual lines seems to be regular enough that we can look for 1045 specific values. Two kinds of parsing take place: one kind of parsing 1046 picks out specific values like the device id, device vendor, etc. 1047 The other kind of parsing just sets a boolean flag C{True} if a matching 1048 line is found. All of the parsing is done with regular expressions. 1049 1050 Right now, pretty much nothing in the output is required and we should 1051 parse an empty document successfully (albeit resulting in a device that 1052 can't eject, doesn't have a tray and doesn't support multisession 1053 discs). I had briefly considered erroring out if certain lines weren't 1054 found or couldn't be parsed, but that seems like a bad idea given that 1055 most of the information is just for reference. 1056 1057 The results are returned as a tuple of the object device attributes: 1058 C{(deviceType, deviceVendor, deviceId, deviceBufferSize, 1059 deviceSupportsMulti, deviceHasTray, deviceCanEject)}. 1060 1061 @param output: Output from a C{cdrecord -prcap} command. 1062 1063 @return: Results tuple as described above. 1064 @raise IOError: If there is a problem parsing the output.
1065 """ 1066 deviceType = None 1067 deviceVendor = None 1068 deviceId = None 1069 deviceBufferSize = None 1070 deviceSupportsMulti = False 1071 deviceHasTray = False 1072 deviceCanEject = False 1073 typePattern = re.compile(r"(^Device type\s*:\s*)(.*)(\s*)(.*$)") 1074 vendorPattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)") 1075 idPattern = re.compile(r"(^Identifikation\s*:\s*'\s*)(.*?)(\s*')(.*$)") 1076 bufferPattern = re.compile(r"(^\s*Buffer size in KB:\s*)(.*?)(\s*$)") 1077 multiPattern = re.compile(r"^\s*Does read multi-session.*$") 1078 trayPattern = re.compile(r"^\s*Loading mechanism type: tray.*$") 1079 ejectPattern = re.compile(r"^\s*Does support ejection.*$") 1080 for line in output: 1081 if typePattern.search(line): 1082 deviceType = typePattern.search(line).group(2) 1083 logger.info("Device type is [%s].", deviceType) 1084 elif vendorPattern.search(line): 1085 deviceVendor = vendorPattern.search(line).group(2) 1086 logger.info("Device vendor is [%s].", deviceVendor) 1087 elif idPattern.search(line): 1088 deviceId = idPattern.search(line).group(2) 1089 logger.info("Device id is [%s].", deviceId) 1090 elif bufferPattern.search(line): 1091 try: 1092 sectors = int(bufferPattern.search(line).group(2)) 1093 deviceBufferSize = convertSize(sectors, UNIT_KBYTES, UNIT_BYTES) 1094 logger.info("Device buffer size is [%d] bytes.", deviceBufferSize) 1095 except (TypeError, ValueError): pass 1096 elif multiPattern.search(line): 1097 deviceSupportsMulti = True 1098 logger.info("Device does support multisession discs.") 1099 elif trayPattern.search(line): 1100 deviceHasTray = True 1101 logger.info("Device has a tray.") 1102 elif ejectPattern.search(line): 1103 deviceCanEject = True 1104 logger.info("Device can eject its media.") 1105 return (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject)
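The two parsing styles described in the docstring (capture groups for named values, a simple match flipping a boolean flag) can be demonstrated standalone. The sample lines below are illustrative, in the shape of C{cdrecord -prcap} output, not captured from a real drive:

```python
import re

# Illustrative (not captured) sample in the shape of cdrecord -prcap output.
SAMPLE_PRCAP = [
    "Vendor_info    : 'LITE-ON '",
    "  Does read multi-session CDs",
    "  Does support ejection of CD via START/STOP command",
    "  Loading mechanism type: tray",
]

def parse_prcap(lines):
    """Standalone sketch of the two parsing styles: capture groups for
    named values, simple match -> boolean flag for capabilities."""
    vendor = None
    supports_multi = has_tray = can_eject = False
    vendor_pattern = re.compile(r"(^Vendor_info\s*:\s*'\s*)(.*?)(\s*')(.*$)")
    for line in lines:
        match = vendor_pattern.search(line)
        if match:
            vendor = match.group(2)  # group 2 is the value, whitespace trimmed
        elif re.search(r"^\s*Does read multi-session", line):
            supports_multi = True
        elif re.search(r"^\s*Loading mechanism type: tray", line):
            has_tray = True
        elif re.search(r"^\s*Does support ejection", line):
            can_eject = True
    return (vendor, supports_multi, has_tray, can_eject)
```

As the docstring notes, empty input parses successfully and simply yields a device with no detected capabilities.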
    1106 1107 @staticmethod
    1108 - def _parseBoundariesOutput(output):
1109 """ 1110 Parses the output from a C{cdrecord} capacity command. 1111 1112 The C{output} parameter should be a list of strings as returned from 1113 C{executeCommand} for a C{cdrecord} command with arguments as from 1114 C{_buildBoundariesArgs}. The list of strings will be parsed to yield 1115 information about the capacity of the media in the device. 1116 1117 Basically, we expect the list of strings to include just one line, a pair 1118 of values. There isn't supposed to be whitespace, but we allow it anyway 1119 in the regular expression. Any lines below the one line we parse are 1120 completely ignored. It would be a good idea to ignore C{stderr} when 1121 executing the C{cdrecord} command that generates output for this method, 1122 because sometimes C{cdrecord} spits out kernel warnings about the actual 1123 output. 1124 1125 The results are returned as a tuple of (lower, upper) as needed by the 1126 C{IsoImage} class. Note that these values are in terms of ISO sectors, 1127 not bytes. Clients should generally consider the boundaries value 1128 opaque, however. 1129 1130 @note: If the boundaries output is empty, we return C{None}; if it 1131 cannot be parsed, we raise C{IOError}. 1132 @param output: Output from a C{cdrecord -msinfo} command. 1133 1134 @return: Boundaries tuple as described above. 1135 @raise IOError: If there is a problem parsing the output. 1136 """ 1137 if len(output) < 1: 1138 logger.warn("Unable to read disc (might not be initialized); returning full capacity.") 1139 return None 1140 boundaryPattern = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)") 1141 parsed = boundaryPattern.search(output[0]) 1142 if not parsed: 1143 raise IOError("Unable to parse output of boundaries command.") 1144 try: 1145 boundaries = ( int(parsed.group(2)), int(parsed.group(4)) ) 1146 except (TypeError, ValueError): 1147 raise IOError("Unable to parse output of boundaries command.") 1148 return boundaries
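The msinfo parsing contract (one whitespace-tolerant "lower,upper" line of sector numbers; C{None} for empty output; an error for garbage) can be exercised in a standalone sketch. Note that `int('')` raises `ValueError`, so the sketch catches it alongside `TypeError`:

```python
import re

_BOUNDARIES = re.compile(r"(^\s*)([0-9]*)(\s*,\s*)([0-9]*)(\s*$)")

def parse_msinfo(output):
    """Standalone sketch of _parseBoundariesOutput: one 'lower,upper' line of
    sector numbers; None for empty output, IOError for anything unparseable."""
    if len(output) < 1:
        return None
    parsed = _BOUNDARIES.search(output[0])
    if not parsed:
        raise IOError("Unable to parse output of boundaries command.")
    try:
        # int('') raises ValueError, so catch it along with TypeError.
        return (int(parsed.group(2)), int(parsed.group(4)))
    except (TypeError, ValueError):
        raise IOError("Unable to parse output of boundaries command.")
```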
    1149 1150 1151 ################################# 1152 # Methods used to build commands 1153 ################################# 1154 1155 @staticmethod
    1156 - def _buildOpenTrayArgs(device):
1157 """ 1158 Builds a list of arguments to be passed to an C{eject} command. 1159 1160 The arguments will cause the C{eject} command to open the tray and 1161 eject the media. No validation is done by this method as to whether 1162 this action actually makes sense. 1163 1164 @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. 1165 1166 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1167 """ 1168 args = [] 1169 args.append(device) 1170 return args
    1171 1172 @staticmethod
    1173 - def _buildUnlockTrayArgs(device):
1174 """ 1175 Builds a list of arguments to be passed to an C{eject} command. 1176 1177 The arguments will cause the C{eject} command to unlock the tray. 1178 1179 @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. 1180 1181 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1182 """ 1183 args = [] 1184 args.append("-i") 1185 args.append("off") 1186 args.append(device) 1187 return args
    1188 1189 @staticmethod
    1190 - def _buildCloseTrayArgs(device):
1191 """ 1192 Builds a list of arguments to be passed to an C{eject} command. 1193 1194 The arguments will cause the C{eject} command to close the tray and reload 1195 the media. No validation is done by this method as to whether this 1196 action actually makes sense. 1197 1198 @param device: Filesystem device name for this writer, i.e. C{/dev/cdrw}. 1199 1200 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1201 """ 1202 args = [] 1203 args.append("-t") 1204 args.append(device) 1205 return args
    1206 1207 @staticmethod
    1208 - def _buildPropertiesArgs(hardwareId):
    1209 """ 1210 Builds a list of arguments to be passed to a C{cdrecord} command. 1211 1212 The arguments will cause the C{cdrecord} command to ask the device 1213 for a list of its capacities via the C{-prcap} switch. 1214 1215 @param hardwareId: Hardware id for the device (either SCSI id or device path) 1216 1217 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1218 """ 1219 args = [] 1220 args.append("-prcap") 1221 args.append("dev=%s" % hardwareId) 1222 return args
    1223 1224 @staticmethod
    1225 - def _buildBoundariesArgs(hardwareId):
    1226 """ 1227 Builds a list of arguments to be passed to a C{cdrecord} command. 1228 1229 The arguments will cause the C{cdrecord} command to ask the device for 1230 the current multisession boundaries of the media using the C{-msinfo} 1231 switch. 1232 1233 @param hardwareId: Hardware id for the device (either SCSI id or device path) 1234 1235 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1236 """ 1237 args = [] 1238 args.append("-msinfo") 1239 args.append("dev=%s" % hardwareId) 1240 return args
    1241 1242 @staticmethod
    1243 - def _buildBlankArgs(hardwareId, driveSpeed=None):
    1244 """ 1245 Builds a list of arguments to be passed to a C{cdrecord} command. 1246 1247 The arguments will cause the C{cdrecord} command to blank the media in 1248 the device identified by C{hardwareId}. No validation is done by this method 1249 as to whether the action makes sense (i.e. to whether the media even can 1250 be blanked). 1251 1252 @param hardwareId: Hardware id for the device (either SCSI id or device path) 1253 @param driveSpeed: Speed at which the drive writes. 1254 1255 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1256 """ 1257 args = [] 1258 args.append("-v") 1259 args.append("blank=fast") 1260 if driveSpeed is not None: 1261 args.append("speed=%d" % driveSpeed) 1262 args.append("dev=%s" % hardwareId) 1263 return args
    1264 1265 @staticmethod
    1266 - def _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
    1267 """ 1268 Builds a list of arguments to be passed to a C{cdrecord} command. 1269 1270 The arguments will cause the C{cdrecord} command to write the indicated 1271 ISO image (C{imagePath}) to the media in the device identified by 1272 C{hardwareId}. The C{writeMulti} argument controls whether to write a 1273 multisession disc. No validation is done by this method as to whether 1274 the action makes sense (i.e. to whether the device even can write 1275 multisession discs, for instance). 1276 1277 @param hardwareId: Hardware id for the device (either SCSI id or device path) 1278 @param imagePath: Path to an ISO image on disk. 1279 @param driveSpeed: Speed at which the drive writes. 1280 @param writeMulti: Indicates whether to write a multisession disc. 1281 1282 @return: List suitable for passing to L{util.executeCommand} as C{args}. 1283 """ 1284 args = [] 1285 args.append("-v") 1286 if driveSpeed is not None: 1287 args.append("speed=%d" % driveSpeed) 1288 args.append("dev=%s" % hardwareId) 1289 if writeMulti: 1290 args.append("-multi") 1291 args.append("-data") 1292 args.append(imagePath) 1293 return args
    1294
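As a quick end-to-end illustration of the cdrecord builders above, this standalone sketch mirrors _buildWriteArgs and prepends the command name. The hardware id, image path, and speed are illustrative values, not defaults from the code:

```python
def buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True):
    # Mirrors _buildWriteArgs above: -v, optional speed, device, -multi, -data
    args = ["-v"]
    if driveSpeed is not None:
        args.append("speed=%d" % driveSpeed)
    args.append("dev=%s" % hardwareId)
    if writeMulti:
        args.append("-multi")
    args.append("-data")
    args.append(imagePath)
    return args

command = ["cdrecord"] + buildWriteArgs("ATA:1,0,0", "/tmp/backup.iso", driveSpeed=4)
# command == ['cdrecord', '-v', 'speed=4', 'dev=ATA:1,0,0', '-multi', '-data', '/tmp/backup.iso']
```

Dropping C{writeMulti} simply removes the C{-multi} switch, yielding a single-session write.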

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.AbsolutePathList-class.html0000664000175000017500000003640112642035644030716 0ustar pronovicpronovic00000000000000 CedarBackup2.util.AbsolutePathList
    Package CedarBackup2 :: Module util :: Class AbsolutePathList

    Class AbsolutePathList


    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    AbsolutePathList
    

    Class representing a list of absolute paths.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is an absolute path.

    Each item added to the list is encoded using encodePath. If we don't do this, we have problems trying certain operations between strings and unicode objects, particularly for "odd" filenames that can't be encoded in standard ASCII.
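The behavior described above can be sketched in a few lines. This is a simplified stand-in for the real class, checking only C{os.path.isabs} and omitting the C{encodePath} encoding step:

```python
import os

class AbsolutePathList(list):
    """Simplified sketch of CedarBackup2.util.AbsolutePathList: a list
    that rejects relative paths (encodePath handling omitted here)."""
    def append(self, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: %s" % item)
        super(AbsolutePathList, self).append(item)
    def insert(self, index, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: %s" % item)
        super(AbsolutePathList, self).insert(index, item)
    def extend(self, seq):
        for item in seq:        # reuse append so every member is checked
            self.append(item)

paths = AbsolutePathList()
paths.append("/etc/cback.conf")
paths.extend(["/var/log", "/home"])
# paths == ['/etc/cback.conf', '/var/log', '/home']
```

Appending a relative path such as "relative/path" raises C{ValueError}, matching the method details below.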

    Instance Methods
     
    append(self, item)
    Overrides the standard append method.
     
    insert(self, index, item)
    Overrides the standard insert method.
     
    extend(self, seq)
    Overrides the standard extend method.

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    append(self, item)


    Overrides the standard append method.

    Raises:
    • ValueError - If item is not an absolute path.
    Overrides: list.append

    insert(self, index, item)


    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not an absolute path.
    Overrides: list.insert

    extend(self, seq)


    Overrides the standard extend method.

    Raises:
    • ValueError - If any item is not an absolute path.
    Overrides: list.extend

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.rebuild-module.html0000664000175000017500000003060012642035643027752 0ustar pronovicpronovic00000000000000 CedarBackup2.actions.rebuild
    Package CedarBackup2 :: Package actions :: Module rebuild

    Module rebuild


    Implements the standard 'rebuild' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Functions
     
    executeRebuild(configPath, options, config)
    Executes the rebuild backup action.
     
    _findRebuildDirs(config)
    Finds the set of directories to be included in a disc rebuild.
    Variables
      logger = logging.getLogger("CedarBackup2.log.actions.rebuild")
      __package__ = 'CedarBackup2.actions'
    Function Details

    executeRebuild(configPath, options, config)


    Executes the rebuild backup action.

    This function exists mainly to recreate a disc that has been "trashed" due to media or hardware problems. Note that the "stage complete" indicator isn't checked for this action.

    Note that the rebuild action and the store action are very similar. The main difference is that while store only stores a single day's staging directory, the rebuild action operates on multiple staging directories.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are problems reading or writing files.

    _findRebuildDirs(config)


    Finds the set of directories to be included in a disc rebuild.

    The rebuild action is supposed to recreate "last week's" disc. This won't always be possible if some of the staging directories are missing. However, the general procedure is to look back into the past no further than the previous "starting day of week", and then work forward from there, trying to find all of the staging directories between then and now that still exist and have a stage indicator.

    Parameters:
    • config - Config object.
    Returns:
    Correct staging directories, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If we do not find at least one staging directory.

    CedarBackup2-2.26.5/doc/interface/toc-everything.html0000664000175000017500000022042212642035643024127 0ustar pronovicpronovic00000000000000 Everything

    Everything


    All Classes

    CedarBackup2.cli.Options
    CedarBackup2.config.ActionDependencies
    CedarBackup2.config.ActionHook
    CedarBackup2.config.BlankBehavior
    CedarBackup2.config.ByteQuantity
    CedarBackup2.config.CollectConfig
    CedarBackup2.config.CollectDir
    CedarBackup2.config.CollectFile
    CedarBackup2.config.CommandOverride
    CedarBackup2.config.Config
    CedarBackup2.config.ExtendedAction
    CedarBackup2.config.ExtensionsConfig
    CedarBackup2.config.LocalPeer
    CedarBackup2.config.OptionsConfig
    CedarBackup2.config.PeersConfig
    CedarBackup2.config.PostActionHook
    CedarBackup2.config.PreActionHook
    CedarBackup2.config.PurgeConfig
    CedarBackup2.config.PurgeDir
    CedarBackup2.config.ReferenceConfig
    CedarBackup2.config.RemotePeer
    CedarBackup2.config.StageConfig
    CedarBackup2.config.StoreConfig
    CedarBackup2.extend.amazons3.AmazonS3Config
    CedarBackup2.extend.amazons3.LocalConfig
    CedarBackup2.extend.capacity.CapacityConfig
    CedarBackup2.extend.capacity.LocalConfig
    CedarBackup2.extend.capacity.PercentageQuantity
    CedarBackup2.extend.encrypt.EncryptConfig
    CedarBackup2.extend.encrypt.LocalConfig
    CedarBackup2.extend.mbox.LocalConfig
    CedarBackup2.extend.mbox.MboxConfig
    CedarBackup2.extend.mbox.MboxDir
    CedarBackup2.extend.mbox.MboxFile
    CedarBackup2.extend.mysql.LocalConfig
    CedarBackup2.extend.mysql.MysqlConfig
    CedarBackup2.extend.postgresql.LocalConfig
    CedarBackup2.extend.postgresql.PostgresqlConfig
    CedarBackup2.extend.split.LocalConfig
    CedarBackup2.extend.split.SplitConfig
    CedarBackup2.extend.subversion.BDBRepository
    CedarBackup2.extend.subversion.FSFSRepository
    CedarBackup2.extend.subversion.LocalConfig
    CedarBackup2.extend.subversion.Repository
    CedarBackup2.extend.subversion.RepositoryDir
    CedarBackup2.extend.subversion.SubversionConfig
    CedarBackup2.filesystem.BackupFileList
    CedarBackup2.filesystem.FilesystemList
    CedarBackup2.filesystem.PurgeItemList
    CedarBackup2.filesystem.SpanItem
    CedarBackup2.peer.LocalPeer
    CedarBackup2.peer.RemotePeer
    CedarBackup2.tools.amazons3.Options
    CedarBackup2.tools.span.SpanOptions
    CedarBackup2.util.AbsolutePathList
    CedarBackup2.util.Diagnostics
    CedarBackup2.util.DirectedGraph
    CedarBackup2.util.ObjectTypeList
    CedarBackup2.util.PathResolverSingleton
    CedarBackup2.util.Pipe
    CedarBackup2.util.RegexList
    CedarBackup2.util.RegexMatchList
    CedarBackup2.util.RestrictedContentList
    CedarBackup2.util.UnorderedList
    CedarBackup2.writers.cdwriter.CdWriter
    CedarBackup2.writers.cdwriter.MediaCapacity
    CedarBackup2.writers.cdwriter.MediaDefinition
    CedarBackup2.writers.dvdwriter.DvdWriter
    CedarBackup2.writers.dvdwriter.MediaCapacity
    CedarBackup2.writers.dvdwriter.MediaDefinition
    CedarBackup2.writers.util.IsoImage
    CedarBackup2.xmlutil.Serializer

    All Functions

    CedarBackup2.actions.collect.executeCollect
    CedarBackup2.actions.initialize.executeInitialize
    CedarBackup2.actions.purge.executePurge
    CedarBackup2.actions.rebuild.executeRebuild
    CedarBackup2.actions.stage.executeStage
    CedarBackup2.actions.store.consistencyCheck
    CedarBackup2.actions.store.executeStore
    CedarBackup2.actions.store.writeImage
    CedarBackup2.actions.store.writeImageBlankSafe
    CedarBackup2.actions.store.writeStoreIndicator
    CedarBackup2.actions.util.buildMediaLabel
    CedarBackup2.actions.util.checkMediaState
    CedarBackup2.actions.util.createWriter
    CedarBackup2.actions.util.findDailyDirs
    CedarBackup2.actions.util.getBackupFiles
    CedarBackup2.actions.util.initializeMediaState
    CedarBackup2.actions.util.writeIndicatorFile
    CedarBackup2.actions.validate.executeValidate
    CedarBackup2.cli.cli
    CedarBackup2.cli.setupLogging
    CedarBackup2.cli.setupPathResolver
    CedarBackup2.config.addByteQuantityNode
    CedarBackup2.config.readByteQuantity
    CedarBackup2.customize.customizeOverrides
    CedarBackup2.extend.amazons3.executeAction
    CedarBackup2.extend.capacity.executeAction
    CedarBackup2.extend.encrypt.executeAction
    CedarBackup2.extend.mbox.executeAction
    CedarBackup2.extend.mysql.backupDatabase
    CedarBackup2.extend.mysql.executeAction
    CedarBackup2.extend.postgresql.backupDatabase
    CedarBackup2.extend.postgresql.executeAction
    CedarBackup2.extend.split.executeAction
    CedarBackup2.extend.subversion.backupBDBRepository
    CedarBackup2.extend.subversion.backupFSFSRepository
    CedarBackup2.extend.subversion.backupRepository
    CedarBackup2.extend.subversion.executeAction
    CedarBackup2.extend.subversion.getYoungestRevision
    CedarBackup2.extend.sysinfo.executeAction
    CedarBackup2.filesystem.compareContents
    CedarBackup2.filesystem.compareDigestMaps
    CedarBackup2.filesystem.normalizeDir
    CedarBackup2.knapsack.alternateFit
    CedarBackup2.knapsack.bestFit
    CedarBackup2.knapsack.firstFit
    CedarBackup2.knapsack.worstFit
    CedarBackup2.testutil.availableLocales
    CedarBackup2.testutil.buildPath
    CedarBackup2.testutil.captureOutput
    CedarBackup2.testutil.changeFileAge
    CedarBackup2.testutil.commandAvailable
    CedarBackup2.testutil.extractTar
    CedarBackup2.testutil.failUnlessAssignRaises
    CedarBackup2.testutil.findResources
    CedarBackup2.testutil.getLogin
    CedarBackup2.testutil.getMaskAsMode
    CedarBackup2.testutil.hexFloatLiteralAllowed
    CedarBackup2.testutil.platformCygwin
    CedarBackup2.testutil.platformDebian
    CedarBackup2.testutil.platformHasEcho
    CedarBackup2.testutil.platformMacOsX
    CedarBackup2.testutil.platformRequiresBinaryRead
    CedarBackup2.testutil.platformSupportsLinks
    CedarBackup2.testutil.platformSupportsPermissions
    CedarBackup2.testutil.platformWindows
    CedarBackup2.testutil.randomFilename
    CedarBackup2.testutil.removedir
    CedarBackup2.testutil.runningAsRoot
    CedarBackup2.testutil.setupDebugLogger
    CedarBackup2.testutil.setupOverrides
    CedarBackup2.tools.amazons3.cli
    CedarBackup2.tools.span.cli
    CedarBackup2.util.buildNormalizedPath
    CedarBackup2.util.calculateFileAge
    CedarBackup2.util.changeOwnership
    CedarBackup2.util.checkUnique
    CedarBackup2.util.convertSize
    CedarBackup2.util.dereferenceLink
    CedarBackup2.util.deriveDayOfWeek
    CedarBackup2.util.deviceMounted
    CedarBackup2.util.displayBytes
    CedarBackup2.util.encodePath
    CedarBackup2.util.executeCommand
    CedarBackup2.util.getFunctionReference
    CedarBackup2.util.getUidGid
    CedarBackup2.util.isRunningAsRoot
    CedarBackup2.util.isStartOfWeek
    CedarBackup2.util.mount
    CedarBackup2.util.nullDevice
    CedarBackup2.util.parseCommaSeparatedString
    CedarBackup2.util.removeKeys
    CedarBackup2.util.resolveCommand
    CedarBackup2.util.sanitizeEnvironment
    CedarBackup2.util.sortDict
    CedarBackup2.util.splitCommandLine
    CedarBackup2.util.unmount
    CedarBackup2.writers.util.readMediaLabel
    CedarBackup2.writers.util.validateDevice
    CedarBackup2.writers.util.validateDriveSpeed
    CedarBackup2.writers.util.validateScsiId
    CedarBackup2.xmlutil.addBooleanNode
    CedarBackup2.xmlutil.addContainerNode
    CedarBackup2.xmlutil.addIntegerNode
    CedarBackup2.xmlutil.addLongNode
    CedarBackup2.xmlutil.addStringNode
    CedarBackup2.xmlutil.createInputDom
    CedarBackup2.xmlutil.createOutputDom
    CedarBackup2.xmlutil.isElement
    CedarBackup2.xmlutil.readBoolean
    CedarBackup2.xmlutil.readChildren
    CedarBackup2.xmlutil.readFirstChild
    CedarBackup2.xmlutil.readFloat
    CedarBackup2.xmlutil.readInteger
    CedarBackup2.xmlutil.readLong
    CedarBackup2.xmlutil.readString
    CedarBackup2.xmlutil.readStringList
    CedarBackup2.xmlutil.serializeDom

    All Variables

    CedarBackup2.action.__package__
    CedarBackup2.actions.collect.__package__
    CedarBackup2.actions.collect.logger
    CedarBackup2.actions.constants.COLLECT_INDICATOR
    CedarBackup2.actions.constants.DIGEST_EXTENSION
    CedarBackup2.actions.constants.DIR_TIME_FORMAT
    CedarBackup2.actions.constants.INDICATOR_PATTERN
    CedarBackup2.actions.constants.STAGE_INDICATOR
    CedarBackup2.actions.constants.STORE_INDICATOR
    CedarBackup2.actions.constants.__package__
    CedarBackup2.actions.initialize.__package__
    CedarBackup2.actions.initialize.logger
    CedarBackup2.actions.purge.__package__
    CedarBackup2.actions.purge.logger
    CedarBackup2.actions.rebuild.__package__
    CedarBackup2.actions.rebuild.logger
    CedarBackup2.actions.stage.__package__
    CedarBackup2.actions.stage.logger
    CedarBackup2.actions.store.__package__
    CedarBackup2.actions.store.logger
    CedarBackup2.actions.util.MEDIA_LABEL_PREFIX
    CedarBackup2.actions.util.__package__
    CedarBackup2.actions.util.logger
    CedarBackup2.actions.validate.__package__
    CedarBackup2.actions.validate.logger
    CedarBackup2.cli.COLLECT_INDEX
    CedarBackup2.cli.COMBINE_ACTIONS
    CedarBackup2.cli.DATE_FORMAT
    CedarBackup2.cli.DEFAULT_CONFIG
    CedarBackup2.cli.DEFAULT_LOGFILE
    CedarBackup2.cli.DEFAULT_MODE
    CedarBackup2.cli.DEFAULT_OWNERSHIP
    CedarBackup2.cli.DISK_LOG_FORMAT
    CedarBackup2.cli.DISK_OUTPUT_FORMAT
    CedarBackup2.cli.INITIALIZE_INDEX
    CedarBackup2.cli.LONG_SWITCHES
    CedarBackup2.cli.NONCOMBINE_ACTIONS
    CedarBackup2.cli.PURGE_INDEX
    CedarBackup2.cli.REBUILD_INDEX
    CedarBackup2.cli.SCREEN_LOG_FORMAT
    CedarBackup2.cli.SCREEN_LOG_STREAM
    CedarBackup2.cli.SHORT_SWITCHES
    CedarBackup2.cli.STAGE_INDEX
    CedarBackup2.cli.STORE_INDEX
    CedarBackup2.cli.VALIDATE_INDEX
    CedarBackup2.cli.VALID_ACTIONS
    CedarBackup2.cli.__package__
    CedarBackup2.cli.logger
    CedarBackup2.config.ACTION_NAME_REGEX
    CedarBackup2.config.DEFAULT_DEVICE_TYPE
    CedarBackup2.config.DEFAULT_MEDIA_TYPE
    CedarBackup2.config.REWRITABLE_MEDIA_TYPES
    CedarBackup2.config.VALID_ARCHIVE_MODES
    CedarBackup2.config.VALID_BLANK_MODES
    CedarBackup2.config.VALID_BYTE_UNITS
    CedarBackup2.config.VALID_CD_MEDIA_TYPES
    CedarBackup2.config.VALID_COLLECT_MODES
    CedarBackup2.config.VALID_COMPRESS_MODES
    CedarBackup2.config.VALID_DEVICE_TYPES
    CedarBackup2.config.VALID_DVD_MEDIA_TYPES
    CedarBackup2.config.VALID_FAILURE_MODES
    CedarBackup2.config.VALID_MEDIA_TYPES
    CedarBackup2.config.VALID_ORDER_MODES
    CedarBackup2.config.__package__
    CedarBackup2.config.logger
    CedarBackup2.customize.DEBIAN_CDRECORD
    CedarBackup2.customize.DEBIAN_MKISOFS
    CedarBackup2.customize.PLATFORM
    CedarBackup2.customize.__package__
    CedarBackup2.customize.logger
    CedarBackup2.extend.amazons3.AWS_COMMAND
    CedarBackup2.extend.amazons3.STORE_INDICATOR
    CedarBackup2.extend.amazons3.SU_COMMAND
    CedarBackup2.extend.amazons3.__package__
    CedarBackup2.extend.amazons3.logger
    CedarBackup2.extend.capacity.__package__
    CedarBackup2.extend.capacity.logger
    CedarBackup2.extend.encrypt.ENCRYPT_INDICATOR
    CedarBackup2.extend.encrypt.GPG_COMMAND
    CedarBackup2.extend.encrypt.VALID_ENCRYPT_MODES
    CedarBackup2.extend.encrypt.__package__
    CedarBackup2.extend.encrypt.logger
    CedarBackup2.extend.mbox.GREPMAIL_COMMAND
    CedarBackup2.extend.mbox.REVISION_PATH_EXTENSION
    CedarBackup2.extend.mbox.__package__
    CedarBackup2.extend.mbox.logger
    CedarBackup2.extend.mysql.MYSQLDUMP_COMMAND
    CedarBackup2.extend.mysql.__package__
    CedarBackup2.extend.mysql.logger
    CedarBackup2.extend.postgresql.POSTGRESQLDUMPALL_COMMAND
    CedarBackup2.extend.postgresql.POSTGRESQLDUMP_COMMAND
    CedarBackup2.extend.postgresql.__package__
    CedarBackup2.extend.postgresql.logger
    CedarBackup2.extend.split.SPLIT_COMMAND
    CedarBackup2.extend.split.SPLIT_INDICATOR
    CedarBackup2.extend.split.__package__
    CedarBackup2.extend.split.logger
    CedarBackup2.extend.subversion.REVISION_PATH_EXTENSION
    CedarBackup2.extend.subversion.SVNADMIN_COMMAND
    CedarBackup2.extend.subversion.SVNLOOK_COMMAND
    CedarBackup2.extend.subversion.__package__
    CedarBackup2.extend.subversion.logger
    CedarBackup2.extend.sysinfo.DPKG_COMMAND
    CedarBackup2.extend.sysinfo.DPKG_PATH
    CedarBackup2.extend.sysinfo.FDISK_COMMAND
    CedarBackup2.extend.sysinfo.FDISK_PATH
    CedarBackup2.extend.sysinfo.LS_COMMAND
    CedarBackup2.extend.sysinfo.__package__
    CedarBackup2.extend.sysinfo.logger
    CedarBackup2.filesystem.__package__
    CedarBackup2.filesystem.logger
    CedarBackup2.image.__package__
    CedarBackup2.knapsack.__package__
    CedarBackup2.peer.DEF_CBACK_COMMAND
    CedarBackup2.peer.DEF_COLLECT_INDICATOR
    CedarBackup2.peer.DEF_RCP_COMMAND
    CedarBackup2.peer.DEF_RSH_COMMAND
    CedarBackup2.peer.DEF_STAGE_INDICATOR
    CedarBackup2.peer.SU_COMMAND
    CedarBackup2.peer.__package__
    CedarBackup2.peer.logger
    CedarBackup2.release.AUTHOR
    CedarBackup2.release.COPYRIGHT
    CedarBackup2.release.DATE
    CedarBackup2.release.EMAIL
    CedarBackup2.release.URL
    CedarBackup2.release.VERSION
    CedarBackup2.release.__package__
    CedarBackup2.testutil.__package__
    CedarBackup2.tools.amazons3.AWS_COMMAND
    CedarBackup2.tools.amazons3.LONG_SWITCHES
    CedarBackup2.tools.amazons3.SHORT_SWITCHES
    CedarBackup2.tools.amazons3.__package__
    CedarBackup2.tools.amazons3.logger
    CedarBackup2.tools.span.__package__
    CedarBackup2.tools.span.logger
    CedarBackup2.util.BYTES_PER_GBYTE
    CedarBackup2.util.BYTES_PER_KBYTE
    CedarBackup2.util.BYTES_PER_MBYTE
    CedarBackup2.util.BYTES_PER_SECTOR
    CedarBackup2.util.DEFAULT_LANGUAGE
    CedarBackup2.util.HOURS_PER_DAY
    CedarBackup2.util.ISO_SECTOR_SIZE
    CedarBackup2.util.KBYTES_PER_MBYTE
    CedarBackup2.util.LANG_VAR
    CedarBackup2.util.LOCALE_VARS
    CedarBackup2.util.MBYTES_PER_GBYTE
    CedarBackup2.util.MINUTES_PER_HOUR
    CedarBackup2.util.MOUNT_COMMAND
    CedarBackup2.util.MTAB_FILE
    CedarBackup2.util.SECONDS_PER_DAY
    CedarBackup2.util.SECONDS_PER_MINUTE
    CedarBackup2.util.UMOUNT_COMMAND
    CedarBackup2.util.UNIT_BYTES
    CedarBackup2.util.UNIT_GBYTES
    CedarBackup2.util.UNIT_KBYTES
    CedarBackup2.util.UNIT_MBYTES
    CedarBackup2.util.UNIT_SECTORS
    CedarBackup2.util.__package__
    CedarBackup2.util.logger
    CedarBackup2.util.outputLogger
    CedarBackup2.writer.__package__
    CedarBackup2.writers.cdwriter.CDRECORD_COMMAND
    CedarBackup2.writers.cdwriter.EJECT_COMMAND
    CedarBackup2.writers.cdwriter.MEDIA_CDRW_74
    CedarBackup2.writers.cdwriter.MEDIA_CDRW_80
    CedarBackup2.writers.cdwriter.MEDIA_CDR_74
    CedarBackup2.writers.cdwriter.MEDIA_CDR_80
    CedarBackup2.writers.cdwriter.MKISOFS_COMMAND
    CedarBackup2.writers.cdwriter.__package__
    CedarBackup2.writers.cdwriter.logger
    CedarBackup2.writers.dvdwriter.EJECT_COMMAND
    CedarBackup2.writers.dvdwriter.GROWISOFS_COMMAND
    CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSR
    CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSRW
    CedarBackup2.writers.dvdwriter.__package__
    CedarBackup2.writers.dvdwriter.logger
    CedarBackup2.writers.util.MKISOFS_COMMAND
    CedarBackup2.writers.util.VOLNAME_COMMAND
    CedarBackup2.writers.util.__package__
    CedarBackup2.writers.util.logger
    CedarBackup2.xmlutil.FALSE_BOOLEAN_VALUES
    CedarBackup2.xmlutil.TRUE_BOOLEAN_VALUES
    CedarBackup2.xmlutil.VALID_BOOLEAN_VALUES
    CedarBackup2.xmlutil.__package__
    CedarBackup2.xmlutil.logger

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.tools.amazons3-pysrc.html0000664000175000017500000133204112642035646027442 0ustar pronovicpronovic00000000000000 CedarBackup2.tools.amazons3
    Package CedarBackup2 :: Package tools :: Module amazons3

    Source Code for Module CedarBackup2.tools.amazons3

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2014 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Cedar Backup, release 2 
      30  # Purpose  : Cedar Backup tool to synchronize an Amazon S3 bucket. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Notes 
      36  ######################################################################## 
      37   
      38  """ 
       39  Synchronizes a local directory with an Amazon S3 bucket. 
      40   
      41  No configuration is required; all necessary information is taken from the 
      42  command-line.  The only thing configuration would help with is the path 
      43  resolver interface, and it doesn't seem worth it to require configuration just 
      44  to get that. 
      45   
      46  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      47  """ 
      48   
      49  ######################################################################## 
      50  # Imported modules and constants 
      51  ######################################################################## 
      52   
      53  # System modules 
      54  import sys 
      55  import os 
      56  import logging 
      57  import getopt 
      58  import json 
      59  import warnings 
      60  import chardet 
      61   
      62  # Cedar Backup modules 
      63  from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
      64  from CedarBackup2.filesystem import FilesystemList 
      65  from CedarBackup2.cli import setupLogging, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, DEFAULT_MODE 
      66  from CedarBackup2.util import Diagnostics, splitCommandLine, encodePath 
      67  from CedarBackup2.util import executeCommand 
      68   
      69   
      70  ######################################################################## 
      71  # Module-wide constants and variables 
      72  ######################################################################## 
      73   
      74  logger = logging.getLogger("CedarBackup2.log.tools.amazons3") 
      75   
      76  AWS_COMMAND   = [ "aws" ] 
      77   
      78  SHORT_SWITCHES     = "hVbql:o:m:OdsDvw" 
      79  LONG_SWITCHES      = [ 'help', 'version', 'verbose', 'quiet', 
      80                         'logfile=', 'owner=', 'mode=', 
      81                         'output', 'debug', 'stack', 'diagnostics', 
      82                         'verifyOnly', 'ignoreWarnings', ] 
      83   
      84   
      85  ####################################################################### 
      86  # Options class 
      87  ####################################################################### 
      88   
    
    89 -class Options(object):
    90 91 ###################### 92 # Class documentation 93 ###################### 94 95 """ 96 Class representing command-line options for the cback-amazons3-sync script. 97 98 The C{Options} class is a Python object representation of the command-line 99 options of the cback script. 100 101 The object representation is two-way: a command line string or a list of 102 command line arguments can be used to create an C{Options} object, and then 103 changes to the object can be propagated back to a list of command-line 104 arguments or to a command-line string. An C{Options} object can even be 105 created from scratch programmatically (if you have a need for that). 106 107 There are two main levels of validation in the C{Options} class. The first 108 is field-level validation. Field-level validation comes into play when a 109 given field in an object is assigned to or updated. We use Python's 110 C{property} functionality to enforce specific validations on field values, 111 and in some places we even use customized list classes to enforce 112 validations on list members. You should expect to catch a C{ValueError} 113 exception when making assignments to fields if you are programmatically 114 filling an object. 115 116 The second level of validation is post-completion validation. Certain 117 validations don't make sense until an object representation of options is 118 fully "complete". We don't want these validations to apply all of the time, 119 because it would make building up a valid object from scratch a real pain. 120 For instance, we might have to do things in the right order to keep from 121 throwing exceptions, etc. 122 123 All of these post-completion validations are encapsulated in the 124 L{Options.validate} method. This method can be called at any time by a 125 client, and will always be called immediately after creating a C{Options} 126 object from a command line and before exporting a C{Options} object back to 127 a command line.
This way, we get acceptable ease-of-use but we also don't 128 accept or emit invalid command lines. 129 130 @note: Lists within this class are "unordered" for equality comparisons. 131 132 @sort: __init__, __repr__, __str__, __cmp__ 133 """ 134 135 ############## 136 # Constructor 137 ############## 138
    139 - def __init__(self, argumentList=None, argumentString=None, validate=True):
    140 """ 141 Initializes an options object. 142 143 If you initialize the object without passing either C{argumentList} or 144 C{argumentString}, the object will be empty and will be invalid until it 145 is filled in properly. 146 147 No reference to the original arguments is saved off by this class. Once 148 the data has been parsed (successfully or not) this original information 149 is discarded. 150 151 The argument list is assumed to be a list of arguments, not including the 152 name of the command, something like C{sys.argv[1:]}. If you pass 153 C{sys.argv} instead, things are not going to work. 154 155 The argument string will be parsed into an argument list by the 156 L{util.splitCommandLine} function (see the documentation for that 157 function for some important notes about its limitations). There is an 158 assumption that the resulting list will be equivalent to C{sys.argv[1:]}, 159 just like C{argumentList}. 160 161 Unless the C{validate} argument is C{False}, the L{Options.validate} 162 method will be called (with its default arguments) after successfully 163 parsing any passed-in command line. This validation ensures that 164 appropriate actions, etc. have been specified. Keep in mind that even if 165 C{validate} is C{False}, it might not be possible to parse the passed-in 166 command line, so an exception might still be raised. 167 168 @note: The command line format is specified by the L{_usage} function. 169 Call L{_usage} to see a usage statement for the cback script. 170 171 @note: It is strongly suggested that the C{validate} option always be set 172 to C{True} (the default) unless there is a specific need to read in 173 invalid command line arguments. 174 175 @param argumentList: Command line for a program. 176 @type argumentList: List of arguments, i.e. C{sys.argv} 177 178 @param argumentString: Command line for a program. 179 @type argumentString: String, i.e. 
"cback --verbose stage store" 180 181 @param validate: Validate the command line after parsing it. 182 @type validate: Boolean true/false. 183 184 @raise getopt.GetoptError: If the command-line arguments could not be parsed. 185 @raise ValueError: If the command-line arguments are invalid. 186 """ 187 self._help = False 188 self._version = False 189 self._verbose = False 190 self._quiet = False 191 self._logfile = None 192 self._owner = None 193 self._mode = None 194 self._output = False 195 self._debug = False 196 self._stacktrace = False 197 self._diagnostics = False 198 self._verifyOnly = False 199 self._ignoreWarnings = False 200 self._sourceDir = None 201 self._s3BucketUrl = None 202 if argumentList is not None and argumentString is not None: 203 raise ValueError("Use either argumentList or argumentString, but not both.") 204 if argumentString is not None: 205 argumentList = splitCommandLine(argumentString) 206 if argumentList is not None: 207 self._parseArgumentList(argumentList) 208 if validate: 209 self.validate()
    210 211 212 ######################### 213 # String representations 214 ######################### 215
    216 - def __repr__(self):
    217 """ 218 Official string representation for class instance. 219 """ 220 return self.buildArgumentString(validate=False)
    221
    222 - def __str__(self):
    223 """ 224 Informal string representation for class instance. 225 """ 226 return self.__repr__()
    227 228 229 ############################# 230 # Standard comparison method 231 ############################# 232
    233 - def __cmp__(self, other):
    234 """ 235 Definition of equals operator for this class. 236 Lists within this class are "unordered" for equality comparisons. 237 @param other: Other object to compare to. 238 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 239 """ 240 if other is None: 241 return 1 242 if self.help != other.help: 243 if self.help < other.help: 244 return -1 245 else: 246 return 1 247 if self.version != other.version: 248 if self.version < other.version: 249 return -1 250 else: 251 return 1 252 if self.verbose != other.verbose: 253 if self.verbose < other.verbose: 254 return -1 255 else: 256 return 1 257 if self.quiet != other.quiet: 258 if self.quiet < other.quiet: 259 return -1 260 else: 261 return 1 262 if self.logfile != other.logfile: 263 if self.logfile < other.logfile: 264 return -1 265 else: 266 return 1 267 if self.owner != other.owner: 268 if self.owner < other.owner: 269 return -1 270 else: 271 return 1 272 if self.mode != other.mode: 273 if self.mode < other.mode: 274 return -1 275 else: 276 return 1 277 if self.output != other.output: 278 if self.output < other.output: 279 return -1 280 else: 281 return 1 282 if self.debug != other.debug: 283 if self.debug < other.debug: 284 return -1 285 else: 286 return 1 287 if self.stacktrace != other.stacktrace: 288 if self.stacktrace < other.stacktrace: 289 return -1 290 else: 291 return 1 292 if self.diagnostics != other.diagnostics: 293 if self.diagnostics < other.diagnostics: 294 return -1 295 else: 296 return 1 297 if self.verifyOnly != other.verifyOnly: 298 if self.verifyOnly < other.verifyOnly: 299 return -1 300 else: 301 return 1 302 if self.ignoreWarnings != other.ignoreWarnings: 303 if self.ignoreWarnings < other.ignoreWarnings: 304 return -1 305 else: 306 return 1 307 if self.sourceDir != other.sourceDir: 308 if self.sourceDir < other.sourceDir: 309 return -1 310 else: 311 return 1 312 if self.s3BucketUrl != other.s3BucketUrl: 313 if self.s3BucketUrl < other.s3BucketUrl: 314 return -1 315 
else: 316 return 1 317 return 0
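The long chain of field comparisons above repeats one pattern per field. A compact equivalent (a sketch with a reduced field set, not the class's actual code) builds a key tuple in the same field order and lets tuple comparison do the ordering:

```python
class FlagsSketch:
    """Sketch: field-by-field ordering via a key tuple instead of
    repeated if/else blocks (same semantics for these boolean fields)."""
    def __init__(self, help=False, version=False, verbose=False):
        self.help = help
        self.version = version
        self.verbose = verbose

    def _key(self):
        # Same comparison order as the hand-written method: help first
        return (self.help, self.version, self.verbose)

    def __eq__(self, other):
        return other is not None and self._key() == other._key()

    def __lt__(self, other):
        # Anything compares greater than None, matching the original
        return other is not None and self._key() < other._key()
```

The tuple comparison stops at the first differing field, exactly as the cascading C{if} blocks do.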
    318 319 320 ############# 321 # Properties 322 ############# 323
    324 - def _setHelp(self, value):
    325 """ 326 Property target used to set the help flag. 327 No validations, but we normalize the value to C{True} or C{False}. 328 """ 329 if value: 330 self._help = True 331 else: 332 self._help = False
    333
    334 - def _getHelp(self):
    335 """ 336 Property target used to get the help flag. 337 """ 338 return self._help
    339
    340 - def _setVersion(self, value):
    341 """ 342 Property target used to set the version flag. 343 No validations, but we normalize the value to C{True} or C{False}. 344 """ 345 if value: 346 self._version = True 347 else: 348 self._version = False
    349
    350 - def _getVersion(self):
    351 """ 352 Property target used to get the version flag. 353 """ 354 return self._version
    355
    356 - def _setVerbose(self, value):
    357 """ 358 Property target used to set the verbose flag. 359 No validations, but we normalize the value to C{True} or C{False}. 360 """ 361 if value: 362 self._verbose = True 363 else: 364 self._verbose = False
    365
    366 - def _getVerbose(self):
    367 """ 368 Property target used to get the verbose flag. 369 """ 370 return self._verbose
    371
    372 - def _setQuiet(self, value):
    373 """ 374 Property target used to set the quiet flag. 375 No validations, but we normalize the value to C{True} or C{False}. 376 """ 377 if value: 378 self._quiet = True 379 else: 380 self._quiet = False
    381
    382 - def _getQuiet(self):
    383 """ 384 Property target used to get the quiet flag. 385 """ 386 return self._quiet
    387
    388 - def _setLogfile(self, value):
    389 """ 390 Property target used to set the logfile parameter. 391 @raise ValueError: If the value cannot be encoded properly. 392 """ 393 if value is not None: 394 if len(value) < 1: 395 raise ValueError("The logfile parameter must be a non-empty string.") 396 self._logfile = encodePath(value)
    397
    398 - def _getLogfile(self):
    399 """ 400 Property target used to get the logfile parameter. 401 """ 402 return self._logfile
    403
    404 - def _setOwner(self, value):
    405 """ 406 Property target used to set the owner parameter. 407 If not C{None}, the owner must be a C{(user,group)} tuple or list. 408 Strings (and inherited children of strings) are explicitly disallowed. 409 The value will be normalized to a tuple. 410 @raise ValueError: If the value is not valid. 411 """ 412 if value is None: 413 self._owner = None 414 else: 415 if isinstance(value, str): 416 raise ValueError("Must specify user and group tuple for owner parameter.") 417 if len(value) != 2: 418 raise ValueError("Must specify user and group tuple for owner parameter.") 419 if len(value[0]) < 1 or len(value[1]) < 1: 420 raise ValueError("User and group tuple values must be non-empty strings.") 421 self._owner = (value[0], value[1])
    422
    423 - def _getOwner(self):
    424 """ 425 Property target used to get the owner parameter. 426 The parameter is a tuple of C{(user, group)}. 427 """ 428 return self._owner
    429
    430 - def _setMode(self, value):
    431 """ 432 Property target used to set the mode parameter. 433 """ 434 if value is None: 435 self._mode = None 436 else: 437 try: 438 if isinstance(value, str): 439 value = int(value, 8) 440 else: 441 value = int(value) 442 except TypeError: 443 raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") 444 if value < 0: 445 raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") 446 self._mode = value
    447
    448 - def _getMode(self):
    449 """ 450 Property target used to get the mode parameter. 451 """ 452 return self._mode
    453
    454 - def _setOutput(self, value):
    455 """ 456 Property target used to set the output flag. 457 No validations, but we normalize the value to C{True} or C{False}. 458 """ 459 if value: 460 self._output = True 461 else: 462 self._output = False
    463
    464 - def _getOutput(self):
    465 """ 466 Property target used to get the output flag. 467 """ 468 return self._output
    469
    470 - def _setDebug(self, value):
    471 """ 472 Property target used to set the debug flag. 473 No validations, but we normalize the value to C{True} or C{False}. 474 """ 475 if value: 476 self._debug = True 477 else: 478 self._debug = False
    479
    480 - def _getDebug(self):
    481 """ 482 Property target used to get the debug flag. 483 """ 484 return self._debug
    485
    486 - def _setStacktrace(self, value):
    487 """ 488 Property target used to set the stacktrace flag. 489 No validations, but we normalize the value to C{True} or C{False}. 490 """ 491 if value: 492 self._stacktrace = True 493 else: 494 self._stacktrace = False
    495
    496 - def _getStacktrace(self):
    497 """ 498 Property target used to get the stacktrace flag. 499 """ 500 return self._stacktrace
    501
    502 - def _setDiagnostics(self, value):
    503 """ 504 Property target used to set the diagnostics flag. 505 No validations, but we normalize the value to C{True} or C{False}. 506 """ 507 if value: 508 self._diagnostics = True 509 else: 510 self._diagnostics = False
    511
    512 - def _getDiagnostics(self):
    513 """ 514 Property target used to get the diagnostics flag. 515 """ 516 return self._diagnostics
    517
    518 - def _setVerifyOnly(self, value):
    519 """ 520 Property target used to set the verifyOnly flag. 521 No validations, but we normalize the value to C{True} or C{False}. 522 """ 523 if value: 524 self._verifyOnly = True 525 else: 526 self._verifyOnly = False
    527
    528 - def _getVerifyOnly(self):
    529 """ 530 Property target used to get the verifyOnly flag. 531 """ 532 return self._verifyOnly
    533
    534 - def _setIgnoreWarnings(self, value):
    535 """ 536 Property target used to set the ignoreWarnings flag. 537 No validations, but we normalize the value to C{True} or C{False}. 538 """ 539 if value: 540 self._ignoreWarnings = True 541 else: 542 self._ignoreWarnings = False
    543
    544 - def _getIgnoreWarnings(self):
    545 """ 546 Property target used to get the ignoreWarnings flag. 547 """ 548 return self._ignoreWarnings
    549
    550 - def _setSourceDir(self, value):
    551 """ 552 Property target used to set the sourceDir parameter. 553 """ 554 if value is not None: 555 if len(value) < 1: 556 raise ValueError("The sourceDir parameter must be a non-empty string.") 557 self._sourceDir = value
    558
    559 - def _getSourceDir(self):
    560 """ 561 Property target used to get the sourceDir parameter. 562 """ 563 return self._sourceDir
    564
    565 - def _setS3BucketUrl(self, value):
    566 """ 567 Property target used to set the s3BucketUrl parameter. 568 """ 569 if value is not None: 570 if len(value) < 1: 571 raise ValueError("The s3BucketUrl parameter must be a non-empty string.") 572 self._s3BucketUrl = value
    573
    574 - def _getS3BucketUrl(self):
    575 """ 576 Property target used to get the s3BucketUrl parameter. 577 """ 578 return self._s3BucketUrl
    579 580 help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") 581 version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") 582 verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") 583 quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") 584 logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") 585 owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") 586 mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") 587 output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") 588 debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") 589 stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") 590 diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") 591 verifyOnly = property(_getVerifyOnly, _setVerifyOnly, None, "Command-line verifyOnly (C{-v,--verifyOnly}) flag.") 592 ignoreWarnings = property(_getIgnoreWarnings, _setIgnoreWarnings, None, "Command-line ignoreWarnings (C{-w,--ignoreWarnings}) flag.") 593 sourceDir = property(_getSourceDir, _setSourceDir, None, "Command-line sourceDir, source of sync.") 594 s3BucketUrl = property(_getS3BucketUrl, _setS3BucketUrl, None, "Command-line s3BucketUrl, target of sync.") 595 596 597 ################## 598 # Utility methods 599 ################## 600
    601 - def validate(self):
    602 """ 603 Validates command-line options represented by the object. 604 605 Unless C{--help} or C{--version} are supplied, at least one action must 606 be specified. Other validations (as for allowed values for particular 607 options) will be taken care of at assignment time by the properties 608 functionality. 609 610 @note: The command line format is specified by the L{_usage} function. 611 Call L{_usage} to see a usage statement for the cback script. 612 613 @raise ValueError: If one of the validations fails. 614 """ 615 if not self.help and not self.version and not self.diagnostics: 616 if self.sourceDir is None or self.s3BucketUrl is None: 617 raise ValueError("Source directory and S3 bucket URL are both required.")
    618
    619 - def buildArgumentList(self, validate=True):
    620 """ 621 Extracts options into a list of command line arguments. 622 623 The original order of the various arguments (if, indeed, the object was 624 initialized with a command-line) is not preserved in this generated 625 argument list. Besides that, the argument list is normalized to use the 626 long option names (i.e. --version rather than -V). The resulting list 627 will be suitable for passing back to the constructor in the 628 C{argumentList} parameter. Unlike L{buildArgumentString}, string 629 arguments are not quoted here, because there is no need for it. 630 631 Unless the C{validate} parameter is C{False}, the L{Options.validate} 632 method will be called (with its default arguments) against the 633 options before extracting the command line. If the options are not valid, 634 then an argument list will not be extracted. 635 636 @note: It is strongly suggested that the C{validate} option always be set 637 to C{True} (the default) unless there is a specific need to extract an 638 invalid command line. 639 640 @param validate: Validate the options before extracting the command line. 641 @type validate: Boolean true/false. 642 643 @return: List representation of command-line arguments. 644 @raise ValueError: If options within the object are invalid. 
645 """ 646 if validate: 647 self.validate() 648 argumentList = [] 649 if self._help: 650 argumentList.append("--help") 651 if self.version: 652 argumentList.append("--version") 653 if self.verbose: 654 argumentList.append("--verbose") 655 if self.quiet: 656 argumentList.append("--quiet") 657 if self.logfile is not None: 658 argumentList.append("--logfile") 659 argumentList.append(self.logfile) 660 if self.owner is not None: 661 argumentList.append("--owner") 662 argumentList.append("%s:%s" % (self.owner[0], self.owner[1])) 663 if self.mode is not None: 664 argumentList.append("--mode") 665 argumentList.append("%o" % self.mode) 666 if self.output: 667 argumentList.append("--output") 668 if self.debug: 669 argumentList.append("--debug") 670 if self.stacktrace: 671 argumentList.append("--stack") 672 if self.diagnostics: 673 argumentList.append("--diagnostics") 674 if self.verifyOnly: 675 argumentList.append("--verifyOnly") 676 if self.ignoreWarnings: 677 argumentList.append("--ignoreWarnings") 678 if self.sourceDir is not None: 679 argumentList.append(self.sourceDir) 680 if self.s3BucketUrl is not None: 681 argumentList.append(self.s3BucketUrl) 682 return argumentList
    683
    684 - def buildArgumentString(self, validate=True):
    685 """ 686 Extracts options into a string of command-line arguments. 687 688 The original order of the various arguments (if, indeed, the object was 689 initialized with a command-line) is not preserved in this generated 690 argument string. Besides that, the argument string is normalized to use 691 the long option names (i.e. --version rather than -V) and to quote all 692 string arguments with double quotes (C{"}). The resulting string will be 693 suitable for passing back to the constructor in the C{argumentString} 694 parameter. 695 696 Unless the C{validate} parameter is C{False}, the L{Options.validate} 697 method will be called (with its default arguments) against the options 698 before extracting the command line. If the options are not valid, then 699 an argument string will not be extracted. 700 701 @note: It is strongly suggested that the C{validate} option always be set 702 to C{True} (the default) unless there is a specific need to extract an 703 invalid command line. 704 705 @param validate: Validate the options before extracting the command line. 706 @type validate: Boolean true/false. 707 708 @return: String representation of command-line arguments. 709 @raise ValueError: If options within the object are invalid. 
710 """ 711 if validate: 712 self.validate() 713 argumentString = "" 714 if self._help: 715 argumentString += "--help " 716 if self.version: 717 argumentString += "--version " 718 if self.verbose: 719 argumentString += "--verbose " 720 if self.quiet: 721 argumentString += "--quiet " 722 if self.logfile is not None: 723 argumentString += "--logfile \"%s\" " % self.logfile 724 if self.owner is not None: 725 argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1]) 726 if self.mode is not None: 727 argumentString += "--mode %o " % self.mode 728 if self.output: 729 argumentString += "--output " 730 if self.debug: 731 argumentString += "--debug " 732 if self.stacktrace: 733 argumentString += "--stack " 734 if self.diagnostics: 735 argumentString += "--diagnostics " 736 if self.verifyOnly: 737 argumentString += "--verifyOnly " 738 if self.ignoreWarnings: 739 argumentString += "--ignoreWarnings " 740 if self.sourceDir is not None: 741 argumentString += "\"%s\" " % self.sourceDir 742 if self.s3BucketUrl is not None: 743 argumentString += "\"%s\" " % self.s3BucketUrl 744 return argumentString
    745
    746 - def _parseArgumentList(self, argumentList):
    747 """ 748 Internal method to parse a list of command-line arguments. 749 750 Most of the validation we do here has to do with whether the arguments 751 can be parsed and whether any values which exist are valid. We don't do 752 any validation as to whether required elements exist or whether elements 753 exist in the proper combination (instead, that's the job of the 754 L{validate} method). 755 756 For any of the options which supply parameters, if the option is 757 duplicated with long and short switches (i.e. C{-l} and a C{--logfile}) 758 then the long switch is used. If the same option is duplicated with the 759 same switch (long or short), then the last entry on the command line is 760 used. 761 762 @param argumentList: List of arguments to a command. 763 @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]} 764 765 @raise ValueError: If the argument list cannot be successfully parsed. 766 """ 767 switches = { } 768 opts, remaining = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES) 769 for o, a in opts: # push the switches into a hash 770 switches[o] = a 771 if switches.has_key("-h") or switches.has_key("--help"): 772 self.help = True 773 if switches.has_key("-V") or switches.has_key("--version"): 774 self.version = True 775 if switches.has_key("-b") or switches.has_key("--verbose"): 776 self.verbose = True 777 if switches.has_key("-q") or switches.has_key("--quiet"): 778 self.quiet = True 779 if switches.has_key("-l"): 780 self.logfile = switches["-l"] 781 if switches.has_key("--logfile"): 782 self.logfile = switches["--logfile"] 783 if switches.has_key("-o"): 784 self.owner = switches["-o"].split(":", 1) 785 if switches.has_key("--owner"): 786 self.owner = switches["--owner"].split(":", 1) 787 if switches.has_key("-m"): 788 self.mode = switches["-m"] 789 if switches.has_key("--mode"): 790 self.mode = switches["--mode"] 791 if switches.has_key("-O") or switches.has_key("--output"): 792 self.output = True 793 if 
switches.has_key("-d") or switches.has_key("--debug"): 794 self.debug = True 795 if switches.has_key("-s") or switches.has_key("--stack"): 796 self.stacktrace = True 797 if switches.has_key("-D") or switches.has_key("--diagnostics"): 798 self.diagnostics = True 799 if switches.has_key("-v") or switches.has_key("--verifyOnly"): 800 self.verifyOnly = True 801 if switches.has_key("-w") or switches.has_key("--ignoreWarnings"): 802 self.ignoreWarnings = True 803 try: 804 (self.sourceDir, self.s3BucketUrl) = remaining 805 except ValueError: 806 pass
    807 808 809 ####################################################################### 810 # Public functions 811 ####################################################################### 812 813 ################# 814 # cli() function 815 ################# 816
    817 -def cli():
    818 """ 819 Implements the command-line interface for the C{cback-amazons3-sync} script. 820 821 Essentially, this is the "main routine" for the cback-amazons3-sync script. It does 822 all of the argument processing for the script, and then also implements the 823 tool functionality. 824 825 This function looks pretty similiar to C{CedarBackup2.cli.cli()}. It's not 826 easy to refactor this code to make it reusable and also readable, so I've 827 decided to just live with the duplication. 828 829 A different error code is returned for each type of failure: 830 831 - C{1}: The Python interpreter version is < 2.7 832 - C{2}: Error processing command-line arguments 833 - C{3}: Error configuring logging 834 - C{5}: Backup was interrupted with a CTRL-C or similar 835 - C{6}: Error executing other parts of the script 836 837 @note: This script uses print rather than logging to the INFO level, because 838 it is interactive. Underlying Cedar Backup functionality uses the logging 839 mechanism exclusively. 840 841 @return: Error code as described above. 
842 """ 843 try: 844 if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]: 845 sys.stderr.write("Python 2 version 2.7 or greater required.\n") 846 return 1 847 except: 848 # sys.version_info isn't available before 2.0 849 sys.stderr.write("Python 2 version 2.7 or greater required.\n") 850 return 1 851 852 try: 853 options = Options(argumentList=sys.argv[1:]) 854 except Exception, e: 855 _usage() 856 sys.stderr.write(" *** Error: %s\n" % e) 857 return 2 858 859 if options.help: 860 _usage() 861 return 0 862 if options.version: 863 _version() 864 return 0 865 if options.diagnostics: 866 _diagnostics() 867 return 0 868 869 if options.stacktrace: 870 logfile = setupLogging(options) 871 else: 872 try: 873 logfile = setupLogging(options) 874 except Exception as e: 875 sys.stderr.write("Error setting up logging: %s\n" % e) 876 return 3 877 878 logger.info("Cedar Backup Amazon S3 sync run started.") 879 logger.info("Options were [%s]", options) 880 logger.info("Logfile is [%s]", logfile) 881 Diagnostics().logDiagnostics(method=logger.info) 882 883 if options.stacktrace: 884 _executeAction(options) 885 else: 886 try: 887 _executeAction(options) 888 except KeyboardInterrupt: 889 logger.error("Backup interrupted.") 890 logger.info("Cedar Backup Amazon S3 sync run completed with status 5.") 891 return 5 892 except Exception, e: 893 logger.error("Error executing backup: %s", e) 894 logger.info("Cedar Backup Amazon S3 sync run completed with status 6.") 895 return 6 896 897 logger.info("Cedar Backup Amazon S3 sync run completed with status 0.") 898 return 0
    899 900 901 ####################################################################### 902 # Utility functions 903 ####################################################################### 904 905 #################### 906 # _usage() function 907 #################### 908
    909 -def _usage(fd=sys.stderr):
    910 """ 911 Prints usage information for the cback-amazons3-sync script. 912 @param fd: File descriptor used to print information. 913 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 914 """ 915 fd.write("\n") 916 fd.write(" Usage: cback-amazons3-sync [switches] sourceDir s3bucketUrl\n") 917 fd.write("\n") 918 fd.write(" Cedar Backup Amazon S3 sync tool.\n") 919 fd.write("\n") 920 fd.write(" This Cedar Backup utility synchronizes a local directory to an Amazon S3\n") 921 fd.write(" bucket. After the sync is complete, a validation step is taken. An\n") 922 fd.write(" error is reported if the contents of the bucket do not match the\n") 923 fd.write(" source directory, or if the indicated size for any file differs.\n") 924 fd.write(" This tool is a wrapper over the AWS CLI command-line tool.\n") 925 fd.write("\n") 926 fd.write(" The following arguments are required:\n") 927 fd.write("\n") 928 fd.write(" sourceDir The local source directory on disk (must exist)\n") 929 fd.write(" s3BucketUrl The URL to the target Amazon S3 bucket\n") 930 fd.write("\n") 931 fd.write(" The following switches are accepted:\n") 932 fd.write("\n") 933 fd.write(" -h, --help Display this usage/help listing\n") 934 fd.write(" -V, --version Display version information\n") 935 fd.write(" -b, --verbose Print verbose output as well as logging to disk\n") 936 fd.write(" -q, --quiet Run quietly (display no output to the screen)\n") 937 fd.write(" -l, --logfile Path to logfile (default: %s)\n" % DEFAULT_LOGFILE) 938 fd.write(" -o, --owner Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])) 939 fd.write(" -m, --mode Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE) 940 fd.write(" -O, --output Record some sub-command (i.e. 
aws) output to the log\n") 941 fd.write(" -d, --debug Write debugging information to the log (implies --output)\n") 942 fd.write(" -s, --stack Dump Python stack trace instead of swallowing exceptions\n") # exactly 80 characters in width! 943 fd.write(" -D, --diagnostics Print runtime diagnostics to the screen and exit\n") 944 fd.write(" -v, --verifyOnly Only verify the S3 bucket contents, do not make changes\n") 945 fd.write(" -w, --ignoreWarnings Ignore warnings about problematic filename encodings\n") 946 fd.write("\n") 947 fd.write(" Typical usage would be something like:\n") 948 fd.write("\n") 949 fd.write(" cback-amazons3-sync /home/myuser s3://example.com-backup/myuser\n") 950 fd.write("\n") 951 fd.write(" This will sync the contents of /home/myuser into the indicated bucket.\n") 952 fd.write("\n")
    953 954 955 ###################### 956 # _version() function 957 ###################### 958
    959 -def _version(fd=sys.stdout):
    960 """ 961 Prints version information for the cback script. 962 @param fd: File descriptor used to print information. 963 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 964 """ 965 fd.write("\n") 966 fd.write(" Cedar Backup Amazon S3 sync tool.\n") 967 fd.write(" Included with Cedar Backup version %s, released %s.\n" % (VERSION, DATE)) 968 fd.write("\n") 969 fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL)) 970 fd.write(" See CREDITS for a list of included code and other contributors.\n") 971 fd.write(" This is free software; there is NO warranty. See the\n") 972 fd.write(" GNU General Public License version 2 for copying conditions.\n") 973 fd.write("\n") 974 fd.write(" Use the --help option for usage information.\n") 975 fd.write("\n")
    976 977 978 ########################## 979 # _diagnostics() function 980 ########################## 981
    982 -def _diagnostics(fd=sys.stdout):
    983 """ 984 Prints runtime diagnostics information. 985 @param fd: File descriptor used to print information. 986 @note: The C{fd} is used rather than C{print} to facilitate unit testing. 987 """ 988 fd.write("\n") 989 fd.write("Diagnostics:\n") 990 fd.write("\n") 991 Diagnostics().printDiagnostics(fd=fd, prefix=" ") 992 fd.write("\n")
    993 994 995 ############################ 996 # _executeAction() function 997 ############################ 998
    999 -def _executeAction(options):
    1000 """ 1001 Implements the guts of the cback-amazons3-sync tool. 1002 1003 @param options: Program command-line options. 1004 @type options: Options object. 1005 1006 @raise Exception: Under many generic error conditions 1007 """ 1008 sourceFiles = _buildSourceFiles(options.sourceDir) 1009 if not options.ignoreWarnings: 1010 _checkSourceFiles(options.sourceDir, sourceFiles) 1011 if not options.verifyOnly: 1012 _synchronizeBucket(options.sourceDir, options.s3BucketUrl) 1013 _verifyBucketContents(options.sourceDir, sourceFiles, options.s3BucketUrl)
    1014 1015 1016 ################################ 1017 # _buildSourceFiles() function 1018 ################################ 1019
    1020 -def _buildSourceFiles(sourceDir):
    1021 """ 1022 Build a list of files in a source directory 1023 @param sourceDir: Local source directory 1024 @return: FilesystemList with contents of source directory 1025 """ 1026 if not os.path.isdir(sourceDir): 1027 raise ValueError("Source directory does not exist on disk.") 1028 sourceFiles = FilesystemList() 1029 sourceFiles.addDirContents(sourceDir) 1030 return sourceFiles
    1031 1032 1033 ############################### 1034 # _checkSourceFiles() function 1035 ############################### 1036
    1037 -def _checkSourceFiles(sourceDir, sourceFiles):
    1038 """ 1039 Check source files, trying to guess which ones will have encoding problems. 1040 @param sourceDir: Local source directory 1041 @param sourceDir: Local source directory 1042 @raises ValueError: If a problem file is found 1043 @see U{http://opensourcehacker.com/2011/09/16/fix-linux-filename-encodings-with-python/} 1044 @see U{http://serverfault.com/questions/82821/how-to-tell-the-language-encoding-of-a-filename-on-linux} 1045 @see U{http://randysofia.com/2014/06/06/aws-cli-and-your-locale/} 1046 """ 1047 with warnings.catch_warnings(): 1048 warnings.simplefilter("ignore") # So we don't print unicode warnings from comparisons 1049 1050 encoding = Diagnostics().encoding 1051 1052 failed = False 1053 for entry in sourceFiles: 1054 result = chardet.detect(entry) 1055 source = entry.decode(result["encoding"]) 1056 try: 1057 target = source.encode(encoding) 1058 if source != target: 1059 logger.error("Inconsistent encoding for [%s]: got %s, but need %s", entry, result["encoding"], encoding) 1060 failed = True 1061 except UnicodeEncodeError: 1062 logger.error("Inconsistent encoding for [%s]: got %s, but need %s", entry, result["encoding"], encoding) 1063 failed = True 1064 1065 if not failed: 1066 logger.info("Completed checking source filename encoding (no problems found).") 1067 else: 1068 logger.error("Some filenames have inconsistent encodings and will likely cause sync problems.") 1069 logger.error("You may be able to fix this by setting a more sensible locale in your environment.") 1070 logger.error("Aternately, you can rename the problem files to be valid in the indicated locale.") 1071 logger.error("To ignore this warning and proceed anyway, use --ignoreWarnings") 1072 raise ValueError("Some filenames have inconsistent encodings and will likely cause sync problems.")
    1073 1074 1075 ################################ 1076 # _synchronizeBucket() function 1077 ################################ 1078
    1079 -def _synchronizeBucket(sourceDir, s3BucketUrl):
    1080 """ 1081 Synchronize a local directory to an Amazon S3 bucket. 1082 @param sourceDir: Local source directory 1083 @param s3BucketUrl: Target S3 bucket URL 1084 """ 1085 logger.info("Synchronizing local source directory up to Amazon S3.") 1086 args = [ "s3", "sync", sourceDir, s3BucketUrl, "--delete", "--recursive", ] 1087 result = executeCommand(AWS_COMMAND, args, returnOutput=False)[0] 1088 if result != 0: 1089 raise IOError("Error [%d] calling AWS CLI synchronize bucket." % result)
    1090 1091 1092 ################################### 1093 # _verifyBucketContents() function 1094 ################################### 1095
def _verifyBucketContents(sourceDir, sourceFiles, s3BucketUrl):
    1097 """ 1098 Verify that a source directory is equivalent to an Amazon S3 bucket. 1099 @param sourceDir: Local source directory 1100 @param sourceFiles: Filesystem list containing contents of source directory 1101 @param s3BucketUrl: Target S3 bucket URL 1102 """ 1103 # As of this writing, the documentation for the S3 API that we're using 1104 # below says that up to 1000 elements at a time are returned, and that we 1105 # have to manually handle pagination by looking for the IsTruncated element. 1106 # However, in practice, this is not true. I have been testing with 1107 # "aws-cli/1.4.4 Python/2.7.3 Linux/3.2.0-4-686-pae", installed through PIP. 1108 # No matter how many items exist in my bucket and prefix, I get back a 1109 # single JSON result. I've tested with buckets containing nearly 6000 1110 # elements. 1111 # 1112 # If I turn on debugging, it's clear that underneath, something in the API 1113 # is executing multiple list-object requests against AWS, and stiching 1114 # results together to give me back the final JSON result. The debug output 1115 # clearly incldues multiple requests, and each XML response (except for the 1116 # final one) contains <IsTruncated>true</IsTruncated>. 1117 # 1118 # This feature is not mentioned in the offical changelog for any of the 1119 # releases going back to 1.0.0. It appears to happen in the botocore 1120 # library, but I'll admit I can't actually find the code that implements it. 1121 # For now, all I can do is rely on this behavior and hope that the 1122 # documentation is out-of-date. I'm not going to write code that tries to 1123 # parse out IsTruncated if I can't actually test that code. 
   (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1)

   query = "Contents[].{Key: Key, Size: Size}"
   args = [ "s3api", "list-objects", "--bucket", bucket, "--prefix", prefix, "--query", query, ]
   (result, data) = executeCommand(AWS_COMMAND, args, returnOutput=True)
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI verify bucket contents." % result)

   contents = { }
   for entry in json.loads("".join(data)):
      key = entry["Key"].replace(prefix, "")
      size = long(entry["Size"])
      contents[key] = size

   failed = False
   for entry in sourceFiles:
      if os.path.isfile(entry):
         key = entry.replace(sourceDir, "")
         size = long(os.stat(entry).st_size)
         if key not in contents:
            logger.error("File was apparently not uploaded: [%s]", entry)
            failed = True
         else:
            if size != contents[key]:
               logger.error("File size differs [%s]: expected %s bytes but got %s bytes", entry, size, contents[key])
               failed = True

   if not failed:
      logger.info("Completed verifying Amazon S3 bucket contents (no problems found).")
   else:
      logger.error("There were differences between source directory and target S3 bucket.")
      raise ValueError("There were differences between source directory and target S3 bucket.")
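The comment block above describes pagination stitching that the AWS CLI apparently performs internally. If the IsTruncated handling ever had to be done manually, the loop would look roughly like this self-contained sketch, which uses a fake in-memory pager in place of real list-objects calls (all names here are hypothetical, not CedarBackup2 or botocore code):

```python
# Self-contained sketch of manual IsTruncated handling: keep requesting pages
# and stitching their Contents together until a page reports IsTruncated false.
def fetch_all(list_page, page_size=1000):
    """Collect every entry from a paginated listing into one list."""
    contents, marker = [], None
    while True:
        page = list_page(marker, page_size)
        contents.extend(page["Contents"])
        if not page["IsTruncated"]:
            return contents
        marker = page["Contents"][-1]["Key"]  # resume after the last key seen

def make_fake_pager(keys):
    """Simulate list-objects over an in-memory key list."""
    def list_page(marker, size):
        start = 0 if marker is None else keys.index(marker) + 1
        return {"Contents": [{"Key": k} for k in keys[start:start + size]],
                "IsTruncated": start + size < len(keys)}
    return list_page

keys = ["key-%04d" % i for i in range(2500)]
assert [c["Key"] for c in fetch_all(make_fake_pager(keys))] == keys
```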
#########################################################################
# Main routine
#########################################################################

if __name__ == "__main__":
   sys.exit(cli())

    CedarBackup2-2.26.5/doc/interface/module-tree.html0000664000175000017500000003003712642035643023403 0ustar pronovicpronovic00000000000000 Module Hierarchy
     

    Module Hierarchy

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend-module.html0000664000175000017500000002026512642035643026162 0ustar pronovicpronovic00000000000000 CedarBackup2.extend

    Package extend

    source code

    Official Cedar Backup Extensions

    This package provides official Cedar Backup extensions. These are Cedar Backup actions that are not part of the "standard" set of Cedar Backup actions, but are officially supported along with Cedar Backup.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Submodules

Variables
      __package__ = None
    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion.LocalConfig-class.html0000664000175000017500000013172412642035644032363 0ustar pronovicpronovic00000000000000 CedarBackup2.extend.subversion.LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Subversion-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <subversion> configuration section as the next child of a parent.
    source code
     
    _setSubversion(self, value)
    Property target used to set the subversion configuration value.
    source code
     
    _getSubversion(self)
    Property target used to get the subversion configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseSubversion(parent)
    Parses a subversion configuration section.
    source code
     
    _parseRepositories(parent)
    Reads a list of Repository objects from immediately beneath the parent.
    source code
     
    _addRepository(xmlDom, parentNode, repository)
    Adds a repository container as the next child of a parent.
    source code
     
    _parseRepositoryDirs(parent)
    Reads a list of RepositoryDir objects from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _addRepositoryDir(xmlDom, parentNode, repositoryDir)
    Adds a repository dir container as the next child of a parent.
    source code
Properties
      subversion
    Subversion configuration in terms of a SubversionConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Subversion configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry.

    Each repository must contain a repository path, and then must be either able to take collect mode and compress mode configuration from the parent SubversionConfig object, or must set each value on its own.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <subversion> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      collectMode    //cb_config/subversion/collectMode
      compressMode   //cb_config/subversion/compressMode
    

    We also add groups of the following items, one list element per item:

      repository     //cb_config/subversion/repository
      repository_dir //cb_config/subversion/repository_dir
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setSubversion(self, value)

    source code 

    Property target used to set the subversion configuration value. If not None, the value must be a SubversionConfig object.

    Raises:
    • ValueError - If the value is not a SubversionConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the subversion configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseSubversion(parent)
    Static Method

    source code 

    Parses a subversion configuration section.

    We read the following individual fields:

      collectMode    //cb_config/subversion/collect_mode
      compressMode   //cb_config/subversion/compress_mode
    

    We also read groups of the following item, one list element per item:

      repositories    //cb_config/subversion/repository
      repository_dirs //cb_config/subversion/repository_dir
    

    The repositories are parsed by _parseRepositories, and the repository dirs are parsed by _parseRepositoryDirs.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    SubversionConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseRepositories(parent)
    Static Method

    source code 

    Reads a list of Repository objects from immediately beneath the parent.

    We read the following individual fields:

      repositoryType          type
      repositoryPath          abs_path
      collectMode             collect_mode
  compressMode            compress_mode
    

    The type field is optional, and its value is kept around only for reference.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of Repository objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _addRepository(xmlDom, parentNode, repository)
    Static Method

    source code 

    Adds a repository container as the next child of a parent.

    We add the following fields to the document:

      repositoryType          repository/type
      repositoryPath          repository/abs_path
      collectMode             repository/collect_mode
      compressMode            repository/compress_mode
    

    The <repository> node itself is created as the next child of the parent node. This method only adds one repository node. The parent must loop for each repository in the SubversionConfig object.

    If repository is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • repository - Repository to be added to the document.

    _parseRepositoryDirs(parent)
    Static Method

    source code 

    Reads a list of RepositoryDir objects from immediately beneath the parent.

    We read the following individual fields:

      repositoryType          type
      directoryPath           abs_path
      collectMode             collect_mode
  compressMode            compress_mode
    

    We also read groups of the following items, one list element per item:

      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    The type field is optional, and its value is kept around only for reference.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of RepositoryDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are none of some pattern (i.e. no relative path items) then None will be returned for that item in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (relative, patterns) exclusions.
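The (relative, patterns) contract described above — each element of the tuple is None when no matching items exist — can be sketched with the stdlib DOM. This `parse_exclusions` is a hypothetical illustration, not the CedarBackup2 implementation:

```python
# Hypothetical sketch of the exclusions contract: gather rel_path and pattern
# text values beneath a parent node, returning None for any empty list.
from xml.dom.minidom import parseString

def parse_exclusions(parent):
    def texts(tag):
        vals = [n.firstChild.data for n in parent.getElementsByTagName(tag)]
        return vals or None
    return (texts("rel_path"), texts("pattern"))

doc = parseString("<exclude><rel_path>logs</rel_path></exclude>")
assert parse_exclusions(doc.documentElement) == (["logs"], None)
```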

    _addRepositoryDir(xmlDom, parentNode, repositoryDir)
    Static Method

    source code 

    Adds a repository dir container as the next child of a parent.

    We add the following fields to the document:

      repositoryType          repository_dir/type
      directoryPath           repository_dir/abs_path
      collectMode             repository_dir/collect_mode
      compressMode            repository_dir/compress_mode
    

    We also add groups of the following items, one list element per item:

      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

The <repository_dir> node itself is created as the next child of the parent node. This method only adds one repository dir node. The parent must loop for each repository dir in the SubversionConfig object.

    If repositoryDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • repositoryDir - Repository dir to be added to the document.

Property Details

    subversion

    Subversion configuration in terms of a SubversionConfig object.

    Get Method:
    _getSubversion(self) - Property target used to get the subversion configuration value.
    Set Method:
    _setSubversion(self, value) - Property target used to set the subversion configuration value.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mbox.LocalConfig-class.html0000664000175000017500000013006712642035644031130 0ustar pronovicpronovic00000000000000 CedarBackup2.extend.mbox.LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit Mbox-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds an <mbox> configuration section as the next child of a parent.
    source code
     
    _setMbox(self, value)
    Property target used to set the mbox configuration value.
    source code
     
    _getMbox(self)
    Property target used to get the mbox configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseMbox(parent)
    Parses an mbox configuration section.
    source code
     
    _parseMboxFiles(parent)
    Reads a list of MboxFile objects from immediately beneath the parent.
    source code
     
    _parseMboxDirs(parent)
    Reads a list of MboxDir objects from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _addMboxFile(xmlDom, parentNode, mboxFile)
    Adds an mbox file container as the next child of a parent.
    source code
     
    _addMboxDir(xmlDom, parentNode, mboxDir)
    Adds an mbox directory container as the next child of a parent.
    source code
Properties
      mbox
    Mbox configuration in terms of a MboxConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Mbox configuration must be filled in. Within that, the collect mode and compress mode are both optional, but the list of repositories must contain at least one entry.

    Each configured file or directory must contain an absolute path, and then must be either able to take collect mode and compress mode configuration from the parent MboxConfig object, or must set each value on its own.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds an <mbox> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      collectMode    //cb_config/mbox/collectMode
      compressMode   //cb_config/mbox/compressMode
    

    We also add groups of the following items, one list element per item:

      mboxFiles      //cb_config/mbox/file
      mboxDirs       //cb_config/mbox/dir
    

    The mbox files and mbox directories are added by _addMboxFile and _addMboxDir.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setMbox(self, value)

    source code 

    Property target used to set the mbox configuration value. If not None, the value must be a MboxConfig object.

    Raises:
    • ValueError - If the value is not a MboxConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the mbox configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseMbox(parent)
    Static Method

    source code 

    Parses an mbox configuration section.

    We read the following individual fields:

      collectMode    //cb_config/mbox/collect_mode
      compressMode   //cb_config/mbox/compress_mode
    

    We also read groups of the following item, one list element per item:

      mboxFiles      //cb_config/mbox/file
      mboxDirs       //cb_config/mbox/dir
    

    The mbox files are parsed by _parseMboxFiles and the mbox directories are parsed by _parseMboxDirs.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    MboxConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseMboxFiles(parent)
    Static Method

    source code 

    Reads a list of MboxFile objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             collect_mode
  compressMode            compress_mode
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of MboxFile objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseMboxDirs(parent)
    Static Method

    source code 

    Reads a list of MboxDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             collect_mode
  compressMode            compress_mode
    

    We also read groups of the following items, one list element per item:

      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    List of MboxDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExclusions(parentNode)
    Static Method

    source code 

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are none of some pattern (i.e. no relative path items) then None will be returned for that item in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (relative, patterns) exclusions.

    _addMboxFile(xmlDom, parentNode, mboxFile)
    Static Method

    source code 

    Adds an mbox file container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            file/abs_path
      collectMode             file/collect_mode
      compressMode            file/compress_mode
    

    The <file> node itself is created as the next child of the parent node. This method only adds one mbox file node. The parent must loop for each mbox file in the MboxConfig object.

    If mboxFile is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • mboxFile - MboxFile to be added to the document.

    _addMboxDir(xmlDom, parentNode, mboxDir)
    Static Method

    source code 

    Adds an mbox directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      compressMode            dir/compress_mode
    

    We also add groups of the following items, one list element per item:

      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one mbox directory node. The parent must loop for each mbox directory in the MboxConfig object.

    If mboxDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
    • mboxDir - MboxDir to be added to the document.

Property Details

    mbox

    Mbox configuration in terms of a MboxConfig object.

    Get Method:
    _getMbox(self) - Property target used to get the mbox configuration value.
    Set Method:
    _setMbox(self, value) - Property target used to set the mbox configuration value.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.StoreConfig-class.html0000664000175000017500000021336112642035644030203 0ustar pronovicpronovic00000000000000 CedarBackup2.config.StoreConfig

    Class StoreConfig

    source code

    object --+
             |
            StoreConfig
    

    Class representing a Cedar Backup store configuration.

    The following restrictions exist on data in this class:

    • The source directory must be an absolute path.
    • The media type must be one of the values in VALID_MEDIA_TYPES.
    • The device type must be one of the values in VALID_DEVICE_TYPES.
    • The device path must be an absolute path.
    • The SCSI id, if provided, must be in the form specified by validateScsiId.
• The drive speed must be an integer >= 1.
• The blanking behavior must be a BlankBehavior object.
• The refresh media delay must be an integer >= 0.
• The eject delay must be an integer >= 0.

    Note that although the blanking factor must be a positive floating point number, it is stored as a string. This is done so that we can losslessly go back and forth between XML and object representations of configuration.

Instance Methods
     
    __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None)
    Constructor for the StoreConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setSourceDir(self, value)
    Property target used to set the source directory.
    source code
     
    _getSourceDir(self)
    Property target used to get the source directory.
    source code
     
    _setMediaType(self, value)
    Property target used to set the media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type.
    source code
     
    _setDeviceType(self, value)
    Property target used to set the device type.
    source code
     
    _getDeviceType(self)
    Property target used to get the device type.
    source code
     
    _setDevicePath(self, value)
    Property target used to set the device path.
    source code
     
    _getDevicePath(self)
    Property target used to get the device path.
    source code
     
    _setDeviceScsiId(self, value)
    Property target used to set the SCSI id The SCSI id must be valid per validateScsiId.
    source code
     
    _getDeviceScsiId(self)
    Property target used to get the SCSI id.
    source code
     
    _setDriveSpeed(self, value)
    Property target used to set the drive speed.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _setCheckData(self, value)
    Property target used to set the check data flag.
    source code
     
    _getCheckData(self)
    Property target used to get the check data flag.
    source code
     
    _setCheckMedia(self, value)
    Property target used to set the check media flag.
    source code
     
    _getCheckMedia(self)
    Property target used to get the check media flag.
    source code
     
    _setWarnMidnite(self, value)
    Property target used to set the midnite warning flag.
    source code
     
    _getWarnMidnite(self)
    Property target used to get the midnite warning flag.
    source code
     
    _setNoEject(self, value)
    Property target used to set the no-eject flag.
    source code
     
    _getNoEject(self)
    Property target used to get the no-eject flag.
    source code
     
    _setBlankBehavior(self, value)
    Property target used to set blanking behavior configuration.
    source code
     
    _getBlankBehavior(self)
    Property target used to get the blanking behavior configuration.
    source code
     
    _setRefreshMediaDelay(self, value)
    Property target used to set the refreshMediaDelay.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the action refreshMediaDelay.
    source code
     
    _setEjectDelay(self, value)
    Property target used to set the ejectDelay.
    source code
     
    _getEjectDelay(self)
    Property target used to get the action ejectDelay.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      sourceDir
    Directory whose contents should be written to media.
      mediaType
    Type of the media (see notes above).
      deviceType
    Type of the device (optional, see notes above).
      devicePath
    Filesystem device name for writer device.
      deviceScsiId
    SCSI id for writer device (optional, see notes above).
      driveSpeed
    Speed of the drive.
      checkData
    Whether resulting image should be validated.
      checkMedia
    Whether media should be checked before being written to.
      warnMidnite
    Whether to generate warnings for crossing midnite.
      noEject
    Indicates that the writer device should not be ejected.
      blankBehavior
    Controls optimized blanking behavior.
      refreshMediaDelay
    Delay, in seconds, to add after refreshing media.
      ejectDelay
    Delay, in seconds, to add after ejecting media before closing the tray.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, sourceDir=None, mediaType=None, deviceType=None, devicePath=None, deviceScsiId=None, driveSpeed=None, checkData=False, warnMidnite=False, noEject=False, checkMedia=False, blankBehavior=None, refreshMediaDelay=None, ejectDelay=None)
    (Constructor)

    source code 

    Constructor for the StoreConfig class.

    Parameters:
    • sourceDir - Directory whose contents should be written to media.
    • mediaType - Type of the media (see notes above).
    • deviceType - Type of the device (optional, see notes above).
    • devicePath - Filesystem device name for writer device, i.e. /dev/cdrw.
    • deviceScsiId - SCSI id for writer device, i.e. [<method>:]scsibus,target,lun.
    • driveSpeed - Speed of the drive, i.e. 2 for 2x drive, etc.
    • checkData - Whether resulting image should be validated.
    • checkMedia - Whether media should be checked before being written to.
    • warnMidnite - Whether to generate warnings for crossing midnite.
    • noEject - Indicates that the writer device should not be ejected.
    • blankBehavior - Controls optimized blanking behavior.
    • refreshMediaDelay - Delay, in seconds, to add after refreshing media.
    • ejectDelay - Delay, in seconds, to add after ejecting media before closing the tray.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__
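    The validation rules above are enforced by the property setters documented under Method Details. A minimal standalone sketch of that property-target pattern, using two of the documented rules (devicePath must be absolute, ejectDelay must be an integer >= 0) — the class name and structure here are illustrative, not the real StoreConfig code:

    ```python
    import os

    class StoreLikeConfig(object):
        """Illustrative sketch of the property-target validation pattern."""

        def __init__(self, devicePath=None, ejectDelay=None):
            self._devicePath = None
            self._ejectDelay = None
            self.devicePath = devicePath    # assignment goes through the property setter
            self.ejectDelay = ejectDelay

        def _setDevicePath(self, value):
            # Must be an absolute path if not None; need not exist on disk.
            if value is not None and not os.path.isabs(value):
                raise ValueError("Device path must be an absolute path.")
            self._devicePath = value

        def _getDevicePath(self):
            return self._devicePath

        def _setEjectDelay(self, value):
            # Must be an integer >= 0 if not None.
            if value is not None:
                value = int(value)
                if value < 0:
                    raise ValueError("Eject delay must be an integer >= 0.")
            self._ejectDelay = value

        def _getEjectDelay(self):
            return self._ejectDelay

        devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.")
        ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, after ejecting media.")
    ```

    With this structure, an invalid value passed to the constructor (for instance a relative device path, or ejectDelay=-1) raises ValueError at assignment time, which is why the constructor is documented as raising ValueError when any value is invalid.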

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
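    The -1/0/1 return value follows Python 2's three-way comparison contract, from which the ==, < and > operators are derived. Expressed portably (the helper name is illustrative):

    ```python
    def cmp_values(a, b):
        # Python 2 style three-way comparison: -1 if a < b, 0 if equal, 1 if a > b.
        # Booleans subtract as integers, so this works without branching.
        return (a > b) - (a < b)
    ```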

    _setSourceDir(self, value)

    source code 

    Property target used to set the source directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setMediaType(self, value)

    source code 

    Property target used to set the media type. The value must be one of VALID_MEDIA_TYPES.

    Raises:
    • ValueError - If the value is not valid.

    _setDeviceType(self, value)

    source code 

    Property target used to set the device type. The value must be one of VALID_DEVICE_TYPES.

    Raises:
    • ValueError - If the value is not valid.

    _setDevicePath(self, value)

    source code 

    Property target used to set the device path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setDeviceScsiId(self, value)

    source code 

    Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.

    Raises:
    • ValueError - If the value is not valid.

    _setDriveSpeed(self, value)

    source code 

    Property target used to set the drive speed. The drive speed must be valid per validateDriveSpeed.

    Raises:
    • ValueError - If the value is not valid.

    _setCheckData(self, value)

    source code 

    Property target used to set the check data flag. No validations, but we normalize the value to True or False.

    _setCheckMedia(self, value)

    source code 

    Property target used to set the check media flag. No validations, but we normalize the value to True or False.

    _setWarnMidnite(self, value)

    source code 

    Property target used to set the midnite warning flag. No validations, but we normalize the value to True or False.

    _setNoEject(self, value)

    source code 

    Property target used to set the no-eject flag. No validations, but we normalize the value to True or False.

    _setBlankBehavior(self, value)

    source code 

    Property target used to set blanking behavior configuration. If not None, the value must be a BlankBehavior object.

    Raises:
    • ValueError - If the value is not a BlankBehavior

    _setRefreshMediaDelay(self, value)

    source code 

    Property target used to set the refreshMediaDelay. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setEjectDelay(self, value)

    source code 

    Property target used to set the ejectDelay. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    Property Details [hide private]

    sourceDir

    Directory whose contents should be written to media.

    Get Method:
    _getSourceDir(self) - Property target used to get the source directory.
    Set Method:
    _setSourceDir(self, value) - Property target used to set the source directory.

    mediaType

    Type of the media (see notes above).

    Get Method:
    _getMediaType(self) - Property target used to get the media type.
    Set Method:
    _setMediaType(self, value) - Property target used to set the media type.

    deviceType

    Type of the device (optional, see notes above).

    Get Method:
    _getDeviceType(self) - Property target used to get the device type.
    Set Method:
    _setDeviceType(self, value) - Property target used to set the device type.

    devicePath

    Filesystem device name for writer device.

    Get Method:
    _getDevicePath(self) - Property target used to get the device path.
    Set Method:
    _setDevicePath(self, value) - Property target used to set the device path.

    deviceScsiId

    SCSI id for writer device (optional, see notes above).

    Get Method:
    _getDeviceScsiId(self) - Property target used to get the SCSI id.
    Set Method:
    _setDeviceScsiId(self, value) - Property target used to set the SCSI id. The SCSI id must be valid per validateScsiId.

    driveSpeed

    Speed of the drive.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.
    Set Method:
    _setDriveSpeed(self, value) - Property target used to set the drive speed.

    checkData

    Whether resulting image should be validated.

    Get Method:
    _getCheckData(self) - Property target used to get the check data flag.
    Set Method:
    _setCheckData(self, value) - Property target used to set the check data flag.

    checkMedia

    Whether media should be checked before being written to.

    Get Method:
    _getCheckMedia(self) - Property target used to get the check media flag.
    Set Method:
    _setCheckMedia(self, value) - Property target used to set the check media flag.

    warnMidnite

    Whether to generate warnings for crossing midnite.

    Get Method:
    _getWarnMidnite(self) - Property target used to get the midnite warning flag.
    Set Method:
    _setWarnMidnite(self, value) - Property target used to set the midnite warning flag.

    noEject

    Indicates that the writer device should not be ejected.

    Get Method:
    _getNoEject(self) - Property target used to get the no-eject flag.
    Set Method:
    _setNoEject(self, value) - Property target used to set the no-eject flag.

    blankBehavior

    Controls optimized blanking behavior.

    Get Method:
    _getBlankBehavior(self) - Property target used to get the blanking behavior configuration.
    Set Method:
    _setBlankBehavior(self, value) - Property target used to set blanking behavior configuration.

    refreshMediaDelay

    Delay, in seconds, to add after refreshing media.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the action refreshMediaDelay.
    Set Method:
    _setRefreshMediaDelay(self, value) - Property target used to set the refreshMediaDelay.

    ejectDelay

    Delay, in seconds, to add after ejecting media before closing the tray.

    Get Method:
    _getEjectDelay(self) - Property target used to get the action ejectDelay.
    Set Method:
    _setEjectDelay(self, value) - Property target used to set the ejectDelay.

    CedarBackup2.util.ObjectTypeList
    Package CedarBackup2 :: Module util :: Class ObjectTypeList

    Class ObjectTypeList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    ObjectTypeList
    

    Class representing a list containing only objects with a certain type.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list matches the type that is requested. The comparison uses the built-in isinstance, which should allow subclasses of the requested type to be added to the list as well.

    The objectName value will be used in exceptions, i.e. "Item must be a CollectDir object." if objectName is "CollectDir".
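    The behavior described above can be sketched as follows — this mirrors the documented semantics (isinstance check in append, insert and extend; objectName used in the error message) but is an illustrative reimplementation, not the actual CedarBackup2.util code:

    ```python
    class ObjectTypeListSketch(list):
        """Sketch: a list that only accepts items of one type (illustrative)."""

        def __init__(self, objectType, objectName):
            list.__init__(self)
            self.objectType = objectType
            self.objectName = objectName

        def _check(self, item):
            # isinstance() allows subclasses of the requested type as well.
            if not isinstance(item, self.objectType):
                raise ValueError("Item must be a %s object." % self.objectName)

        def append(self, item):
            self._check(item)
            list.append(self, item)

        def insert(self, index, item):
            self._check(item)
            list.insert(self, index, item)

        def extend(self, seq):
            for item in seq:
                self._check(item)
            list.extend(self, seq)
    ```

    Items added through the standard list machinery are validated, so a caller cannot accidentally mix types; only direct slice assignment bypasses the check.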

    Instance Methods [hide private]
    new empty list
    __init__(self, objectType, objectName)
    Initializes a typed list for a particular type.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
    Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables [hide private]

    Inherited from list: __hash__

    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, objectType, objectName)
    (Constructor)

    source code 

    Initializes a typed list for a particular type.

    Parameters:
    • objectType - Type that the list elements must match.
    • objectName - Short string containing the "name" of the type.
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.insert

    extend(self, seq)

    source code 

    Overrides the standard extend method.

    Raises:
    • ValueError - If item does not match requested type.
    Overrides: list.extend

    CedarBackup2.config.PeersConfig
    Package CedarBackup2 :: Module config :: Class PeersConfig

    Class PeersConfig

    source code

    object --+
             |
            PeersConfig
    

    Class representing Cedar Backup global peer configuration.

    This section contains a list of local and remote peers in a master's backup pool. The section is optional. If a master does not define this section, then all peers are unmanaged, and the stage configuration section must explicitly list any peer that is to be staged. If this section is configured, then peers may be managed or unmanaged, and the stage section peer configuration (if any) completely overrides this configuration.

    The following restrictions exist on data in this class:

    • The list of local peers must contain only LocalPeer objects
    • The list of remote peers must contain only RemotePeer objects

    Note: Lists within this class are "unordered" for equality comparisons.
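    The "unordered" equality in the note comes from the UnorderedList base class in CedarBackup2.util: two lists compare equal if they contain the same elements, regardless of order. Conceptually it behaves like this sketch (illustrative, not the real implementation):

    ```python
    class UnorderedListSketch(list):
        """Sketch: list equality that ignores element order (illustrative)."""

        def _contents_equal(self, other):
            # Equal iff every element of self can be matched one-for-one
            # against an element of other, in any order.
            if not isinstance(other, list) or len(self) != len(other):
                return False
            remaining = list(other)
            for item in self:
                if item in remaining:
                    remaining.remove(item)
                else:
                    return False
            return True

        def __eq__(self, other):
            return self._contents_equal(other)

        def __ne__(self, other):
            return not self._contents_equal(other)
    ```

    This matters for configuration comparison: two PeersConfig objects whose peer lists hold the same peers in a different order are considered equal.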

    Instance Methods [hide private]
     
    __init__(self, localPeers=None, remotePeers=None)
    Constructor for the PeersConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    hasPeers(self)
    Indicates whether any peers are filled into this object.
    source code
     
    _setLocalPeers(self, value)
    Property target used to set the local peers list.
    source code
     
    _getLocalPeers(self)
    Property target used to get the local peers list.
    source code
     
    _setRemotePeers(self, value)
    Property target used to set the remote peers list.
    source code
     
    _getRemotePeers(self)
    Property target used to get the remote peers list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      localPeers
    List of local peers.
      remotePeers
    List of remote peers.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, localPeers=None, remotePeers=None)
    (Constructor)

    source code 

    Constructor for the PeersConfig class.

    Parameters:
    • localPeers - List of local peers.
    • remotePeers - List of remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    hasPeers(self)

    source code 

    Indicates whether any peers are filled into this object.

    Returns:
    Boolean true if any local or remote peers are filled in, false otherwise.

    _setLocalPeers(self, value)

    source code 

    Property target used to set the local peers list. Either the value must be None or each element must be a LocalPeer.

    Raises:
    • ValueError - If the value is not a LocalPeer

    _setRemotePeers(self, value)

    source code 

    Property target used to set the remote peers list. Either the value must be None or each element must be a RemotePeer.

    Raises:
    • ValueError - If the value is not a RemotePeer

    Property Details [hide private]

    localPeers

    List of local peers.

    Get Method:
    _getLocalPeers(self) - Property target used to get the local peers list.
    Set Method:
    _setLocalPeers(self, value) - Property target used to set the local peers list.

    remotePeers

    List of remote peers.

    Get Method:
    _getRemotePeers(self) - Property target used to get the remote peers list.
    Set Method:
    _setRemotePeers(self, value) - Property target used to set the remote peers list.

    CedarBackup2.filesystem
    Package CedarBackup2 :: Module filesystem

    Source Code for Module CedarBackup2.filesystem

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Cedar Backup, release 2 
      30  # Purpose  : Provides filesystem-related objects. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides filesystem-related objects. 
      40  @sort: FilesystemList, BackupFileList, PurgeItemList 
      41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      42  """ 
      43   
      44   
      45  ######################################################################## 
      46  # Imported modules 
      47  ######################################################################## 
      48   
      49  # System modules 
      50  import os 
      51  import re 
      52  import math 
      53  import logging 
      54  import tarfile 
      55   
      56  # Cedar Backup modules 
      57  from CedarBackup2.knapsack import firstFit, bestFit, worstFit, alternateFit 
      58  from CedarBackup2.util import AbsolutePathList, UnorderedList, RegexList 
      59  from CedarBackup2.util import removeKeys, displayBytes, calculateFileAge, encodePath, dereferenceLink 
      60   
      61   
      62  ######################################################################## 
      63  # Module-wide variables 
      64  ######################################################################## 
      65   
      66  logger = logging.getLogger("CedarBackup2.log.filesystem") 
    
      67   
      68  ######################################################################## 
      69  # FilesystemList class definition 
      70  ######################################################################## 
      71   
      72   
      73  class FilesystemList(list): 
      74   
      75     ###################### 
      76     # Class documentation 
      77     ###################### 
      78   
      79     """ 
      80     Represents a list of filesystem items. 
      81   
      82     This is a generic class that represents a list of filesystem items. Callers 
      83     can add individual files or directories to the list, or can recursively add 
      84     the contents of a directory. The class also allows for up-front exclusions 
      85     in several forms (all files, all directories, all items matching a pattern, 
      86     all items whose basename matches a pattern, or all directories containing a 
      87     specific "ignore file"). Symbolic links are typically backed up 
      88     non-recursively, i.e. the link to a directory is backed up, but not the 
      89     contents of that link (we don't want to deal with recursive loops, etc.). 
      90   
      91     The custom methods such as L{addFile} will only add items if they exist on 
      92     the filesystem and do not match any exclusions that are already in place. 
      93     However, since a FilesystemList is a subclass of Python's standard list 
      94     class, callers can also add items to the list in the usual way, using 
      95     methods like C{append()} or C{insert()}. No validations apply to items 
      96     added to the list in this way; however, many list-manipulation methods deal 
      97     "gracefully" with items that don't exist in the filesystem, often by 
      98     ignoring them. 
      99   
     100     Once a list has been created, callers can remove individual items from the 
     101     list using standard methods like C{pop()} or C{remove()} or they can use 
     102     custom methods to remove specific types of entries or entries which match a 
     103     particular pattern. 
     104   
     105     @note: Regular expression patterns that apply to paths are assumed to be 
     106     bounded at front and back by the beginning and end of the string, i.e. they 
     107     are treated as if they begin with C{^} and end with C{$}. This is true 
     108     whether we are matching a complete path or a basename. 
     109   
     110     @note: Some platforms, like Windows, do not support soft links. On those 
     111     platforms, the ignore-soft-links flag can be set, but it won't do any good 
     112     because the operating system never reports a file as a soft link. 
     113   
     114     @sort: __init__, addFile, addDir, addDirContents, removeFiles, removeDirs, 
     115            removeLinks, removeMatch, removeInvalid, normalize, 
     116            excludeFiles, excludeDirs, excludeLinks, excludePaths, 
     117            excludePatterns, excludeBasenamePatterns, ignoreFile 
     118     """ 
     119   
     120   
     121     ############## 
     122     # Constructor 
     123     ############## 
     124   
     125     def __init__(self): 
     126        """Initializes a list with no configured exclusions.""" 
     127        list.__init__(self) 
     128        self._excludeFiles = False 
     129        self._excludeDirs = False 
     130        self._excludeLinks = False 
     131        self._excludePaths = None 
     132        self._excludePatterns = None 
     133        self._excludeBasenamePatterns = None 
     134        self._ignoreFile = None 
     135        self.excludeFiles = False 
     136        self.excludeLinks = False 
     137        self.excludeDirs = False 
     138        self.excludePaths = [] 
     139        self.excludePatterns = RegexList() 
     140        self.excludeBasenamePatterns = RegexList() 
     141        self.ignoreFile = None 
     142   
     143   
     144     ############# 
     145     # Properties 
     146     ############# 
     147   
    148 - def _setExcludeFiles(self, value):
    149 """ 150 Property target used to set the exclude files flag. 151 No validations, but we normalize the value to C{True} or C{False}. 152 """ 153 if value: 154 self._excludeFiles = True 155 else: 156 self._excludeFiles = False
    157
    158 - def _getExcludeFiles(self):
    159 """ 160 Property target used to get the exclude files flag. 161 """ 162 return self._excludeFiles
    163
    164 - def _setExcludeDirs(self, value):
    165 """ 166 Property target used to set the exclude directories flag. 167 No validations, but we normalize the value to C{True} or C{False}. 168 """ 169 if value: 170 self._excludeDirs = True 171 else: 172 self._excludeDirs = False
    173
    174 - def _getExcludeDirs(self):
    175 """ 176 Property target used to get the exclude directories flag. 177 """ 178 return self._excludeDirs
    179 189 195
    196 - def _setExcludePaths(self, value):
    197 """ 198 Property target used to set the exclude paths list. 199 A C{None} value is converted to an empty list. 200 Elements do not have to exist on disk at the time of assignment. 201 @raise ValueError: If any list element is not an absolute path. 202 """ 203 self._excludePaths = AbsolutePathList() 204 if value is not None: 205 self._excludePaths.extend(value)
    206
    207 - def _getExcludePaths(self):
    208 """ 209 Property target used to get the absolute exclude paths list. 210 """ 211 return self._excludePaths
    212
    213 - def _setExcludePatterns(self, value):
    214 """ 215 Property target used to set the exclude patterns list. 216 A C{None} value is converted to an empty list. 217 """ 218 self._excludePatterns = RegexList() 219 if value is not None: 220 self._excludePatterns.extend(value)
    221
    222 - def _getExcludePatterns(self):
    223 """ 224 Property target used to get the exclude patterns list. 225 """ 226 return self._excludePatterns
    227
    228 - def _setExcludeBasenamePatterns(self, value):
    229 """ 230 Property target used to set the exclude basename patterns list. 231 A C{None} value is converted to an empty list. 232 """ 233 self._excludeBasenamePatterns = RegexList() 234 if value is not None: 235 self._excludeBasenamePatterns.extend(value)
    236
    238 """ 239 Property target used to get the exclude basename patterns list. 240 """ 241 return self._excludeBasenamePatterns
    242
    243 - def _setIgnoreFile(self, value):
    244 """ 245 Property target used to set the ignore file. 246 The value must be a non-empty string if it is not C{None}. 247 @raise ValueError: If the value is an empty string. 248 """ 249 if value is not None: 250 if len(value) < 1: 251 raise ValueError("The ignore file must be a non-empty string.") 252 self._ignoreFile = value
    253
    254 - def _getIgnoreFile(self):
    255 """ 256 Property target used to get the ignore file. 257 """ 258 return self._ignoreFile
    259 260 excludeFiles = property(_getExcludeFiles, _setExcludeFiles, None, "Boolean indicating whether files should be excluded.") 261 excludeDirs = property(_getExcludeDirs, _setExcludeDirs, None, "Boolean indicating whether directories should be excluded.") 262 excludeLinks = property(_getExcludeLinks, _setExcludeLinks, None, "Boolean indicating whether soft links should be excluded.") 263 excludePaths = property(_getExcludePaths, _setExcludePaths, None, "List of absolute paths to be excluded.") 264 excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, 265 "List of regular expression patterns (matching complete path) to be excluded.") 266 excludeBasenamePatterns = property(_getExcludeBasenamePatterns, _setExcludeBasenamePatterns, 267 None, "List of regular expression patterns (matching basename) to be excluded.") 268 ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Name of file which will cause directory contents to be ignored.") 269 270 271 ############## 272 # Add methods 273 ############## 274
    275 - def addFile(self, path):
    276 """ 277 Adds a file to the list. 278 279 The path must exist and must be a file or a link to an existing file. It 280 will be added to the list subject to any exclusions that are in place. 281 282 @param path: File path to be added to the list 283 @type path: String representing a path on disk 284 285 @return: Number of items added to the list. 286 287 @raise ValueError: If path is not a file or does not exist. 288 @raise ValueError: If the path could not be encoded properly. 289 """ 290 path = encodePath(path) 291 if not os.path.exists(path) or not os.path.isfile(path): 292 logger.debug("Path [%s] is not a file or does not exist on disk.", path) 293 raise ValueError("Path is not a file or does not exist on disk.") 294 if self.excludeLinks and os.path.islink(path): 295 logger.debug("Path [%s] is excluded based on excludeLinks.", path) 296 return 0 297 if self.excludeFiles: 298 logger.debug("Path [%s] is excluded based on excludeFiles.", path) 299 return 0 300 if path in self.excludePaths: 301 logger.debug("Path [%s] is excluded based on excludePaths.", path) 302 return 0 303 for pattern in self.excludePatterns: 304 pattern = encodePath(pattern) # use same encoding as filenames 305 if re.compile(r"^%s$" % pattern).match(path): # safe to assume all are valid due to RegexList 306 logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern) 307 return 0 308 for pattern in self.excludeBasenamePatterns: # safe to assume all are valid due to RegexList 309 pattern = encodePath(pattern) # use same encoding as filenames 310 if re.compile(r"^%s$" % pattern).match(os.path.basename(path)): 311 logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern) 312 return 0 313 self.append(path) 314 logger.debug("Added file to list: [%s]", path) 315 return 1
    316
   def addDir(self, path):
      """
      Adds a directory to the list.

      The path must exist and must be a directory or a link to an existing
      directory.  It will be added to the list subject to any exclusions that
      are in place.  The L{ignoreFile} does not apply to this method, only to
      L{addDirContents}.

      @param path: Directory path to be added to the list
      @type path: String representing a path on disk

      @return: Number of items added to the list.

      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      if not os.path.exists(path) or not os.path.isdir(path):
         logger.debug("Path [%s] is not a directory or does not exist on disk.", path)
         raise ValueError("Path is not a directory or does not exist on disk.")
      if self.excludeLinks and os.path.islink(path):
         logger.debug("Path [%s] is excluded based on excludeLinks.", path)
         return 0
      if self.excludeDirs:
         logger.debug("Path [%s] is excluded based on excludeDirs.", path)
         return 0
      if path in self.excludePaths:
         logger.debug("Path [%s] is excluded based on excludePaths.", path)
         return 0
      for pattern in self.excludePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(path):
            logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern)
            return 0
      for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
            logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern)
            return 0
      self.append(path)
      logger.debug("Added directory to list: [%s]", path)
      return 1

   def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
      """
      Adds the contents of a directory to the list.

      The path must exist and must be a directory or a link to a directory.
      The contents of the directory (as well as the directory path itself) will
      be recursively added to the list, subject to any exclusions that are in
      place.  If you only want the directory and its immediate contents to be
      added, then pass in C{recursive=False}.

      @note: If a directory's absolute path matches an exclude pattern or path,
      or if the directory contains the configured ignore file, then the
      directory and all of its contents will be recursively excluded from the
      list.

      @note: If the passed-in directory happens to be a soft link, it will be
      recursed.  However, the linkDepth parameter controls whether any soft
      links I{within} the directory will be recursed.  The link depth is the
      maximum depth of the tree at which soft links should be followed.  So, a
      depth of 0 does not follow any soft links, a depth of 1 follows only
      links within the passed-in directory, a depth of 2 follows the links at
      the next level down, etc.

      @note: Any invalid soft links (i.e. soft links that point to
      non-existent items) will be silently ignored.

      @note: The L{excludeDirs} flag only controls whether any given directory
      path itself is added to the list once it has been discovered.  It does
      I{not} modify any behavior related to directory recursion.

      @note: If you call this method I{on a link to a directory}, that link
      will never be dereferenced (it may, however, be followed).

      @param path: Directory path whose contents should be added to the list
      @type path: String representing a path on disk

      @param recursive: Indicates whether directory contents should be added recursively.
      @type recursive: Boolean value

      @param addSelf: Indicates whether the directory itself should be added to the list.
      @type addSelf: Boolean value

      @param linkDepth: Maximum depth of the tree at which soft links should be followed
      @type linkDepth: Integer value, where zero means not to follow any soft links

      @param dereference: Indicates whether soft links, if followed, should be dereferenced
      @type dereference: Boolean value

      @return: Number of items recursively added to the list

      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      return self._addDirContentsInternal(path, addSelf, recursive, linkDepth, dereference)

   def _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False):
      """
      Internal implementation of C{addDirContents}.

      This internal implementation exists due to some refactoring.  Basically,
      some subclasses have a need to add the contents of a directory, but not
      the directory itself.  This is different from the standard
      C{FilesystemList} behavior and actually ends up making a special case out
      of the first call in the recursive chain.  Since I don't want to expose
      the modified interface, C{addDirContents} ends up being wholly
      implemented in terms of this method.

      The linkDepth parameter controls whether soft links are followed when we
      are adding the contents recursively.  Any recursive calls reduce the
      value by one.  If the value is zero or less, then soft links will just be
      added as directories, but will not be followed.  This means that links
      are followed to a I{constant depth} starting from the top-most directory.

      There is one difference between soft links and directories: soft links
      that are added recursively are not placed into the list explicitly.  This
      is because if we do add the links recursively, the resulting tar file
      gets a little confused (it has a link and a directory with the same
      name).

      @note: If you call this method I{on a link to a directory}, that link
      will never be dereferenced (it may, however, be followed).

      @param path: Directory path whose contents should be added to the list.
      @param includePath: Indicates whether to include the path as well as contents.
      @param recursive: Indicates whether directory contents should be added recursively.
      @param linkDepth: Depth of soft links that should be followed
      @param dereference: Indicates whether soft links, if followed, should be dereferenced

      @return: Number of items recursively added to the list

      @raise ValueError: If path is not a directory or does not exist.
      """
      added = 0
      if not os.path.exists(path) or not os.path.isdir(path):
         logger.debug("Path [%s] is not a directory or does not exist on disk.", path)
         raise ValueError("Path is not a directory or does not exist on disk.")
      if path in self.excludePaths:
         logger.debug("Path [%s] is excluded based on excludePaths.", path)
         return added
      for pattern in self.excludePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(path):
            logger.debug("Path [%s] is excluded based on pattern [%s].", path, pattern)
            return added
      for pattern in self.excludeBasenamePatterns:  # safe to assume all are valid due to RegexList
         pattern = encodePath(pattern)  # use same encoding as filenames
         if re.compile(r"^%s$" % pattern).match(os.path.basename(path)):
            logger.debug("Path [%s] is excluded based on basename pattern [%s].", path, pattern)
            return added
      if self.ignoreFile is not None and os.path.exists(os.path.join(path, self.ignoreFile)):
         logger.debug("Path [%s] is excluded based on ignore file.", path)
         return added
      if includePath:
         added += self.addDir(path)  # could actually be excluded by addDir, yet
      for entry in os.listdir(path):
         entrypath = os.path.join(path, entry)
         if os.path.isfile(entrypath):
            if linkDepth > 0 and dereference:
               derefpath = dereferenceLink(entrypath)
               if derefpath != entrypath:
                  added += self.addFile(derefpath)
            added += self.addFile(entrypath)
         elif os.path.isdir(entrypath):
            if os.path.islink(entrypath):
               if recursive:
                  if linkDepth > 0:
                     newDepth = linkDepth - 1
                     if dereference:
                        derefpath = dereferenceLink(entrypath)
                        if derefpath != entrypath:
                           added += self._addDirContentsInternal(derefpath, True, recursive, newDepth, dereference)
                        added += self.addDir(entrypath)
                     else:
                        added += self._addDirContentsInternal(entrypath, False, recursive, newDepth, dereference)
                  else:
                     added += self.addDir(entrypath)
               else:
                  added += self.addDir(entrypath)
            else:
               if recursive:
                  newDepth = linkDepth - 1
                  added += self._addDirContentsInternal(entrypath, True, recursive, newDepth, dereference)
               else:
                  added += self.addDir(entrypath)
      return added
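The constant-depth link-following rule above (each recursive call decrements the depth, and links are followed only while it is positive) can be illustrated independently of the class.  This is a minimal sketch in modern Python, not Cedar Backup's own code; `listTree` is a hypothetical helper:

```python
import os

def listTree(path, linkDepth=0):
    """Recursively list a directory tree, following symlinked
    directories only while linkDepth is positive; every level of
    recursion decrements the remaining depth."""
    results = []
    for entry in sorted(os.listdir(path)):
        entrypath = os.path.join(path, entry)
        results.append(entrypath)
        if os.path.isdir(entrypath):
            if os.path.islink(entrypath):
                if linkDepth > 0:  # follow the link, one less level below
                    results.extend(listTree(entrypath, linkDepth - 1))
            else:
                results.extend(listTree(entrypath, linkDepth - 1))
    return results
```

With `linkDepth=0`, symlinked directories appear in the output but their contents are never visited, which mirrors the "links are added as directories, but not followed" behavior described in the docstring.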

   #################
   # Remove methods
   #################

   def removeFiles(self, pattern=None):
      """
      Removes file entries from the list.

      If C{pattern} is not passed in or is C{None}, then all file entries will
      be removed from the list.  Otherwise, only those file entries matching
      the pattern will be removed.  Any entry which does not exist on disk
      will be ignored (use L{removeInvalid} to purge those entries).

      This method might be fairly slow for large lists, since it must check the
      type of each item in the list.  If you know ahead of time that you want
      to exclude all files, then you will be better off setting L{excludeFiles}
      to C{True} before adding items to the list.

      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      removed = 0
      if pattern is None:
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isfile(entry):
               self.remove(entry)
               logger.debug("Removed path [%s] from list.", entry)
               removed += 1
      else:
         try:
            pattern = encodePath(pattern)  # use same encoding as filenames
            compiled = re.compile(pattern)
         except re.error:
            raise ValueError("Pattern is not a valid regular expression.")
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isfile(entry):
               if compiled.match(entry):
                  self.remove(entry)
                  logger.debug("Removed path [%s] from list.", entry)
                  removed += 1
      logger.debug("Removed a total of %d entries.", removed)
      return removed

   def removeDirs(self, pattern=None):
      """
      Removes directory entries from the list.

      If C{pattern} is not passed in or is C{None}, then all directory entries
      will be removed from the list.  Otherwise, only those directory entries
      matching the pattern will be removed.  Any entry which does not exist on
      disk will be ignored (use L{removeInvalid} to purge those entries).

      This method might be fairly slow for large lists, since it must check the
      type of each item in the list.  If you know ahead of time that you want
      to exclude all directories, then you will be better off setting
      L{excludeDirs} to C{True} before adding items to the list (note that this
      will not prevent you from recursively adding the I{contents} of
      directories).

      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      removed = 0
      if pattern is None:
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isdir(entry):
               self.remove(entry)
               logger.debug("Removed path [%s] from list.", entry)
               removed += 1
      else:
         try:
            pattern = encodePath(pattern)  # use same encoding as filenames
            compiled = re.compile(pattern)
         except re.error:
            raise ValueError("Pattern is not a valid regular expression.")
         for entry in self[:]:
            if os.path.exists(entry) and os.path.isdir(entry):
               if compiled.match(entry):
                  self.remove(entry)
                  logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern)
                  removed += 1
      logger.debug("Removed a total of %d entries.", removed)
      return removed

   def removeMatch(self, pattern):
      """
      Removes from the list all entries matching a pattern.

      This method removes from the list all entries which match the passed in
      C{pattern}.  Since there is no need to check the type of each entry, it
      is faster to call this method than to call the L{removeFiles},
      L{removeDirs} or L{removeLinks} methods individually.  If you know which
      patterns you will want to remove ahead of time, you may be better off
      setting L{excludePatterns} or L{excludeBasenamePatterns} before adding
      items to the list.

      @note: Unlike when using the exclude lists, the pattern here is I{not}
      bounded at the front and the back of the string.  You can use any pattern
      you want.

      @param pattern: Regular expression pattern representing entries to remove

      @return: Number of entries removed.
      @raise ValueError: If the passed-in pattern is not a valid regular expression.
      """
      try:
         pattern = encodePath(pattern)  # use same encoding as filenames
         compiled = re.compile(pattern)
      except re.error:
         raise ValueError("Pattern is not a valid regular expression.")
      removed = 0
      for entry in self[:]:
         if compiled.match(entry):
            self.remove(entry)
            logger.debug("Removed path [%s] from list based on pattern [%s].", entry, pattern)
            removed += 1
      logger.debug("Removed a total of %d entries.", removed)
      return removed

   def removeInvalid(self):
      """
      Removes from the list all entries that do not exist on disk.

      This method removes from the list all entries which do not currently
      exist on disk in some form.  No attention is paid to whether the entries
      are files or directories.

      @return: Number of entries removed.
      """
      removed = 0
      for entry in self[:]:
         if not os.path.exists(entry):
            self.remove(entry)
            logger.debug("Removed path [%s] from list.", entry)
            removed += 1
      logger.debug("Removed a total of %d entries.", removed)
      return removed

   ##################
   # Utility methods
   ##################

   def normalize(self):
      """Normalizes the list, ensuring that each entry is unique."""
      orig = len(self)
      self.sort()
      dups = filter(lambda x, self=self: self[x] == self[x+1], range(0, len(self) - 1)) # pylint: disable=W0110
      items = map(lambda x, self=self: self[x], dups) # pylint: disable=W0110
      map(self.remove, items)
      new = len(self)
      logger.debug("Completed normalizing list; removed %d items (%d originally, %d now).", orig-new, orig, new)

   def verify(self):
      """
      Verifies that all entries in the list exist on disk.
      @return: C{True} if all entries exist, C{False} otherwise.
      """
      for entry in self:
         if not os.path.exists(entry):
            logger.debug("Path [%s] is invalid; list is not valid.", entry)
            return False
      logger.debug("All entries in list are valid.")
      return True


########################################################################
# SpanItem class definition
########################################################################

class SpanItem(object):  # pylint: disable=R0903

   """
   Item returned by L{BackupFileList.generateSpan}.
   """

   def __init__(self, fileList, size, capacity, utilization):
      """
      Create object.
      @param fileList: List of files
      @param size: Size (in bytes) of files
      @param capacity: Capacity (in bytes)
      @param utilization: Utilization, as a percentage (0-100)
      """
      self.fileList = fileList
      self.size = size
      self.capacity = capacity
      self.utilization = utilization


########################################################################
# BackupFileList class definition
########################################################################

class BackupFileList(FilesystemList):  # pylint: disable=R0904

   ######################
   # Class documentation
   ######################

   """
   List of files to be backed up.

   A BackupFileList is a L{FilesystemList} containing a list of files to be
   backed up.  It only contains files, not directories (soft links are treated
   like files).  On top of the generic functionality provided by
   L{FilesystemList}, this class adds functionality to keep a hash (checksum)
   for each file in the list, and it also provides a method to calculate the
   total size of the files in the list and a way to export the list into tar
   form.

   @sort: __init__, addDir, totalSize, generateSizeMap, generateDigestMap,
          generateFitted, generateTarfile, removeUnchanged
   """

   ##############
   # Constructor
   ##############

   def __init__(self):
      """Initializes a list with no configured exclusions."""
      FilesystemList.__init__(self)


   ################################
   # Overridden superclass methods
   ################################

   def addDir(self, path):
      """
      Adds a directory to the list.

      Note that this class does not allow directories to be added by themselves
      (a backup list contains only files).  However, since links to directories
      are technically files, we allow them to be added.

      This method is implemented in terms of the superclass method, with one
      additional validation: the superclass method is only called if the
      passed-in path is a link rather than a plain directory (plain directories
      are silently ignored).  All of the superclass's existing validations and
      restrictions apply.

      @param path: Directory path to be added to the list
      @type path: String representing a path on disk

      @return: Number of items added to the list.

      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      if os.path.isdir(path) and not os.path.islink(path):
         return 0
      else:
         return FilesystemList.addDir(self, path)


   ##################
   # Utility methods
   ##################

   def totalSize(self):
      """
      Returns the total size among all files in the list.
      Only files are counted.
      Soft links that point at files are ignored.
      Entries which do not exist on disk are ignored.
      @return: Total size, in bytes
      """
      total = 0.0
      for entry in self:
         if os.path.isfile(entry) and not os.path.islink(entry):
            total += float(os.stat(entry).st_size)
      return total

   def generateSizeMap(self):
      """
      Generates a mapping from file to file size in bytes.
      The mapping does include soft links, which are listed with size zero.
      Entries which do not exist on disk are ignored.
      @return: Dictionary mapping file to file size
      """
      table = { }
      for entry in self:
         if os.path.islink(entry):
            table[entry] = 0.0
         elif os.path.isfile(entry):
            table[entry] = float(os.stat(entry).st_size)
      return table

   def generateDigestMap(self, stripPrefix=None):
      """
      Generates a mapping from file to file digest.

      Currently, the digest is an SHA hash, which should be pretty secure.  In
      the future, this might be a different kind of hash, but we guarantee that
      the type of the hash will not change unless the library major version
      number is bumped.

      Entries which do not exist on disk are ignored.

      Soft links are ignored.  We would end up generating a digest for the file
      that the soft link points at, which doesn't make any sense.

      If C{stripPrefix} is passed in, then that prefix will be stripped from
      each key when the map is generated.  This can be useful in generating two
      "relative" digest maps to be compared to one another.

      @param stripPrefix: Common prefix to be stripped from paths
      @type stripPrefix: String with any contents

      @return: Dictionary mapping file to digest value
      @see: L{removeUnchanged}
      """
      table = { }
      if stripPrefix is not None:
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry.replace(stripPrefix, "", 1)] = BackupFileList._generateDigest(entry)
      else:
         for entry in self:
            if os.path.isfile(entry) and not os.path.islink(entry):
               table[entry] = BackupFileList._generateDigest(entry)
      return table

   @staticmethod
   def _generateDigest(path):
      """
      Generates an SHA digest for a given file on disk.

      The original code for this function used this simplistic implementation,
      which requires reading the entire file into memory at once in order to
      generate a digest value::

         sha.new(open(path).read()).hexdigest()

      Not surprisingly, this isn't an optimal solution.  The U{Simple file
      hashing <http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/259109>}
      Python Cookbook recipe describes how to incrementally generate a hash
      value by reading in chunks of data rather than reading the file all at
      once.  The recipe relies on the C{update()} method of the various
      Python hashing algorithms.

      In my tests using a 110 MB file on CD, the original implementation
      requires 111 seconds.  This implementation requires only 40-45 seconds,
      which is a pretty substantial speed-up.

      Experience shows that reading in around 4kB (4096 bytes) at a time yields
      the best performance.  Smaller reads are quite a bit slower, and larger
      reads don't make much of a difference.  The 4kB number makes me a little
      suspicious, and I think it might be related to the size of a filesystem
      read at the hardware level.  However, I've decided to just hardcode 4096
      until I have evidence that shows it's worthwhile making the read size
      configurable.

      @param path: Path to generate digest for.

      @return: ASCII-safe SHA digest for the file.
      @raise OSError: If the file cannot be opened.
      """
      # pylint: disable=C0103,E1101
      try:
         import hashlib
         s = hashlib.sha1()
      except ImportError:
         import sha
         s = sha.new()
      f = open(path, mode="rb")  # in case platform cares about binary reads
      readBytes = 4096  # see notes above
      while readBytes > 0:
         readString = f.read(readBytes)
         s.update(readString)
         readBytes = len(readString)
      f.close()
      digest = s.hexdigest()
      logger.debug("Generated digest [%s] for file [%s].", digest, path)
      return digest

   def generateFitted(self, capacity, algorithm="worst_fit"):
      """
      Generates a list of items that fit in the indicated capacity.

      Sometimes, callers would like to include every item in a list, but are
      unable to because not all of the items fit in the space available.  This
      method returns a copy of the list, containing only the items that fit in
      a given capacity.  A copy is returned so that we don't lose any
      information if for some reason the fitted list is unsatisfactory.

      The fitting is done using the functions in the knapsack module.  By
      default, the worst fit algorithm is used, but you can also choose from
      first fit, best fit and alternate fit.

      @param capacity: Maximum total size of the files in the new list
      @type capacity: Integer, in bytes

      @param algorithm: Knapsack (fit) algorithm to use
      @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit"

      @return: Copy of list with total size no larger than indicated capacity
      @raise ValueError: If the algorithm is invalid.
      """
      table = self._getKnapsackTable()
      function = BackupFileList._getKnapsackFunction(algorithm)
      return function(table, capacity)[0]

   def generateSpan(self, capacity, algorithm="worst_fit"):
      """
      Splits the list of items into sub-lists that fit in a given capacity.

      Sometimes, callers need to split a backup file list into a set of smaller
      lists.  For instance, you could use this to "span" the files across a set
      of discs.

      The fitting is done using the functions in the knapsack module.  By
      default, the worst fit algorithm is used, but you can also choose from
      first fit, best fit and alternate fit.

      @note: If any of your items are larger than the capacity, then it won't
      be possible to find a solution.  In this case, a C{ValueError} will be
      raised.

      @param capacity: Maximum total size of the files in each sub-list
      @type capacity: Integer, in bytes

      @param algorithm: Knapsack (fit) algorithm to use
      @type algorithm: One of "first_fit", "best_fit", "worst_fit", "alternate_fit"

      @return: List of L{SpanItem} objects.

      @raise ValueError: If the algorithm is invalid.
      @raise ValueError: If it's not possible to fit some items
      """
      spanItems = []
      function = BackupFileList._getKnapsackFunction(algorithm)
      table = self._getKnapsackTable(capacity)
      iteration = 0
      while len(table) > 0:
         iteration += 1
         fit = function(table, capacity)
         if len(fit[0]) == 0:
            # Should never happen due to validations in _getKnapsackTable(), but let's be safe
            raise ValueError("After iteration %d, unable to add any new items." % iteration)
         removeKeys(table, fit[0])
         utilization = (float(fit[1])/float(capacity))*100.0
         item = SpanItem(fit[0], fit[1], capacity, utilization)
         spanItems.append(item)
      return spanItems

   def _getKnapsackTable(self, capacity=None):
      """
      Converts the list into the form needed by the knapsack algorithms.
      @param capacity: Optional capacity used to validate individual file sizes
      @return: Dictionary mapping file name to tuple of (file path, file size).
      @raise ValueError: If a capacity is provided and any file exceeds it.
      """
      table = { }
      for entry in self:
         if os.path.islink(entry):
            table[entry] = (entry, 0.0)
         elif os.path.isfile(entry):
            size = float(os.stat(entry).st_size)
            if capacity is not None:
               if size > capacity:
                  raise ValueError("File [%s] cannot fit in capacity %s." % (entry, displayBytes(capacity)))
            table[entry] = (entry, size)
      return table

   @staticmethod
   def _getKnapsackFunction(algorithm):
      """
      Returns a reference to the function associated with an algorithm name.
      Algorithm name must be one of "first_fit", "best_fit", "worst_fit", "alternate_fit"
      @param algorithm: Name of the algorithm
      @return: Reference to knapsack function
      @raise ValueError: If the algorithm name is unknown.
      """
      if algorithm == "first_fit":
         return firstFit
      elif algorithm == "best_fit":
         return bestFit
      elif algorithm == "worst_fit":
         return worstFit
      elif algorithm == "alternate_fit":
         return alternateFit
      else:
         raise ValueError("Algorithm [%s] is invalid." % algorithm)

   def generateTarfile(self, path, mode='tar', ignore=False, flat=False):
      """
      Creates a tar file containing the files in the list.

      By default, this method will create uncompressed tar files.  If you pass
      in mode C{'targz'}, then it will create gzipped tar files, and if you
      pass in mode C{'tarbz2'}, then it will create bzipped tar files.

      The tar file will be created as a GNU tar archive, which enables extended
      file name lengths, etc.  Since GNU tar is so prevalent, I've decided that
      the extra functionality outweighs the disadvantage of not being
      "standard".

      If you pass in C{flat=True}, then a "flat" archive will be created, and
      all of the files will be added to the root of the archive.  So, the file
      C{/tmp/something/whatever.txt} would be added as just C{whatever.txt}.

      By default, the whole method call fails if there are problems adding any
      of the files to the archive, resulting in an exception.  Under these
      circumstances, callers are advised that they might want to call
      L{removeInvalid()} and then attempt to build the tar file a second
      time, since the most common cause of failures is a missing file (a file
      that existed when the list was built, but is gone again by the time the
      tar file is built).

      If you want to, you can pass in C{ignore=True}, and the method will
      ignore errors encountered when adding individual files to the archive
      (but not errors opening and closing the archive itself).

      We'll always attempt to remove the tarfile from disk if an exception is
      thrown.

      @note: No validation is done as to whether the entries in the list are
      files, since only files or soft links should be in an object like this.
      However, to be safe, everything is explicitly added to the tar archive
      non-recursively so it's safe to include soft links to directories.

      @note: The Python C{tarfile} module, which is used internally here, is
      supposed to deal properly with long filenames and links.  In my testing,
      I have found that it appears to be able to add really long filenames
      to archives, but doesn't do a good job reading them back out, even out of
      an archive it created.  Fortunately, all Cedar Backup does is add files
      to archives.

      @param path: Path of tar file to create on disk
      @type path: String representing a path on disk

      @param mode: Tar creation mode
      @type mode: One of either C{'tar'}, C{'targz'} or C{'tarbz2'}

      @param ignore: Indicates whether to ignore certain errors.
      @type ignore: Boolean

      @param flat: Creates "flat" archive by putting all items in root
      @type flat: Boolean

      @raise ValueError: If mode is not valid
      @raise ValueError: If list is empty
      @raise ValueError: If the path could not be encoded properly.
      @raise TarError: If there is a problem creating the tar file
      """
      # pylint: disable=E1101
      path = encodePath(path)
      if len(self) == 0: raise ValueError("Empty list cannot be used to generate tarfile.")
      if mode == 'tar': tarmode = "w:"
      elif mode == 'targz': tarmode = "w:gz"
      elif mode == 'tarbz2': tarmode = "w:bz2"
      else: raise ValueError("Mode [%s] is not valid." % mode)
      try:
         tar = tarfile.open(path, tarmode)
         try:
            tar.format = tarfile.GNU_FORMAT
         except AttributeError:
            tar.posix = False
         for entry in self:
            try:
               if flat:
                  tar.add(entry, arcname=os.path.basename(entry), recursive=False)
               else:
                  tar.add(entry, recursive=False)
            except tarfile.TarError, e:
               if not ignore:
                  raise e
               logger.info("Unable to add file [%s]; going on anyway.", entry)
            except OSError, e:
               if not ignore:
                  raise tarfile.TarError(e)
               logger.info("Unable to add file [%s]; going on anyway.", entry)
         tar.close()
      except tarfile.ReadError, e:
         try: tar.close()
         except: pass
         if os.path.exists(path):
            try: os.remove(path)
            except: pass
         raise tarfile.ReadError("Unable to open [%s]; maybe directory doesn't exist?" % path)
      except tarfile.TarError, e:
         try: tar.close()
         except: pass
         if os.path.exists(path):
            try: os.remove(path)
            except: pass
         raise e

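The two load-bearing choices in the method above, GNU format and non-recursive adds, can be demonstrated with the standard C{tarfile} module on its own.  A minimal modern-Python sketch with a hypothetical helper name (the real method also handles compression modes, error cleanup, and the pre-C{GNU_FORMAT} C{posix} attribute):

```python
import tarfile

def makeGnuTar(tarPath, entries):
    """Create an uncompressed GNU-format tar archive, adding each
    entry non-recursively so a directory link never drags its
    contents into the archive."""
    tar = tarfile.open(tarPath, "w:")
    tar.format = tarfile.GNU_FORMAT  # enables extended file name lengths
    try:
        for entry in entries:
            tar.add(entry, recursive=False)
    finally:
        tar.close()
```

Note that `tarfile.add()` stores member names with any leading `/` stripped, so absolute paths come back out of `getnames()` as relative ones.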
    1142 - def removeUnchanged(self, digestMap, captureDigest=False):
    1143 """ 1144 Removes unchanged entries from the list. 1145 1146 This method relies on a digest map as returned from L{generateDigestMap}. 1147 For each entry in C{digestMap}, if the entry also exists in the current 1148 list I{and} the entry in the current list has the same digest value as in 1149 the map, the entry in the current list will be removed. 1150 1151 This method offers a convenient way for callers to filter unneeded 1152 entries from a list. The idea is that a caller will capture a digest map 1153 from C{generateDigestMap} at some point in time (perhaps the beginning of 1154 the week), and will save off that map using C{pickle} or some other 1155 method. Then, the caller could use this method sometime in the future to 1156 filter out any unchanged files based on the saved-off map. 1157 1158 If C{captureDigest} is passed-in as C{True}, then digest information will 1159 be captured for the entire list before the removal step occurs using the 1160 same rules as in L{generateDigestMap}. The check will involve a lookup 1161 into the complete digest map. 1162 1163 If C{captureDigest} is passed in as C{False}, we will only generate a 1164 digest value for files we actually need to check, and we'll ignore any 1165 entry in the list which isn't a file that currently exists on disk. 1166 1167 The return value varies depending on C{captureDigest}, as well. To 1168 preserve backwards compatibility, if C{captureDigest} is C{False}, then 1169 we'll just return a single value representing the number of entries 1170 removed. Otherwise, we'll return a tuple of C{(entries removed, digest 1171 map)}. The returned digest map will be in exactly the form returned by 1172 L{generateDigestMap}. 1173 1174 @note: For performance reasons, this method actually ends up rebuilding 1175 the list from scratch. First, we build a temporary dictionary containing 1176 all of the items from the original list. 
Then, we remove items as needed 1177 from the dictionary (which is faster than the equivalent operation on a 1178 list). Finally, we replace the contents of the current list based on the 1179 keys left in the dictionary. This should be transparent to the caller. 1180 1181 @param digestMap: Dictionary mapping file name to digest value. 1182 @type digestMap: Map as returned from L{generateDigestMap}. 1183 1184 @param captureDigest: Indicates that digest information should be captured. 1185 @type captureDigest: Boolean 1186 1187 @return: Results as discussed above (format varies based on arguments) 1188 """ 1189 if captureDigest: 1190 removed = 0 1191 table = {} 1192 captured = {} 1193 for entry in self: 1194 if os.path.isfile(entry) and not os.path.islink(entry): 1195 table[entry] = BackupFileList._generateDigest(entry) 1196 captured[entry] = table[entry] 1197 else: 1198 table[entry] = None 1199 for entry in digestMap.keys(): 1200 if table.has_key(entry): 1201 if table[entry] is not None: # equivalent to file/link check in other case 1202 digest = table[entry] 1203 if digest == digestMap[entry]: 1204 removed += 1 1205 del table[entry] 1206 logger.debug("Discarded unchanged file [%s].", entry) 1207 self[:] = table.keys() 1208 return (removed, captured) 1209 else: 1210 removed = 0 1211 table = {} 1212 for entry in self: 1213 table[entry] = None 1214 for entry in digestMap.keys(): 1215 if table.has_key(entry): 1216 if os.path.isfile(entry) and not os.path.islink(entry): 1217 digest = BackupFileList._generateDigest(entry) 1218 if digest == digestMap[entry]: 1219 removed += 1 1220 del table[entry] 1221 logger.debug("Discarded unchanged file [%s].", entry) 1222 self[:] = table.keys() 1223 return removed
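Stripped of the list-rebuilding machinery, the heart of the method is a digest-map comparison: keep an entry if its current digest differs from the baseline digest, or if the baseline never saw it.  A standalone sketch of that comparison (hypothetical helper, operating on plain dictionaries rather than the class):

```python
def filterUnchanged(current, saved):
    """Given {path: digest} maps for the current run and a saved
    baseline, return the paths whose content changed or which are new
    since the baseline was captured."""
    changed = []
    for path, digest in sorted(current.items()):
        if saved.get(path) == digest:
            continue  # unchanged since the baseline; drop it
        changed.append(path)
    return changed
```

This mirrors the incremental-backup workflow the docstring describes: pickle a digest map at the start of the week, then use it later to skip files that have not changed.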
########################################################################
# PurgeItemList class definition
########################################################################

class PurgeItemList(FilesystemList):  # pylint: disable=R0904
   ######################
   # Class documentation
   ######################

   """
   List of files and directories to be purged.

   A PurgeItemList is a L{FilesystemList} containing a list of files and
   directories to be purged.  On top of the generic functionality provided by
   L{FilesystemList}, this class adds functionality to remove items that are
   too young to be purged, and to actually remove each item in the list from
   the filesystem.

   The other main difference is that when you add a directory's contents to a
   purge item list, the directory itself is not added to the list.  This way,
   if someone asks to purge within C{/opt/backup/collect}, that directory
   doesn't get removed once all of the files within it are gone.
   """

   ##############
   # Constructor
   ##############
   def __init__(self):
      """Initializes a list with no configured exclusions."""
      FilesystemList.__init__(self)

   ##############
   # Add methods
   ##############
   def addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False):
      """
      Adds the contents of a directory to the list.

      The path must exist and must be a directory or a link to a directory.
      The contents of the directory (but I{not} the directory path itself) will
      be recursively added to the list, subject to any exclusions that are in
      place.  If you only want the directory's immediate contents to be added,
      then pass in C{recursive=False}.

      @note: If a directory's absolute path matches an exclude pattern or path,
      or if the directory contains the configured ignore file, then the
      directory and all of its contents will be recursively excluded from the
      list.

      @note: If the passed-in directory happens to be a soft link, it will be
      recursed.  However, the linkDepth parameter controls whether any soft
      links I{within} the directory will be recursed.  The link depth is the
      maximum depth of the tree at which soft links should be followed.  So, a
      depth of 0 does not follow any soft links, a depth of 1 follows only
      links within the passed-in directory, a depth of 2 follows the links at
      the next level down, etc.

      @note: Any invalid soft links (i.e. soft links that point to
      non-existent items) will be silently ignored.

      @note: The L{excludeLinks} flag only controls whether any given soft link
      path itself is added to the list once it has been discovered.  It does
      I{not} modify any behavior related to directory recursion.

      @note: The L{excludeDirs} flag only controls whether any given directory
      path itself is added to the list once it has been discovered.  It does
      I{not} modify any behavior related to directory recursion.

      @note: If you call this method I{on a link to a directory} that link will
      never be dereferenced (it may, however, be followed).

      @param path: Directory path whose contents should be added to the list
      @type path: String representing a path on disk

      @param recursive: Indicates whether directory contents should be added recursively.
      @type recursive: Boolean value

      @param addSelf: Ignored in this subclass.

      @param linkDepth: Depth of soft links that should be followed
      @type linkDepth: Integer value, where zero means not to follow any soft links

      @param dereference: Indicates whether soft links, if followed, should be dereferenced
      @type dereference: Boolean value

      @return: Number of items recursively added to the list

      @raise ValueError: If path is not a directory or does not exist.
      @raise ValueError: If the path could not be encoded properly.
      """
      path = encodePath(path)
      path = normalizeDir(path)
      return super(PurgeItemList, self)._addDirContentsInternal(path, False, recursive, linkDepth, dereference)
   ##################
   # Utility methods
   ##################

   def removeYoungFiles(self, daysOld):
      """
      Removes from the list files younger than a certain age (in days).

      Any file whose "age" in days is less than (C{<}) the value of the
      C{daysOld} parameter will be removed from the list so that it will not be
      purged later when L{purgeItems} is called.  Directories and soft links
      will be ignored.

      The "age" of a file is the amount of time since the file was last used,
      per the most recent of the file's C{st_atime} and C{st_mtime} values.

      @note: Some people find the "sense" of this method confusing or
      "backwards".  Keep in mind that this method is used to remove items
      I{from the list}, not from the filesystem!  It removes from the list
      those items that you would I{not} want to purge because they are too
      young.  As an example, passing in a C{daysOld} of zero (0) would remove
      no files from the list, which would result in all of the files being
      purged later.  I would be happy to make a synonym of this method with an
      easier-to-understand "sense", if someone can suggest one.

      @param daysOld: Minimum age of files that are to be kept in the list.
      @type daysOld: Integer value >= 0.

      @return: Number of entries removed
      """
      removed = 0
      daysOld = int(daysOld)
      if daysOld < 0:
         raise ValueError("Days old value must be an integer >= 0.")
      for entry in self[:]:
         if os.path.isfile(entry) and not os.path.islink(entry):
            try:
               ageInDays = calculateFileAge(entry)
               ageInWholeDays = math.floor(ageInDays)
               if ageInWholeDays < 0:
                  ageInWholeDays = 0
               if ageInWholeDays < daysOld:
                  removed += 1
                  self.remove(entry)
            except OSError:
               pass
      return removed
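The age computation described above (most recent of `st_atime` and `st_mtime`, floored to whole days and clamped at zero) can be sketched independently. Here `calculate_file_age` is a hypothetical stand-in for the library's `calculateFileAge` helper, written under the assumption that it returns a floating-point age in days:

```python
import math
import os
import time

def calculate_file_age(path):
    """Age of a file in floating-point days, per the most recent of
    st_atime and st_mtime (an assumed sketch of calculateFileAge)."""
    stats = os.stat(path)
    last_used = max(stats.st_atime, stats.st_mtime)
    return (time.time() - last_used) / (24.0 * 60.0 * 60.0)

def is_young(path, days_old):
    """True if the file's age in whole days is below the daysOld cutoff,
    mirroring the removeYoungFiles() comparison (age clamped at zero)."""
    age_whole_days = max(0, math.floor(calculate_file_age(path)))
    return age_whole_days < days_old
```

With `days_old=0` no file is ever "young", which is why a `daysOld` of zero leaves the whole list eligible for purging.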
   def purgeItems(self):
      """
      Purges all items in the list.

      Every item in the list will be purged.  Directories in the list will
      I{not} be purged recursively, and hence will only be removed if they are
      empty.  Errors will be ignored.

      To facilitate easy removal of directories that will end up being empty,
      the delete process happens in two passes: files first (including soft
      links), then directories.

      @return: Tuple containing count of (files, dirs) removed
      """
      files = 0
      dirs = 0
      for entry in self:
         if os.path.exists(entry) and (os.path.isfile(entry) or os.path.islink(entry)):
            try:
               os.remove(entry)
               files += 1
               logger.debug("Purged file [%s].", entry)
            except OSError:
               pass
      for entry in self:
         if os.path.exists(entry) and os.path.isdir(entry) and not os.path.islink(entry):
            try:
               os.rmdir(entry)
               dirs += 1
               logger.debug("Purged empty directory [%s].", entry)
            except OSError:
               pass
      return (files, dirs)
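The two-pass delete strategy can be demonstrated in isolation. This is a minimal sketch of the documented behavior, not the class method itself: files and links go first, then directories, which are removed only if they ended up empty, with errors ignored.

```python
import os

def purge_items(entries):
    """Delete files (including soft links) first, then try directories;
    a directory is removed only if it is empty, and errors are ignored."""
    files = dirs = 0
    for entry in entries:  # pass 1: files and soft links
        if os.path.isfile(entry) or os.path.islink(entry):
            try:
                os.remove(entry)
                files += 1
            except OSError:
                pass
    for entry in entries:  # pass 2: directories, possibly emptied by pass 1
        if os.path.isdir(entry) and not os.path.islink(entry):
            try:
                os.rmdir(entry)  # fails, and is ignored, if not empty
                dirs += 1
            except OSError:
                pass
    return (files, dirs)
```

Because the file pass runs first, a directory whose only contents were also in the list becomes empty just in time for the directory pass to remove it.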
########################################################################
# Public functions
########################################################################

##########################
# normalizeDir() function
##########################

def normalizeDir(path):
   """
   Normalizes a directory name.

   For our purposes, a directory name is normalized by removing the trailing
   path separator, if any.  This is important because we want directories to
   appear within lists in a consistent way, although from the user's
   perspective passing in C{/path/to/dir/} and C{/path/to/dir} are equivalent.

   @param path: Path to be normalized.
   @type path: String representing a path on disk

   @return: Normalized path, which should be equivalent to the original.
   """
   if path != os.sep and path[-1:] == os.sep:
      return path[:-1]
   return path
#############################
# compareContents() function
#############################

def compareContents(path1, path2, verbose=False):
   """
   Compares the contents of two directories to see if they are equivalent.

   The two directories are recursively compared.  First, we check whether they
   contain exactly the same set of files.  Then, we check to see whether every
   given file has exactly the same contents in both directories.

   This is all relatively simple to implement through the magic of
   L{BackupFileList.generateDigestMap}, which knows how to strip a path prefix
   off the front of each entry in the mapping it generates.  This makes our
   comparison as simple as creating a list for each path, then generating a
   digest map for each path and comparing the two.

   If no exception is thrown, the two directories are considered identical.

   If the C{verbose} flag is C{True}, then an alternate (but slower) method is
   used so that any thrown exception can indicate exactly which file caused the
   comparison to fail.  The thrown C{ValueError} exception distinguishes
   between the directories containing different files, and containing the same
   files with differing content.

   @note: Symlinks are I{not} followed for the purposes of this comparison.

   @param path1: First path to compare.
   @type path1: String representing a path on disk

   @param path2: Second path to compare.
   @type path2: String representing a path on disk

   @param verbose: Indicates whether a verbose response should be given.
   @type verbose: Boolean

   @raise ValueError: If a directory doesn't exist or can't be read.
   @raise ValueError: If the two directories are not equivalent.
   @raise IOError: If there is an unusual problem reading the directories.
   """
   try:
      path1List = BackupFileList()
      path1List.addDirContents(path1)
      path1Digest = path1List.generateDigestMap(stripPrefix=normalizeDir(path1))
      path2List = BackupFileList()
      path2List.addDirContents(path2)
      path2Digest = path2List.generateDigestMap(stripPrefix=normalizeDir(path2))
      compareDigestMaps(path1Digest, path2Digest, verbose)
   except IOError, e:
      logger.error("I/O error encountered during consistency check.")
      raise e
def compareDigestMaps(digest1, digest2, verbose=False):
   """
   Compares two digest maps and throws an exception if they differ.

   @param digest1: First digest to compare.
   @type digest1: Digest as returned from BackupFileList.generateDigestMap()

   @param digest2: Second digest to compare.
   @type digest2: Digest as returned from BackupFileList.generateDigestMap()

   @param verbose: Indicates whether a verbose response should be given.
   @type verbose: Boolean

   @raise ValueError: If the two directories are not equivalent.
   """
   if not verbose:
      if digest1 != digest2:
         raise ValueError("Consistency check failed.")
   else:
      list1 = UnorderedList(digest1.keys())
      list2 = UnorderedList(digest2.keys())
      if list1 != list2:
         raise ValueError("Directories contain a different set of files.")
      for key in list1:
         if digest1[key] != digest2[key]:
            raise ValueError("File contents for [%s] vary between directories." % key)
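The verbose comparison logic can be exercised with plain dictionaries standing in for digest maps. This simplified sketch replaces `UnorderedList` with sorted key lists, an assumption that preserves the order-insensitive comparison semantics:

```python
def compare_digest_maps(digest1, digest2):
    """Raise ValueError if two digest maps differ, distinguishing a
    differing file set from differing file contents (a simplified sketch)."""
    if sorted(digest1.keys()) != sorted(digest2.keys()):
        raise ValueError("Directories contain a different set of files.")
    for key in digest1:
        if digest1[key] != digest2[key]:
            raise ValueError("File contents for [%s] vary between directories." % key)
```

Checking the key sets first means the per-file content check only runs when both maps describe the same files, so the two error messages never overlap.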

CedarBackup2-2.26.5/doc/interface/CedarBackup2.filesystem.FilesystemList-class.html

CedarBackup2.filesystem.FilesystemList
    Package CedarBackup2 :: Module filesystem :: Class FilesystemList

    Class FilesystemList


    object --+    
             |    
          list --+
                 |
                FilesystemList
    
    Known Subclasses:

    Represents a list of filesystem items.

    This is a generic class that represents a list of filesystem items. Callers can add individual files or directories to the list, or can recursively add the contents of a directory. The class also allows for up-front exclusions in several forms (all files, all directories, all items matching a pattern, all items whose basename matches a pattern, or all directories containing a specific "ignore file"). Symbolic links are typically backed up non-recursively, i.e. the link to a directory is backed up, but not the contents of that link (we don't want to deal with recursive loops, etc.).

    The custom methods such as addFile will only add items if they exist on the filesystem and do not match any exclusions that are already in place. However, since a FilesystemList is a subclass of Python's standard list class, callers can also add items to the list in the usual way, using methods like append() or insert(). No validations apply to items added to the list in this way; however, many list-manipulation methods deal "gracefully" with items that don't exist in the filesystem, often by ignoring them.

    Once a list has been created, callers can remove individual items from the list using standard methods like pop() or remove() or they can use custom methods to remove specific types of entries or entries which match a particular pattern.


    Notes:
    • Regular expression patterns that apply to paths are assumed to be bounded at front and back by the beginning and end of the string, i.e. they are treated as if they begin with ^ and end with $. This is true whether we are matching a complete path or a basename.
    • Some platforms, like Windows, do not support soft links. On those platforms, the ignore-soft-links flag can be set, but it won't do any good because the operating system never reports a file as a soft link.
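The anchoring rule in the first note can be illustrated directly. `matches_bounded` below is a hypothetical helper, not part of the library; it shows how a pattern behaves once it is treated as if it began with `^` and ended with `$`:

```python
import re

def matches_bounded(pattern, value):
    """Match as if the pattern began with ^ and ended with $, the way
    FilesystemList is documented to treat its exclude patterns."""
    # re.match() already anchors at the start; appending "$" anchors the end.
    return re.match(pattern + "$", value) is not None

# ".*\.tmp" excludes any path ending in .tmp, but the bare pattern "tmp"
# matches only the literal string "tmp", not every path containing it.
```

This is why exclude patterns usually need a leading `.*` to match anywhere inside a path.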
Instance Methods

    __init__(self)
        Initializes a list with no configured exclusions.  (Return type: new empty list)
    addFile(self, path)
        Adds a file to the list.
    addDir(self, path)
        Adds a directory to the list.
    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)
        Adds the contents of a directory to the list.
    removeFiles(self, pattern=None)
        Removes file entries from the list.
    removeDirs(self, pattern=None)
        Removes directory entries from the list.
    removeLinks(self, pattern=None)
        Removes soft link entries from the list.
    removeMatch(self, pattern)
        Removes from the list all entries matching a pattern.
    removeInvalid(self)
        Removes from the list all entries that do not exist on disk.
    normalize(self)
        Normalizes the list, ensuring that each entry is unique.
    _setExcludeFiles(self, value)
        Property target used to set the exclude files flag.
    _getExcludeFiles(self)
        Property target used to get the exclude files flag.
    _setExcludeDirs(self, value)
        Property target used to set the exclude directories flag.
    _getExcludeDirs(self)
        Property target used to get the exclude directories flag.
    _setExcludeLinks(self, value)
        Property target used to set the exclude soft links flag.
    _getExcludeLinks(self)
        Property target used to get the exclude soft links flag.
    _setExcludePaths(self, value)
        Property target used to set the exclude paths list.
    _getExcludePaths(self)
        Property target used to get the absolute exclude paths list.
    _setExcludePatterns(self, value)
        Property target used to set the exclude patterns list.
    _getExcludePatterns(self)
        Property target used to get the exclude patterns list.
    _setExcludeBasenamePatterns(self, value)
        Property target used to set the exclude basename patterns list.
    _getExcludeBasenamePatterns(self)
        Property target used to get the exclude basename patterns list.
    _setIgnoreFile(self, value)
        Property target used to set the ignore file.
    _getIgnoreFile(self)
        Property target used to get the ignore file.
    _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False)
        Internal implementation of addDirContents.
    verify(self)
        Verifies that all entries in the list exist on disk.

Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Class Variables

Inherited from list: __hash__

Properties
      excludeFiles
    Boolean indicating whether files should be excluded.
      excludeDirs
    Boolean indicating whether directories should be excluded.
      excludeLinks
    Boolean indicating whether soft links should be excluded.
      excludePaths
    List of absolute paths to be excluded.
      excludePatterns
    List of regular expression patterns (matching complete path) to be excluded.
      excludeBasenamePatterns
    List of regular expression patterns (matching basename) to be excluded.
      ignoreFile
    Name of file which will cause directory contents to be ignored.

    Inherited from object: __class__

Method Details

__init__(self)
(Constructor)

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addFile(self, path)


    Adds a file to the list.

    The path must exist and must be a file or a link to an existing file. It will be added to the list subject to any exclusions that are in place.

    Parameters:
    • path (String representing a path on disk) - File path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a file or does not exist.
    • ValueError - If the path could not be encoded properly.

    addDir(self, path)


    Adds a directory to the list.

    The path must exist and must be a directory or a link to an existing directory. It will be added to the list subject to any exclusions that are in place. The ignoreFile does not apply to this method, only to addDirContents.

    Parameters:
    • path (String representing a path on disk) - Directory path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.

    addDirContents(self, path, recursive=True, addSelf=True, linkDepth=0, dereference=False)


    Adds the contents of a directory to the list.

    The path must exist and must be a directory or a link to a directory. The contents of the directory (as well as the directory path itself) will be recursively added to the list, subject to any exclusions that are in place. If you only want the directory and its immediate contents to be added, then pass in recursive=False.

    Parameters:
    • path (String representing a path on disk) - Directory path whose contents should be added to the list
    • recursive (Boolean value) - Indicates whether directory contents should be added recursively.
    • addSelf (Boolean value) - Indicates whether the directory itself should be added to the list.
    • linkDepth (Integer value, where zero means not to follow any soft links) - Maximum depth of the tree at which soft links should be followed
    • dereference (Boolean value) - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Notes:
    • If a directory's absolute path matches an exclude pattern or path, or if the directory contains the configured ignore file, then the directory and all of its contents will be recursively excluded from the list.
• If the passed-in directory happens to be a soft link, it will be recursed. However, the linkDepth parameter controls whether any soft links within the directory will be recursed. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links, a depth of 1 follows only links within the passed-in directory, a depth of 2 follows the links at the next level down, etc.
    • Any invalid soft links (i.e. soft links that point to non-existent items) will be silently ignored.
    • The excludeDirs flag only controls whether any given directory path itself is added to the list once it has been discovered. It does not modify any behavior related to directory recursion.
    • If you call this method on a link to a directory that link will never be dereferenced (it may, however, be followed).
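The linkDepth semantics in the notes above can be sketched as a small depth-limited walk. This is a hypothetical illustration of the documented behavior, not the library's `_addDirContentsInternal` code: only symlinked directories consume the depth budget, and each recursive step decrements it.

```python
import os

def walk_with_link_depth(path, link_depth):
    """Collect directory contents, following symlinked directories only
    while link_depth > 0; each recursive call passes link_depth - 1.
    A sketch of the documented linkDepth behavior, not the real code."""
    results = []
    for name in sorted(os.listdir(path)):
        entry = os.path.join(path, name)
        results.append(entry)
        if os.path.isdir(entry):
            if os.path.islink(entry) and link_depth <= 0:
                continue  # depth exhausted: record the link, don't follow it
            results.extend(walk_with_link_depth(entry, link_depth - 1))
    return results
```

With a depth of 0 a symlinked directory is listed but never entered; with a depth of 1 links directly inside the starting directory are followed, but links one level deeper are not.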

    removeFiles(self, pattern=None)


    Removes file entries from the list.

    If pattern is not passed in or is None, then all file entries will be removed from the list. Otherwise, only those file entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all files, then you will be better off setting excludeFiles to True before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeDirs(self, pattern=None)


    Removes directory entries from the list.

    If pattern is not passed in or is None, then all directory entries will be removed from the list. Otherwise, only those directory entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all directories, then you will be better off setting excludeDirs to True before adding items to the list (note that this will not prevent you from recursively adding the contents of directories).

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeLinks(self, pattern=None)


    Removes soft link entries from the list.

    If pattern is not passed in or is None, then all soft link entries will be removed from the list. Otherwise, only those soft link entries matching the pattern will be removed. Any entry which does not exist on disk will be ignored (use removeInvalid to purge those entries).

    This method might be fairly slow for large lists, since it must check the type of each item in the list. If you know ahead of time that you want to exclude all soft links, then you will be better off setting excludeLinks to True before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    removeMatch(self, pattern)


    Removes from the list all entries matching a pattern.

    This method removes from the list all entries which match the passed in pattern. Since there is no need to check the type of each entry, it is faster to call this method than to call the removeFiles, removeDirs or removeLinks methods individually. If you know which patterns you will want to remove ahead of time, you may be better off setting excludePatterns or excludeBasenamePatterns before adding items to the list.

    Parameters:
    • pattern - Regular expression pattern representing entries to remove
    Returns:
    Number of entries removed.
    Raises:
    • ValueError - If the passed-in pattern is not a valid regular expression.

    Note: Unlike when using the exclude lists, the pattern here is not bounded at the front and the back of the string. You can use any pattern you want.

    removeInvalid(self)


    Removes from the list all entries that do not exist on disk.

    This method removes from the list all entries which do not currently exist on disk in some form. No attention is paid to whether the entries are files or directories.

    Returns:
    Number of entries removed.

    _setExcludeFiles(self, value)


    Property target used to set the exclude files flag. No validations, but we normalize the value to True or False.

    _setExcludeDirs(self, value)


    Property target used to set the exclude directories flag. No validations, but we normalize the value to True or False.

    _setExcludeLinks(self, value)


    Property target used to set the exclude soft links flag. No validations, but we normalize the value to True or False.

    _setExcludePaths(self, value)


    Property target used to set the exclude paths list. A None value is converted to an empty list. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If any list element is not an absolute path.

    _setExcludePatterns(self, value)


    Property target used to set the exclude patterns list. A None value is converted to an empty list.

    _setExcludeBasenamePatterns(self, value)


    Property target used to set the exclude basename patterns list. A None value is converted to an empty list.

    _setIgnoreFile(self, value)


    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _addDirContentsInternal(self, path, includePath=True, recursive=True, linkDepth=0, dereference=False)


    Internal implementation of addDirContents.

    This internal implementation exists due to some refactoring. Basically, some subclasses have a need to add the contents of a directory, but not the directory itself. This is different than the standard FilesystemList behavior and actually ends up making a special case out of the first call in the recursive chain. Since I don't want to expose the modified interface, addDirContents ends up being wholly implemented in terms of this method.

The linkDepth parameter controls whether soft links are followed when we are adding the contents recursively. Any recursive calls reduce the value by one. If the value is zero or less, then soft links will just be added as directories, but will not be followed. This means that links are followed to a constant depth starting from the top-most directory.

    There is one difference between soft links and directories: soft links that are added recursively are not placed into the list explicitly. This is because if we do add the links recursively, the resulting tar file gets a little confused (it has a link and a directory with the same name).

    Parameters:
    • path - Directory path whose contents should be added to the list.
    • includePath - Indicates whether to include the path as well as contents.
    • recursive - Indicates whether directory contents should be added recursively.
    • linkDepth - Depth of soft links that should be followed
    • dereference - Indicates whether soft links, if followed, should be dereferenced
    Returns:
    Number of items recursively added to the list
    Raises:
    • ValueError - If path is not a directory or does not exist.

    Note: If you call this method on a link to a directory that link will never be dereferenced (it may, however, be followed).

    verify(self)


    Verifies that all entries in the list exist on disk.

    Returns:
    True if all entries exist, False otherwise.

Property Details

    excludeFiles

    Boolean indicating whether files should be excluded.

    Get Method:
    _getExcludeFiles(self) - Property target used to get the exclude files flag.
    Set Method:
    _setExcludeFiles(self, value) - Property target used to set the exclude files flag.

    excludeDirs

    Boolean indicating whether directories should be excluded.

    Get Method:
    _getExcludeDirs(self) - Property target used to get the exclude directories flag.
    Set Method:
    _setExcludeDirs(self, value) - Property target used to set the exclude directories flag.

    excludeLinks

    Boolean indicating whether soft links should be excluded.

    Get Method:
    _getExcludeLinks(self) - Property target used to get the exclude soft links flag.
    Set Method:
    _setExcludeLinks(self, value) - Property target used to set the exclude soft links flag.

    excludePaths

    List of absolute paths to be excluded.

    Get Method:
    _getExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setExcludePaths(self, value) - Property target used to set the exclude paths list.

    excludePatterns

    List of regular expression patterns (matching complete path) to be excluded.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    excludeBasenamePatterns

    List of regular expression patterns (matching basename) to be excluded.

    Get Method:
    _getExcludeBasenamePatterns(self) - Property target used to get the exclude basename patterns list.
    Set Method:
    _setExcludeBasenamePatterns(self, value) - Property target used to set the exclude basename patterns list.

    ignoreFile

    Name of file which will cause directory contents to be ignored.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.validate-module.html

validate

    Module validate


    Functions

    executeValidate

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.ExtendedAction-class.html

CedarBackup2.config.ExtendedAction

    Class ExtendedAction

    source code

    object --+
             |
            ExtendedAction
    

    Class representing an extended action.

    Essentially, an extended action needs to allow the following to happen:

      exec("from %s import %s" % (module, function))
      exec("%s(action, configPath)" % function)
    

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The module must be a non-empty string and a valid Python identifier.
    • The function must be a non-empty string and a valid Python identifier.
    • If set, the index must be a positive integer.
    • If set, the dependencies attribute must be an ActionDependencies object.
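The exec() dispatch shown above can be sketched without building code strings, using importlib instead (the module and function names below are standard-library stand-ins, not actual Cedar Backup extensions):

```python
import importlib

def run_extended_action(module, function, action, config_path):
    """Look up an extension function by name and invoke it with the
    action name and configuration path, as the exec() calls above do."""
    mod = importlib.import_module(module)  # e.g. "CedarBackup2.extend.subversion" in practice
    func = getattr(mod, function)
    return func(action, config_path)

# Illustrative call using os.path.join as a stand-in extension function:
result = run_extended_action("os.path", "join", "collect", "cback.conf")
```

Resolving the callable with `getattr` keeps the same "module name plus function name" configuration contract while avoiding string-built code.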
    Instance Methods
     
    __init__(self, name=None, module=None, function=None, index=None, dependencies=None)
    Constructor for the ExtendedAction class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setName(self, value)
    Property target used to set the action name.
    source code
     
    _getName(self)
    Property target used to get the action name.
    source code
     
    _setModule(self, value)
    Property target used to set the module name.
    source code
     
    _getModule(self)
    Property target used to get the module name.
    source code
     
    _setFunction(self, value)
    Property target used to set the function name.
    source code
     
    _getFunction(self)
    Property target used to get the function name.
    source code
     
    _setIndex(self, value)
    Property target used to set the action index.
    source code
     
    _getIndex(self)
    Property target used to get the action index.
    source code
     
    _setDependencies(self, value)
    Property target used to set the action dependencies information.
    source code
     
    _getDependencies(self)
    Property target used to get action dependencies information.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      name
    Name of the extended action.
      module
    Name of the module containing the extended action function.
      function
    Name of the extended action function.
      index
    Index of action, used for execution ordering.
      dependencies
    Dependencies for action, used for execution ordering.

    Inherited from object: __class__

    Method Details

    __init__(self, name=None, module=None, function=None, index=None, dependencies=None)
    (Constructor)

    source code 

    Constructor for the ExtendedAction class.

    Parameters:
    • name - Name of the extended action
    • module - Name of the module containing the extended action function
    • function - Name of the extended action function
    • index - Index of action, used for execution ordering
    • dependencies - Dependencies for action, used for execution ordering
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
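The -1/0/1 contract above is shared by all of these configuration classes: fields are compared in order, and the first difference decides the result. A hypothetical standalone helper (not the class's actual code) illustrates the cascade:

```python
def cascade_cmp(self_fields, other_fields):
    """Compare two tuples of fields the way these __cmp__ methods do:
    walk the fields in order and return -1/0/1 at the first difference."""
    for a, b in zip(self_fields, other_fields):
        if a != b:
            return -1 if a < b else 1
    return 0

# Earlier fields dominate later ones:
assert cascade_cmp(("collect", 100), ("collect", 200)) == -1
assert cascade_cmp(("store", 1), ("collect", 999)) == 1
assert cascade_cmp(("store", 1), ("store", 1)) == 0
```

Because comparison short-circuits on the first differing field, the field order in `__cmp__` defines the sort order of the objects.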

    _setName(self, value)

    source code 

    Property target used to set the action name. The value must be a non-empty string if it is not None. It must also consist only of lower-case letters and digits.

    Raises:
    • ValueError - If the value is an empty string.

    _setModule(self, value)

    source code 

    Property target used to set the module name. The value must be a non-empty string if it is not None. It must also be a valid Python identifier.

    Raises:
    • ValueError - If the value is an empty string.

    _setFunction(self, value)

    source code 

    Property target used to set the function name. The value must be a non-empty string if it is not None. It must also be a valid Python identifier.

    Raises:
    • ValueError - If the value is an empty string.

    _setIndex(self, value)

    source code 

    Property target used to set the action index. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setDependencies(self, value)

    source code 

    Property target used to set the action dependencies information. If not None, the value must be an ActionDependencies object.

    Raises:
    • ValueError - If the value is not an ActionDependencies object.

    Property Details

    name

    Name of the extended action.

    Get Method:
    _getName(self) - Property target used to get the action name.
    Set Method:
    _setName(self, value) - Property target used to set the action name.

    module

    Name of the module containing the extended action function.

    Get Method:
    _getModule(self) - Property target used to get the module name.
    Set Method:
    _setModule(self, value) - Property target used to set the module name.

    function

    Name of the extended action function.

    Get Method:
    _getFunction(self) - Property target used to get the function name.
    Set Method:
    _setFunction(self, value) - Property target used to set the function name.

    index

    Index of action, used for execution ordering.

    Get Method:
    _getIndex(self) - Property target used to get the action index.
    Set Method:
    _setIndex(self, value) - Property target used to set the action index.

    dependencies

    Dependencies for action, used for execution ordering.

    Get Method:
    _getDependencies(self) - Property target used to get action dependencies information.
    Set Method:
    _setDependencies(self, value) - Property target used to set the action dependencies information.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.UnorderedList-class.html

    Class UnorderedList

    source code

    object --+    
             |    
          list --+
                 |
                UnorderedList
    
    Known Subclasses:

    Class representing an "unordered list".

    An "unordered list" is a list in which only the contents matter, not the order in which the contents appear in the list.

    For instance, we might be keeping track of a set of paths in a list, because it's convenient to have them in that form. However, for comparison purposes, we would only care that the lists contain exactly the same contents, regardless of order.

    I have come up with two reasonable ways of doing this, plus a couple more that would work but would be a pain to implement. My first method is to copy and sort each list, comparing the sorted versions. This will only work if two lists with exactly the same members are guaranteed to sort in exactly the same order. The second way would be to create two Sets and then compare the sets. However, this would lose information about any duplicates in either list. I've decided to go with option #1 for now. I'll modify this code if I run into problems in the future.

    We override the original __eq__, __ne__, __ge__, __gt__, __le__ and __lt__ list methods to change the definition of the various comparison operators. In all cases, the comparison is changed to return the result of the original operation but instead comparing sorted lists. This is going to be quite a bit slower than a normal list, so you probably only want to use it on small lists.
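A minimal sketch of option #1 as described above, showing just the equality half (the real class also overrides the ordering operators in the same way):

```python
class UnorderedList(list):
    """A list whose comparisons ignore element order, implemented by
    comparing sorted copies of both operands."""

    def __eq__(self, other):
        # Sorting copies preserves duplicates, unlike converting to sets.
        return sorted(self) == sorted(other)

    def __ne__(self, other):
        return not self.__eq__(other)

    __hash__ = None  # mutable and order-insensitive, so not hashable

assert UnorderedList([1, 2, 3]) == UnorderedList([3, 1, 2])
assert UnorderedList([1, 1, 2]) != UnorderedList([1, 2, 2])  # duplicates still matter
```

Each comparison sorts both operands, which is why this is noticeably slower than a plain list and best reserved for small lists.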

    Instance Methods
     
    __eq__(self, other)
    Definition of == operator for this class.
    source code
     
    __ne__(self, other)
    Definition of != operator for this class.
    source code
     
    __ge__(self, other)
    Definition of ≥ operator for this class.
    source code
     
    __gt__(self, other)
    Definition of > operator for this class.
    source code
     
    __le__(self, other)
    Definition of ≤ operator for this class.
    source code
     
    __lt__(self, other)
    Definition of < operator for this class.
    source code

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    __eq__(self, other)
    (Equality operator)

    source code 

    Definition of == operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self == other.
    Overrides: list.__eq__

    __ne__(self, other)

    source code 

    Definition of != operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self != other.
    Overrides: list.__ne__

    __ge__(self, other)
    (Greater-than-or-equals operator)

    source code 

    Definition of ≥ operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self >= other.
    Overrides: list.__ge__

    __gt__(self, other)
    (Greater-than operator)

    source code 

    Definition of > operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self > other.
    Overrides: list.__gt__

    __le__(self, other)
    (Less-than-or-equals operator)

    source code 

    Definition of ≤ operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self <= other.
    Overrides: list.__le__

    __lt__(self, other)
    (Less-than operator)

    source code 

    Definition of < operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    True/false depending on whether self < other.
    Overrides: list.__lt__

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.capacity.PercentageQuantity-class.html

    Class PercentageQuantity

    source code

    object --+
             |
            PercentageQuantity
    

    Class representing a percentage quantity.

    The percentage is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.)

    Even though the quantity is maintained as a string, the string must represent a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative percentage in this context.
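The store-as-string idea described above can be sketched as follows (a simplified illustration, not the class's actual code): keep the exact text for lossless XML round-trips, validate it on assignment, and derive the float on demand.

```python
class PercentageQuantity(object):
    """Sketch: store the percentage as its original string, validated,
    exposing a floating point view only when asked for one."""

    def __init__(self, quantity=None):
        self._quantity = None
        if quantity is not None:
            value = float(quantity)  # raises ValueError for malformed strings
            if value < 0.0:
                raise ValueError("Percentage must not be negative.")
            self._quantity = quantity  # original string preserved exactly

    @property
    def quantity(self):
        return self._quantity

    @property
    def percentage(self):
        # Derived floating point view; 0.0 when no quantity is set.
        return 0.0 if self._quantity is None else float(self._quantity)

q = PercentageQuantity("99.9")
```

Since `q.quantity` is still the literal string "99.9", writing it back to XML reproduces the input byte-for-byte, which a stored float could not guarantee.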

    Instance Methods
     
    __init__(self, quantity=None)
    Constructor for the PercentageQuantity class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setQuantity(self, value)
    Property target used to set the quantity. The value must be a non-empty string if it is not None.
    source code
     
    _getQuantity(self)
    Property target used to get the quantity.
    source code
     
    _getPercentage(self)
    Property target used to get the quantity as a floating point number.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      quantity
    Percentage value, as a string
      percentage
    Percentage value, as a floating point number.

    Inherited from object: __class__

    Method Details

    __init__(self, quantity=None)
    (Constructor)

    source code 

    Constructor for the PercentageQuantity class.

    Parameters:
    • quantity - Percentage quantity, as a string (e.g. "99.9" or "12")
    Raises:
    • ValueError - If the quantity value is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setQuantity(self, value)

    source code 

    Property target used to set the quantity. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number
    • ValueError - If the value is less than zero

    _getPercentage(self)

    source code 

    Property target used to get the quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned.


    Property Details

    quantity

    Percentage value, as a string

    Get Method:
    _getQuantity(self) - Property target used to get the quantity.
    Set Method:
    _setQuantity(self, value) - Property target used to set the quantity. The value must be a non-empty string if it is not None.

    percentage

    Percentage value, as a floating point number.

    Get Method:
    _getPercentage(self) - Property target used to get the quantity as a floating point number.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion-pysrc.html

    Source Code for Module CedarBackup2.extend.subversion

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2005,2007,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Official Cedar Backup Extensions 
      30  # Purpose  : Provides an extension to back up Subversion repositories. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides an extension to back up Subversion repositories. 
      40   
      41  This is a Cedar Backup extension used to back up Subversion repositories via 
      42  the Cedar Backup command line.  Each Subversion repository can be backed up using 
      43  the same collect modes allowed for filesystems in the standard Cedar Backup 
      44  collect action: weekly, daily, incremental. 
      45   
      46  This extension requires a new configuration section <subversion> and is 
      47  intended to be run either immediately before or immediately after the standard 
      48  collect action.  Aside from its own configuration, it requires the options and 
      49  collect configuration sections in the standard Cedar Backup configuration file. 
      50   
      51  There are two different kinds of Subversion repositories at this writing: BDB 
      52  (Berkeley Database) and FSFS (a "filesystem within a filesystem").  Although 
      53  the repository type can be specified in configuration, that information is just 
      54  kept around for reference.  It doesn't affect the backup.  Both kinds of 
      55  repositories are backed up in the same way, using C{svnadmin dump} in an 
      56  incremental mode. 
      57   
      58  It turns out that FSFS repositories can also be backed up just like any 
      59  other filesystem directory.  If you would rather do that, then use the normal 
      60  collect action.  This is probably simpler, although it carries its own 
      61  advantages and disadvantages (plus you will have to be careful to exclude 
      62  the working directories Subversion uses when building an update to commit). 
      63  Check the Subversion documentation for more information. 
      64   
      65  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      66  """ 
      67   
      68  ######################################################################## 
      69  # Imported modules 
      70  ######################################################################## 
      71   
      72  # System modules 
      73  import os 
      74  import logging 
      75  import pickle 
      76  from bz2 import BZ2File 
      77  from gzip import GzipFile 
      78   
      79  # Cedar Backup modules 
      80  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode 
      81  from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList 
      82  from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES 
      83  from CedarBackup2.filesystem import FilesystemList 
      84  from CedarBackup2.util import UnorderedList, RegexList 
      85  from CedarBackup2.util import isStartOfWeek, buildNormalizedPath 
      86  from CedarBackup2.util import resolveCommand, executeCommand 
      87  from CedarBackup2.util import ObjectTypeList, encodePath, changeOwnership 
      88   
      89   
      90  ######################################################################## 
      91  # Module-wide constants and variables 
      92  ######################################################################## 
      93   
      94  logger = logging.getLogger("CedarBackup2.log.extend.subversion") 
      95   
      96  SVNLOOK_COMMAND      = [ "svnlook", ] 
      97  SVNADMIN_COMMAND     = [ "svnadmin", ] 
      98   
      99  REVISION_PATH_EXTENSION = "svnlast" 
    
     100   
     101   
     102  ######################################################################## 
     103  # RepositoryDir class definition 
     104  ######################################################################## 
     105   
     106  class RepositoryDir(object): 
     107   
     108     """ 
     109     Class representing Subversion repository directory. 
     110   
     111     A repository directory is a directory that contains one or more Subversion 
     112     repositories. 
     113   
     114     The following restrictions exist on data in this class: 
     115   
     116        - The directory path must be absolute. 
     117        - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 
     118        - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 
     119   
     120     The repository type value is kept around just for reference. It doesn't 
     121     affect the behavior of the backup. 
     122   
     123     Relative exclusions are allowed here. However, there is no configured 
     124     ignore file, because repository dir backups are not recursive. 
     125   
     126     @sort: __init__, __repr__, __str__, __cmp__, directoryPath, collectMode, compressMode 
     127     """ 
     128   
     129     def __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, 
     130                  relativeExcludePaths=None, excludePatterns=None): 
     131        """ 
     132        Constructor for the C{RepositoryDir} class. 
     133   
     134        @param repositoryType: Type of repository, for reference 
     135        @param directoryPath: Absolute path of the Subversion parent directory 
     136        @param collectMode: Overridden collect mode for this directory. 
     137        @param compressMode: Overridden compression mode for this directory. 
     138        @param relativeExcludePaths: List of relative paths to exclude. 
     139        @param excludePatterns: List of regular expression patterns to exclude 
     140        """ 
     141        self._repositoryType = None 
     142        self._directoryPath = None 
     143        self._collectMode = None 
     144        self._compressMode = None 
     145        self._relativeExcludePaths = None 
     146        self._excludePatterns = None 
     147        self.repositoryType = repositoryType 
     148        self.directoryPath = directoryPath 
     149        self.collectMode = collectMode 
     150        self.compressMode = compressMode 
     151        self.relativeExcludePaths = relativeExcludePaths 
     152        self.excludePatterns = excludePatterns 
     153   
     154     def __repr__(self): 
     155        """ 
     156        Official string representation for class instance. 
     157        """ 
     158        return "RepositoryDir(%s, %s, %s, %s, %s, %s)" % (self.repositoryType, self.directoryPath, self.collectMode, 
     159                                                          self.compressMode, self.relativeExcludePaths, self.excludePatterns) 
     160   
     161     def __str__(self): 
     162        """ 
     163        Informal string representation for class instance. 
     164        """ 
     165        return self.__repr__() 
     166   
     167     def __cmp__(self, other): 
     168        """ 
     169        Definition of equals operator for this class. 
     170        @param other: Other object to compare to. 
     171        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
     172        """ 
     173        if other is None: 
     174           return 1 
     175        if self.repositoryType != other.repositoryType: 
     176           if self.repositoryType < other.repositoryType: 
     177              return -1 
     178           else: 
     179              return 1 
     180        if self.directoryPath != other.directoryPath: 
     181           if self.directoryPath < other.directoryPath: 
     182              return -1 
     183           else: 
     184              return 1 
     185        if self.collectMode != other.collectMode: 
     186           if self.collectMode < other.collectMode: 
     187              return -1 
     188           else: 
     189              return 1 
     190        if self.compressMode != other.compressMode: 
     191           if self.compressMode < other.compressMode: 
     192              return -1 
     193           else: 
     194              return 1 
     195        if self.relativeExcludePaths != other.relativeExcludePaths: 
     196           if self.relativeExcludePaths < other.relativeExcludePaths: 
     197              return -1 
     198           else: 
     199              return 1 
     200        if self.excludePatterns != other.excludePatterns: 
     201           if self.excludePatterns < other.excludePatterns: 
     202              return -1 
     203           else: 
     204              return 1 
     205        return 0 
     206   
     207     def _setRepositoryType(self, value): 
     208        """ 
     209        Property target used to set the repository type. 
     210        There is no validation; this value is kept around just for reference. 
     211        """ 
     212        self._repositoryType = value 
     213   
     214     def _getRepositoryType(self): 
     215        """ 
     216        Property target used to get the repository type. 
     217        """ 
     218        return self._repositoryType 
     219   
     220     def _setDirectoryPath(self, value): 
     221        """ 
     222        Property target used to set the directory path. 
     223        The value must be an absolute path if it is not C{None}. 
     224        It does not have to exist on disk at the time of assignment. 
     225        @raise ValueError: If the value is not an absolute path. 
     226        @raise ValueError: If the value cannot be encoded properly. 
     227        """ 
     228        if value is not None: 
     229           if not os.path.isabs(value): 
     230              raise ValueError("Repository path must be an absolute path.") 
     231        self._directoryPath = encodePath(value) 
     232   
     233     def _getDirectoryPath(self): 
     234        """ 
     235        Property target used to get the repository path. 
     236        """ 
     237        return self._directoryPath 
     238   
     239     def _setCollectMode(self, value): 
     240        """ 
     241        Property target used to set the collect mode. 
     242        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 
     243        @raise ValueError: If the value is not valid. 
     244        """ 
     245        if value is not None: 
     246           if value not in VALID_COLLECT_MODES: 
     247              raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 
     248        self._collectMode = value 
     249   
     250     def _getCollectMode(self): 
     251        """ 
     252        Property target used to get the collect mode. 
     253        """ 
     254        return self._collectMode 
     255   
     256     def _setCompressMode(self, value): 
     257        """ 
     258        Property target used to set the compress mode. 
     259        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 
     260        @raise ValueError: If the value is not valid. 
     261        """ 
     262        if value is not None: 
     263           if value not in VALID_COMPRESS_MODES: 
     264              raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 
     265        self._compressMode = value 
     266   
     267     def _getCompressMode(self): 
     268        """ 
     269        Property target used to get the compress mode. 
     270        """ 
     271        return self._compressMode 
     272   
     273     def _setRelativeExcludePaths(self, value): 
     274        """ 
     275        Property target used to set the relative exclude paths list. 
     276        Elements do not have to exist on disk at the time of assignment. 
     277        """ 
     278        if value is None: 
     279           self._relativeExcludePaths = None 
     280        else: 
     281           try: 
     282              saved = self._relativeExcludePaths 
     283              self._relativeExcludePaths = UnorderedList() 
     284              self._relativeExcludePaths.extend(value) 
     285           except Exception, e: 
     286              self._relativeExcludePaths = saved 
     287              raise e 
     288   
     289     def _getRelativeExcludePaths(self): 
     290        """ 
     291        Property target used to get the relative exclude paths list. 
     292        """ 
     293        return self._relativeExcludePaths 
     294   
     295     def _setExcludePatterns(self, value): 
     296        """ 
     297        Property target used to set the exclude patterns list. 
     298        """ 
     299        if value is None: 
     300           self._excludePatterns = None 
     301        else: 
     302           try: 
     303              saved = self._excludePatterns 
     304              self._excludePatterns = RegexList() 
     305              self._excludePatterns.extend(value) 
     306           except Exception, e: 
     307              self._excludePatterns = saved 
     308              raise e 
     309   
     310     def _getExcludePatterns(self): 
     311        """ 
     312        Property target used to get the exclude patterns list. 
     313        """ 
     314        return self._excludePatterns 
     315   
     316     repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") 
     317     directoryPath = property(_getDirectoryPath, _setDirectoryPath, None, doc="Absolute path of the Subversion parent directory.") 
     318     collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") 
     319     compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.") 
     320     relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.") 
     321     excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.") 
     322   
     323   
     324  ######################################################################## 
     325  # Repository class definition 
     326  ######################################################################## 
     327   
     328  class Repository(object): 
     329   
     330     """ 
     331     Class representing generic Subversion repository configuration. 
     332   
     333     The following restrictions exist on data in this class: 
     334   
     335        - The repository path must be absolute. 
     336        - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 
     337        - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 
     338   
     339     The repository type value is kept around just for reference. It doesn't 
     340     affect the behavior of the backup. 
     341   
     342     @sort: __init__, __repr__, __str__, __cmp__, repositoryPath, collectMode, compressMode 
     343     """ 
     344   
     345     def __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None): 
     346        """ 
     347        Constructor for the C{Repository} class. 
     348   
     349        @param repositoryType: Type of repository, for reference 
     350        @param repositoryPath: Absolute path to a Subversion repository on disk. 
     351        @param collectMode: Overridden collect mode for this directory. 
     352        @param compressMode: Overridden compression mode for this directory. 
     353        """ 
     354        self._repositoryType = None 
     355        self._repositoryPath = None 
     356        self._collectMode = None 
     357        self._compressMode = None 
     358        self.repositoryType = repositoryType 
     359        self.repositoryPath = repositoryPath 
     360        self.collectMode = collectMode 
     361        self.compressMode = compressMode 
     362   
     363     def __repr__(self): 
     364        """ 
     365        Official string representation for class instance. 
     366        """ 
     367        return "Repository(%s, %s, %s, %s)" % (self.repositoryType, self.repositoryPath, self.collectMode, self.compressMode) 
     368   
     369     def __str__(self): 
     370        """ 
     371        Informal string representation for class instance. 
     372        """ 
     373        return self.__repr__() 
     374   
     375     def __cmp__(self, other): 
     376        """ 
     377        Definition of equals operator for this class. 
     378        @param other: Other object to compare to. 
     379        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 
     380        """ 
     381        if other is None: 
     382           return 1 
     383        if self.repositoryType != other.repositoryType: 
     384           if self.repositoryType < other.repositoryType: 
     385              return -1 
     386           else: 
     387              return 1 
     388        if self.repositoryPath != other.repositoryPath: 
     389           if self.repositoryPath < other.repositoryPath: 
     390              return -1 
     391           else: 
     392              return 1 
     393        if self.collectMode != other.collectMode: 
     394           if self.collectMode < other.collectMode: 
     395              return -1 
     396           else: 
     397              return 1 
     398        if self.compressMode != other.compressMode: 
     399           if self.compressMode < other.compressMode: 
     400              return -1 
     401           else: 
     402              return 1 
     403        return 0 
     404   
     405     def _setRepositoryType(self, value): 
     406        """ 
     407        Property target used to set the repository type. 
     408        There is no validation; this value is kept around just for reference. 
     409        """ 
     410        self._repositoryType = value 
     411   
     412     def _getRepositoryType(self): 
     413        """ 
     414        Property target used to get the repository type. 
     415        """ 
     416        return self._repositoryType 
     417   
     418     def _setRepositoryPath(self, value): 
     419        """ 
     420        Property target used to set the repository path. 
     421        The value must be an absolute path if it is not C{None}. 
     422        It does not have to exist on disk at the time of assignment. 
     423        @raise ValueError: If the value is not an absolute path. 
     424        @raise ValueError: If the value cannot be encoded properly. 
     425        """ 
     426        if value is not None: 
     427           if not os.path.isabs(value): 
     428              raise ValueError("Repository path must be an absolute path.") 
     429        self._repositoryPath = encodePath(value) 
     430   
     431     def _getRepositoryPath(self): 
     432        """ 
     433        Property target used to get the repository path. 
     434        """ 
     435        return self._repositoryPath 
     436   
     437     def _setCollectMode(self, value): 
     438        """ 
     439        Property target used to set the collect mode. 
     440        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 
     441        @raise ValueError: If the value is not valid. 
     442        """ 
     443        if value is not None: 
     444           if value not in VALID_COLLECT_MODES: 
     445              raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 
     446        self._collectMode = value 
     447   
     448     def _getCollectMode(self): 
     449        """ 
     450        Property target used to get the collect mode. 
     451        """ 
     452        return self._collectMode 
     453   
     454     def _setCompressMode(self, value): 
     455        """ 
     456        Property target used to set the compress mode. 
     457        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 
     458        @raise ValueError: If the value is not valid. 
     459        """ 
     460        if value is not None: 
     461           if value not in VALID_COMPRESS_MODES: 
     462              raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 
     463        self._compressMode = value 
     464   
     465     def _getCompressMode(self): 
     466        """ 
     467        Property target used to get the compress mode. 
     468        """ 
     469        return self._compressMode 
    470 471 repositoryType = property(_getRepositoryType, _setRepositoryType, None, doc="Type of this repository, for reference.") 472 repositoryPath = property(_getRepositoryPath, _setRepositoryPath, None, doc="Path to the repository to collect.") 473 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this repository.") 474 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this repository.")
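Each attribute above follows a consistent property-target pattern: a private `_setX` validates the incoming value, `_getX` returns it, and `property()` binds the pair. A minimal sketch of the same pattern (the `Sketch` class and `VALID_MODES` list are stand-ins, not taken from this module):

```python
VALID_MODES = ["daily", "weekly", "incr", "incr"]  # stand-in for VALID_COLLECT_MODES
VALID_MODES = ["daily", "weekly", "incr"]

class Sketch(object):
    """Illustrates the validate-in-setter property pattern used above."""

    def __init__(self):
        self._collectMode = None

    def _setCollectMode(self, value):
        # None is always allowed; anything else must be a known mode.
        if value is not None and value not in VALID_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        return self._collectMode

    collectMode = property(_getCollectMode, _setCollectMode, None, "Collect mode.")

s = Sketch()
s.collectMode = "incr"   # passes validation
print(s.collectMode)     # incr
```

The payoff is that invalid assignments fail immediately at the attribute, so a configuration object can never silently hold a bad mode.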
    475
    476 477 ######################################################################## 478 # SubversionConfig class definition 479 ######################################################################## 480 481 -class SubversionConfig(object):
    482 483 """ 484 Class representing Subversion configuration. 485 486 Subversion configuration is used for backing up Subversion repositories. 487 488 The following restrictions exist on data in this class: 489 490 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 491 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 492 - The repositories list must be a list of C{Repository} objects. 493 - The repositoryDirs list must be a list of C{RepositoryDir} objects. 494 495 For the two lists, validation is accomplished through the 496 L{util.ObjectTypeList} list implementation that overrides common list 497 methods and transparently ensures that each element has the correct type. 498 499 @note: Lists within this class are "unordered" for equality comparisons. 500 501 @sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, repositories 502 """ 503
    504 - def __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None):
    505 """ 506 Constructor for the C{SubversionConfig} class. 507 508 @param collectMode: Default collect mode. 509 @param compressMode: Default compress mode. 510 @param repositories: List of Subversion repositories to back up. 511 @param repositoryDirs: List of Subversion parent directories to back up. 512 513 @raise ValueError: If one of the values is invalid. 514 """ 515 self._collectMode = None 516 self._compressMode = None 517 self._repositories = None 518 self._repositoryDirs = None 519 self.collectMode = collectMode 520 self.compressMode = compressMode 521 self.repositories = repositories 522 self.repositoryDirs = repositoryDirs
    523
    524 - def __repr__(self):
    525 """ 526 Official string representation for class instance. 527 """ 528 return "SubversionConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.repositories, self.repositoryDirs)
    529
    530 - def __str__(self):
    531 """ 532 Informal string representation for class instance. 533 """ 534 return self.__repr__()
    535
    536 - def __cmp__(self, other):
    537 """ 538 Definition of equals operator for this class. 539 Lists within this class are "unordered" for equality comparisons. 540 @param other: Other object to compare to. 541 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 542 """ 543 if other is None: 544 return 1 545 if self.collectMode != other.collectMode: 546 if self.collectMode < other.collectMode: 547 return -1 548 else: 549 return 1 550 if self.compressMode != other.compressMode: 551 if self.compressMode < other.compressMode: 552 return -1 553 else: 554 return 1 555 if self.repositories != other.repositories: 556 if self.repositories < other.repositories: 557 return -1 558 else: 559 return 1 560 if self.repositoryDirs != other.repositoryDirs: 561 if self.repositoryDirs < other.repositoryDirs: 562 return -1 563 else: 564 return 1 565 return 0
    566
    567 - def _setCollectMode(self, value):
    568 """ 569 Property target used to set the collect mode. 570 If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}. 571 @raise ValueError: If the value is not valid. 572 """ 573 if value is not None: 574 if value not in VALID_COLLECT_MODES: 575 raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES) 576 self._collectMode = value
    577
    578 - def _getCollectMode(self):
    579 """ 580 Property target used to get the collect mode. 581 """ 582 return self._collectMode
    583
    584 - def _setCompressMode(self, value):
    585 """ 586 Property target used to set the compress mode. 587 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 588 @raise ValueError: If the value is not valid. 589 """ 590 if value is not None: 591 if value not in VALID_COMPRESS_MODES: 592 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 593 self._compressMode = value
    594
    595 - def _getCompressMode(self):
    596 """ 597 Property target used to get the compress mode. 598 """ 599 return self._compressMode
    600
    601 - def _setRepositories(self, value):
    602 """ 603 Property target used to set the repositories list. 604 Either the value must be C{None} or each element must be a C{Repository}. 605 @raise ValueError: If the value is not a C{Repository} 606 """ 607 if value is None: 608 self._repositories = None 609 else: 610 try: 611 saved = self._repositories 612 self._repositories = ObjectTypeList(Repository, "Repository") 613 self._repositories.extend(value) 614 except Exception, e: 615 self._repositories = saved 616 raise e
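The setter above makes list assignment all-or-nothing: it saves the old list, builds a fresh type-checked list, and rolls back if any element fails validation. A sketch with a hypothetical `TypedList` standing in for `util.ObjectTypeList`:

```python
class TypedList(list):
    """Hypothetical stand-in for ObjectTypeList: accepts only one element type."""

    def __init__(self, objectType):
        list.__init__(self)
        self._objectType = objectType

    def append(self, item):
        if not isinstance(item, self._objectType):
            raise ValueError("Item must be a %s." % self._objectType.__name__)
        list.append(self, item)

    def extend(self, items):
        for item in items:
            self.append(item)

class Holder(object):
    """Mirrors the save/restore idiom of _setRepositories."""

    def __init__(self):
        self._items = None

    def set_items(self, value):
        if value is None:
            self._items = None
            return
        saved = self._items
        try:
            self._items = TypedList(str)
            self._items.extend(value)
        except Exception:
            self._items = saved  # roll back: one bad element discards nothing
            raise
```

Because the rollback restores the previous list, a failed assignment leaves the object exactly as it was, rather than half-populated.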
    617
    618 - def _getRepositories(self):
    619 """ 620 Property target used to get the repositories list. 621 """ 622 return self._repositories
    623
    624 - def _setRepositoryDirs(self, value):
625 """ 626 Property target used to set the repositoryDirs list. 627 Either the value must be C{None} or each element must be a C{RepositoryDir}. 628 @raise ValueError: If the value is not a C{RepositoryDir} 629 """ 630 if value is None: 631 self._repositoryDirs = None 632 else: 633 try: 634 saved = self._repositoryDirs 635 self._repositoryDirs = ObjectTypeList(RepositoryDir, "RepositoryDir") 636 self._repositoryDirs.extend(value) 637 except Exception, e: 638 self._repositoryDirs = saved 639 raise e
    640
    641 - def _getRepositoryDirs(self):
    642 """ 643 Property target used to get the repositoryDirs list. 644 """ 645 return self._repositoryDirs
    646 647 collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.") 648 compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.") 649 repositories = property(_getRepositories, _setRepositories, None, doc="List of Subversion repositories to back up.") 650 repositoryDirs = property(_getRepositoryDirs, _setRepositoryDirs, None, doc="List of Subversion parent directories to back up.")
    651
    652 653 ######################################################################## 654 # LocalConfig class definition 655 ######################################################################## 656 657 -class LocalConfig(object):
    658 659 """ 660 Class representing this extension's configuration document. 661 662 This is not a general-purpose configuration object like the main Cedar 663 Backup configuration object. Instead, it just knows how to parse and emit 664 Subversion-specific configuration values. Third parties who need to read 665 and write configuration related to this extension should access it through 666 the constructor, C{validate} and C{addConfig} methods. 667 668 @note: Lists within this class are "unordered" for equality comparisons. 669 670 @sort: __init__, __repr__, __str__, __cmp__, subversion, validate, addConfig 671 """ 672
    673 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    674 """ 675 Initializes a configuration object. 676 677 If you initialize the object without passing either C{xmlData} or 678 C{xmlPath} then configuration will be empty and will be invalid until it 679 is filled in properly. 680 681 No reference to the original XML data or original path is saved off by 682 this class. Once the data has been parsed (successfully or not) this 683 original information is discarded. 684 685 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 686 method will be called (with its default arguments) against configuration 687 after successfully parsing any passed-in XML. Keep in mind that even if 688 C{validate} is C{False}, it might not be possible to parse the passed-in 689 XML document if lower-level validations fail. 690 691 @note: It is strongly suggested that the C{validate} option always be set 692 to C{True} (the default) unless there is a specific need to read in 693 invalid configuration from disk. 694 695 @param xmlData: XML data representing configuration. 696 @type xmlData: String data. 697 698 @param xmlPath: Path to an XML file on disk. 699 @type xmlPath: Absolute path to a file on disk. 700 701 @param validate: Validate the document after parsing it. 702 @type validate: Boolean true/false. 703 704 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 705 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 706 @raise ValueError: If the parsed configuration document is not valid. 707 """ 708 self._subversion = None 709 self.subversion = None 710 if xmlData is not None and xmlPath is not None: 711 raise ValueError("Use either xmlData or xmlPath, but not both.") 712 if xmlData is not None: 713 self._parseXmlData(xmlData) 714 if validate: 715 self.validate() 716 elif xmlPath is not None: 717 xmlData = open(xmlPath).read() 718 self._parseXmlData(xmlData) 719 if validate: 720 self.validate()
    721
    722 - def __repr__(self):
    723 """ 724 Official string representation for class instance. 725 """ 726 return "LocalConfig(%s)" % (self.subversion)
    727
    728 - def __str__(self):
    729 """ 730 Informal string representation for class instance. 731 """ 732 return self.__repr__()
    733
    734 - def __cmp__(self, other):
    735 """ 736 Definition of equals operator for this class. 737 Lists within this class are "unordered" for equality comparisons. 738 @param other: Other object to compare to. 739 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 740 """ 741 if other is None: 742 return 1 743 if self.subversion != other.subversion: 744 if self.subversion < other.subversion: 745 return -1 746 else: 747 return 1 748 return 0
    749
    750 - def _setSubversion(self, value):
    751 """ 752 Property target used to set the subversion configuration value. 753 If not C{None}, the value must be a C{SubversionConfig} object. 754 @raise ValueError: If the value is not a C{SubversionConfig} 755 """ 756 if value is None: 757 self._subversion = None 758 else: 759 if not isinstance(value, SubversionConfig): 760 raise ValueError("Value must be a C{SubversionConfig} object.") 761 self._subversion = value
    762
    763 - def _getSubversion(self):
    764 """ 765 Property target used to get the subversion configuration value. 766 """ 767 return self._subversion
    768 769 subversion = property(_getSubversion, _setSubversion, None, "Subversion configuration in terms of a C{SubversionConfig} object.") 770
    771 - def validate(self):
772 """ 773 Validates configuration represented by the object. 774 775 Subversion configuration must be filled in. Within that, the collect 776 mode and compress mode are both optional, but the list of repositories 777 must contain at least one entry. 778 779 Each repository must contain a repository path, and then must be either 780 able to take collect mode and compress mode configuration from the parent 781 C{SubversionConfig} object, or must set each value on its own. 782 783 @raise ValueError: If one of the validations fails. 784 """ 785 if self.subversion is None: 786 raise ValueError("Subversion section is required.") 787 if ((self.subversion.repositories is None or len(self.subversion.repositories) < 1) and 788 (self.subversion.repositoryDirs is None or len(self.subversion.repositoryDirs) < 1)): 789 raise ValueError("At least one Subversion repository must be configured.") 790 if self.subversion.repositories is not None: 791 for repository in self.subversion.repositories: 792 if repository.repositoryPath is None: 793 raise ValueError("Each repository must set a repository path.") 794 if self.subversion.collectMode is None and repository.collectMode is None: 795 raise ValueError("Collect mode must either be set in parent section or individual repository.") 796 if self.subversion.compressMode is None and repository.compressMode is None: 797 raise ValueError("Compress mode must either be set in parent section or individual repository.") 798 if self.subversion.repositoryDirs is not None: 799 for repositoryDir in self.subversion.repositoryDirs: 800 if repositoryDir.directoryPath is None: 801 raise ValueError("Each repository directory must set a directory path.") 802 if self.subversion.collectMode is None and repositoryDir.collectMode is None: 803 raise ValueError("Collect mode must either be set in parent section or repository directory.") 804 if self.subversion.compressMode is None and repositoryDir.compressMode is None: 805 raise ValueError("Compress mode must either be set in parent section or repository directory.")
    806
    807 - def addConfig(self, xmlDom, parentNode):
808 """ 809 Adds a <subversion> configuration section as the next child of a parent. 810 811 Third parties should use this function to write configuration related to 812 this extension. 813 814 We add the following fields to the document:: 815 816 collectMode //cb_config/subversion/collect_mode 817 compressMode //cb_config/subversion/compress_mode 818 819 We also add groups of the following items, one list element per 820 item:: 821 822 repository //cb_config/subversion/repository 823 repository_dir //cb_config/subversion/repository_dir 824 825 @param xmlDom: DOM tree as from C{impl.createDocument()}. 826 @param parentNode: Parent that the section should be appended to. 827 """ 828 if self.subversion is not None: 829 sectionNode = addContainerNode(xmlDom, parentNode, "subversion") 830 addStringNode(xmlDom, sectionNode, "collect_mode", self.subversion.collectMode) 831 addStringNode(xmlDom, sectionNode, "compress_mode", self.subversion.compressMode) 832 if self.subversion.repositories is not None: 833 for repository in self.subversion.repositories: 834 LocalConfig._addRepository(xmlDom, sectionNode, repository) 835 if self.subversion.repositoryDirs is not None: 836 for repositoryDir in self.subversion.repositoryDirs: 837 LocalConfig._addRepositoryDir(xmlDom, sectionNode, repositoryDir)
    838
    839 - def _parseXmlData(self, xmlData):
    840 """ 841 Internal method to parse an XML string into the object. 842 843 This method parses the XML document into a DOM tree (C{xmlDom}) and then 844 calls a static method to parse the subversion configuration section. 845 846 @param xmlData: XML data to be parsed 847 @type xmlData: String data 848 849 @raise ValueError: If the XML cannot be successfully parsed. 850 """ 851 (xmlDom, parentNode) = createInputDom(xmlData) 852 self._subversion = LocalConfig._parseSubversion(parentNode)
    853 854 @staticmethod
    855 - def _parseSubversion(parent):
    856 """ 857 Parses a subversion configuration section. 858 859 We read the following individual fields:: 860 861 collectMode //cb_config/subversion/collect_mode 862 compressMode //cb_config/subversion/compress_mode 863 864 We also read groups of the following item, one list element per 865 item:: 866 867 repositories //cb_config/subversion/repository 868 repository_dirs //cb_config/subversion/repository_dir 869 870 The repositories are parsed by L{_parseRepositories}, and the repository 871 dirs are parsed by L{_parseRepositoryDirs}. 872 873 @param parent: Parent node to search beneath. 874 875 @return: C{SubversionConfig} object or C{None} if the section does not exist. 876 @raise ValueError: If some filled-in value is invalid. 877 """ 878 subversion = None 879 section = readFirstChild(parent, "subversion") 880 if section is not None: 881 subversion = SubversionConfig() 882 subversion.collectMode = readString(section, "collect_mode") 883 subversion.compressMode = readString(section, "compress_mode") 884 subversion.repositories = LocalConfig._parseRepositories(section) 885 subversion.repositoryDirs = LocalConfig._parseRepositoryDirs(section) 886 return subversion
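Concretely, the section parsed here corresponds to XML like the sample below. This sketch uses the standard library's `xml.dom.minidom` in place of the module's `createInputDom`/`readString` helpers, and the document content is inferred from the field paths listed above (the repository values are made up for illustration):

```python
from xml.dom.minidom import parseString

XML = """<cb_config>
  <subversion>
    <collect_mode>incr</collect_mode>
    <compress_mode>gzip</compress_mode>
    <repository>
      <type>BDB</type>
      <abs_path>/opt/public/svn/repo1</abs_path>
    </repository>
  </subversion>
</cb_config>"""

def read_string(parent, tag):
    """Minimal readString(): text of the first matching descendant, or None."""
    nodes = parent.getElementsByTagName(tag)
    return nodes[0].firstChild.data.strip() if nodes else None

section = parseString(XML).getElementsByTagName("subversion")[0]
print(read_string(section, "collect_mode"))   # incr
print(read_string(section, "compress_mode"))  # gzip
```

Note that `getElementsByTagName` searches all descendants, whereas the real `readString` reads only immediate children; the distinction matters once `<repository>` elements carry their own `collect_mode` overrides.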
    887 888 @staticmethod
    889 - def _parseRepositories(parent):
890 """ 891 Reads a list of C{Repository} objects from immediately beneath the parent. 892 893 We read the following individual fields:: 894 895 repositoryType type 896 repositoryPath abs_path 897 collectMode collect_mode 898 compressMode compress_mode 899 900 The type field is optional, and its value is kept around only for 901 reference. 902 903 @param parent: Parent node to search beneath. 904 905 @return: List of C{Repository} objects or C{None} if none are found. 906 @raise ValueError: If some filled-in value is invalid. 907 """ 908 lst = [] 909 for entry in readChildren(parent, "repository"): 910 if isElement(entry): 911 repository = Repository() 912 repository.repositoryType = readString(entry, "type") 913 repository.repositoryPath = readString(entry, "abs_path") 914 repository.collectMode = readString(entry, "collect_mode") 915 repository.compressMode = readString(entry, "compress_mode") 916 lst.append(repository) 917 if lst == []: 918 lst = None 919 return lst
    920 921 @staticmethod
    922 - def _addRepository(xmlDom, parentNode, repository):
    923 """ 924 Adds a repository container as the next child of a parent. 925 926 We add the following fields to the document:: 927 928 repositoryType repository/type 929 repositoryPath repository/abs_path 930 collectMode repository/collect_mode 931 compressMode repository/compress_mode 932 933 The <repository> node itself is created as the next child of the parent 934 node. This method only adds one repository node. The parent must loop 935 for each repository in the C{SubversionConfig} object. 936 937 If C{repository} is C{None}, this method call will be a no-op. 938 939 @param xmlDom: DOM tree as from C{impl.createDocument()}. 940 @param parentNode: Parent that the section should be appended to. 941 @param repository: Repository to be added to the document. 942 """ 943 if repository is not None: 944 sectionNode = addContainerNode(xmlDom, parentNode, "repository") 945 addStringNode(xmlDom, sectionNode, "type", repository.repositoryType) 946 addStringNode(xmlDom, sectionNode, "abs_path", repository.repositoryPath) 947 addStringNode(xmlDom, sectionNode, "collect_mode", repository.collectMode) 948 addStringNode(xmlDom, sectionNode, "compress_mode", repository.compressMode)
    949 950 @staticmethod
    951 - def _parseRepositoryDirs(parent):
952 """ 953 Reads a list of C{RepositoryDir} objects from immediately beneath the parent. 954 955 We read the following individual fields:: 956 957 repositoryType type 958 directoryPath abs_path 959 collectMode collect_mode 960 compressMode compress_mode 961 962 We also read groups of the following items, one list element per 963 item:: 964 965 relativeExcludePaths exclude/rel_path 966 excludePatterns exclude/pattern 967 968 The exclusions are parsed by L{_parseExclusions}. 969 970 The type field is optional, and its value is kept around only for 971 reference. 972 973 @param parent: Parent node to search beneath. 974 975 @return: List of C{RepositoryDir} objects or C{None} if none are found. 976 @raise ValueError: If some filled-in value is invalid. 977 """ 978 lst = [] 979 for entry in readChildren(parent, "repository_dir"): 980 if isElement(entry): 981 repositoryDir = RepositoryDir() 982 repositoryDir.repositoryType = readString(entry, "type") 983 repositoryDir.directoryPath = readString(entry, "abs_path") 984 repositoryDir.collectMode = readString(entry, "collect_mode") 985 repositoryDir.compressMode = readString(entry, "compress_mode") 986 (repositoryDir.relativeExcludePaths, repositoryDir.excludePatterns) = LocalConfig._parseExclusions(entry) 987 lst.append(repositoryDir) 988 if lst == []: 989 lst = None 990 return lst
    991 992 @staticmethod
    993 - def _parseExclusions(parentNode):
    994 """ 995 Reads exclusions data from immediately beneath the parent. 996 997 We read groups of the following items, one list element per item:: 998 999 relative exclude/rel_path 1000 patterns exclude/pattern 1001 1002 If there are none of some pattern (i.e. no relative path items) then 1003 C{None} will be returned for that item in the tuple. 1004 1005 @param parentNode: Parent node to search beneath. 1006 1007 @return: Tuple of (relative, patterns) exclusions. 1008 """ 1009 section = readFirstChild(parentNode, "exclude") 1010 if section is None: 1011 return (None, None) 1012 else: 1013 relative = readStringList(section, "rel_path") 1014 patterns = readStringList(section, "pattern") 1015 return (relative, patterns)
    1016 1017 @staticmethod
    1018 - def _addRepositoryDir(xmlDom, parentNode, repositoryDir):
1019 """ 1020 Adds a repository dir container as the next child of a parent. 1021 1022 We add the following fields to the document:: 1023 1024 repositoryType repository_dir/type 1025 directoryPath repository_dir/abs_path 1026 collectMode repository_dir/collect_mode 1027 compressMode repository_dir/compress_mode 1028 1029 We also add groups of the following items, one list element per item:: 1030 1031 relativeExcludePaths repository_dir/exclude/rel_path 1032 excludePatterns repository_dir/exclude/pattern 1033 1034 The <repository_dir> node itself is created as the next child of the 1035 parent node. This method only adds one repository dir node. The parent must 1036 loop for each repository dir in the C{SubversionConfig} object. 1037 1038 If C{repositoryDir} is C{None}, this method call will be a no-op. 1039 1040 @param xmlDom: DOM tree as from C{impl.createDocument()}. 1041 @param parentNode: Parent that the section should be appended to. 1042 @param repositoryDir: Repository dir to be added to the document. 1043 """ 1044 if repositoryDir is not None: 1045 sectionNode = addContainerNode(xmlDom, parentNode, "repository_dir") 1046 addStringNode(xmlDom, sectionNode, "type", repositoryDir.repositoryType) 1047 addStringNode(xmlDom, sectionNode, "abs_path", repositoryDir.directoryPath) 1048 addStringNode(xmlDom, sectionNode, "collect_mode", repositoryDir.collectMode) 1049 addStringNode(xmlDom, sectionNode, "compress_mode", repositoryDir.compressMode) 1050 if ((repositoryDir.relativeExcludePaths is not None and repositoryDir.relativeExcludePaths != []) or 1051 (repositoryDir.excludePatterns is not None and repositoryDir.excludePatterns != [])): 1052 excludeNode = addContainerNode(xmlDom, sectionNode, "exclude") 1053 if repositoryDir.relativeExcludePaths is not None: 1054 for relativePath in repositoryDir.relativeExcludePaths: 1055 addStringNode(xmlDom, excludeNode, "rel_path", relativePath) 1056 if repositoryDir.excludePatterns is not None: 1057 for pattern in repositoryDir.excludePatterns: 1058 addStringNode(xmlDom, excludeNode, "pattern", pattern)
    1059
    1060 1061 ######################################################################## 1062 # Public functions 1063 ######################################################################## 1064 1065 ########################### 1066 # executeAction() function 1067 ########################### 1068 1069 -def executeAction(configPath, options, config):
    1070 """ 1071 Executes the Subversion backup action. 1072 1073 @param configPath: Path to configuration file on disk. 1074 @type configPath: String representing a path on disk. 1075 1076 @param options: Program command-line options. 1077 @type options: Options object. 1078 1079 @param config: Program configuration. 1080 @type config: Config object. 1081 1082 @raise ValueError: Under many generic error conditions 1083 @raise IOError: If a backup could not be written for some reason. 1084 """ 1085 logger.debug("Executing Subversion extended action.") 1086 if config.options is None or config.collect is None: 1087 raise ValueError("Cedar Backup configuration is not properly filled in.") 1088 local = LocalConfig(xmlPath=configPath) 1089 todayIsStart = isStartOfWeek(config.options.startingDay) 1090 fullBackup = options.full or todayIsStart 1091 logger.debug("Full backup flag is [%s]", fullBackup) 1092 if local.subversion.repositories is not None: 1093 for repository in local.subversion.repositories: 1094 _backupRepository(config, local, todayIsStart, fullBackup, repository) 1095 if local.subversion.repositoryDirs is not None: 1096 for repositoryDir in local.subversion.repositoryDirs: 1097 logger.debug("Working with repository directory [%s].", repositoryDir.directoryPath) 1098 for repositoryPath in _getRepositoryPaths(repositoryDir): 1099 repository = Repository(repositoryDir.repositoryType, repositoryPath, 1100 repositoryDir.collectMode, repositoryDir.compressMode) 1101 _backupRepository(config, local, todayIsStart, fullBackup, repository) 1102 logger.info("Completed backing up Subversion repository directory [%s].", repositoryDir.directoryPath) 1103 logger.info("Executed the Subversion extended action successfully.")
    1104
    1105 -def _getCollectMode(local, repository):
1106 """ 1107 Gets the collect mode that should be used for a repository. 1108 Use repository's if possible, otherwise take from subversion section. @param local: LocalConfig object. 1109 @param repository: Repository object. 1110 @return: Collect mode to use. 1111 """ 1112 if repository.collectMode is None: 1113 collectMode = local.subversion.collectMode 1114 else: 1115 collectMode = repository.collectMode 1116 logger.debug("Collect mode is [%s]", collectMode) 1117 return collectMode
    1118
    1119 -def _getCompressMode(local, repository):
    1120 """ 1121 Gets the compress mode that should be used for a repository. 1122 Use repository's if possible, otherwise take from subversion section. 1123 @param local: LocalConfig object. 1124 @param repository: Repository object. 1125 @return: Compress mode to use. 1126 """ 1127 if repository.compressMode is None: 1128 compressMode = local.subversion.compressMode 1129 else: 1130 compressMode = repository.compressMode 1131 logger.debug("Compress mode is [%s]", compressMode) 1132 return compressMode
    1133
    1134 -def _getRevisionPath(config, repository):
    1135 """ 1136 Gets the path to the revision file associated with a repository. 1137 @param config: Config object. 1138 @param repository: Repository object. 1139 @return: Absolute path to the revision file associated with the repository. 1140 """ 1141 normalized = buildNormalizedPath(repository.repositoryPath) 1142 filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION) 1143 revisionPath = os.path.join(config.options.workingDir, filename) 1144 logger.debug("Revision file path is [%s]", revisionPath) 1145 return revisionPath
    1146
    1147 -def _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision):
    1148 """ 1149 Gets the backup file path (including correct extension) associated with a repository. 1150 @param config: Config object. 1151 @param repositoryPath: Path to the indicated repository 1152 @param compressMode: Compress mode to use for this repository. 1153 @param startRevision: Starting repository revision. 1154 @param endRevision: Ending repository revision. 1155 @return: Absolute path to the backup file associated with the repository. 1156 """ 1157 normalizedPath = buildNormalizedPath(repositoryPath) 1158 filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath) 1159 if compressMode == 'gzip': 1160 filename = "%s.gz" % filename 1161 elif compressMode == 'bzip2': 1162 filename = "%s.bz2" % filename 1163 backupPath = os.path.join(config.collect.targetDir, filename) 1164 logger.debug("Backup file path is [%s]", backupPath) 1165 return backupPath
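For instance, an incremental dump of revisions 11 through 42 of a repository whose path normalizes to `opt-public-svn-repo1`, with gzip compression, lands in `svndump-11:42-opt-public-svn-repo1.txt.gz` under the collect target directory. A sketch of just the naming step (the `backup_filename` helper is hypothetical; it mirrors the logic above without the `config` lookup):

```python
def backup_filename(normalizedPath, compressMode, startRevision, endRevision):
    """Mirror of the filename construction in _getBackupPath."""
    filename = "svndump-%d:%d-%s.txt" % (startRevision, endRevision, normalizedPath)
    if compressMode == "gzip":
        filename = "%s.gz" % filename       # gzip-compressed dump
    elif compressMode == "bzip2":
        filename = "%s.bz2" % filename      # bzip2-compressed dump
    return filename                          # "none": plain text dump

print(backup_filename("opt-public-svn-repo1", "gzip", 11, 42))
# svndump-11:42-opt-public-svn-repo1.txt.gz
```

Embedding the revision range in the name means successive incremental dumps never collide, and the covered range is visible at a glance.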
    1166
    1167 -def _getRepositoryPaths(repositoryDir):
1168 """ 1169 Gets a list of child repository paths within a repository directory. 1170 @param repositoryDir: RepositoryDir object. @return: FilesystemList of absolute paths to child repositories. 1171 """ 1172 (excludePaths, excludePatterns) = _getExclusions(repositoryDir) 1173 fsList = FilesystemList() 1174 fsList.excludeFiles = True 1175 fsList.excludeLinks = True 1176 fsList.excludePaths = excludePaths 1177 fsList.excludePatterns = excludePatterns 1178 fsList.addDirContents(path=repositoryDir.directoryPath, recursive=False, addSelf=False) 1179 return fsList
    1180
    1181 -def _getExclusions(repositoryDir):
1182 """ 1183 Gets exclusions (files and patterns) associated with a repository directory. 1184 1185 The returned files value is a list of absolute paths to be excluded from the 1186 backup for a given directory. It is derived from the repository directory's 1187 relative exclude paths. 1188 1189 The returned patterns value is a list of patterns to be excluded from the 1190 backup for a given directory. It is derived from the repository directory's 1191 list of patterns. 1192 1193 @param repositoryDir: Repository directory object. 1194 1195 @return: Tuple (files, patterns) indicating what to exclude. 1196 """ 1197 paths = [] 1198 if repositoryDir.relativeExcludePaths is not None: 1199 for relativePath in repositoryDir.relativeExcludePaths: 1200 paths.append(os.path.join(repositoryDir.directoryPath, relativePath)) 1201 patterns = [] 1202 if repositoryDir.excludePatterns is not None: 1203 patterns.extend(repositoryDir.excludePatterns) 1204 logger.debug("Exclude paths: %s", paths) 1205 logger.debug("Exclude patterns: %s", patterns) 1206 return (paths, patterns)
    1207
    1208 -def _backupRepository(config, local, todayIsStart, fullBackup, repository):
1209 """ 1210 Backs up an individual Subversion repository. 1211 1212 This internal method wraps the public methods and adds some functionality 1213 to work better with the extended action itself. 1214 1215 @param config: Cedar Backup configuration. 1216 @param local: Local configuration 1217 @param todayIsStart: Indicates whether today is start of week 1218 @param fullBackup: Full backup flag 1219 @param repository: Repository to operate on 1220 1221 @raise ValueError: If some value is missing or invalid. 1222 @raise IOError: If there is a problem executing the Subversion dump. 1223 """ 1224 logger.debug("Working with repository [%s]", repository.repositoryPath) 1225 logger.debug("Repository type is [%s]", repository.repositoryType) 1226 collectMode = _getCollectMode(local, repository) 1227 compressMode = _getCompressMode(local, repository) 1228 revisionPath = _getRevisionPath(config, repository) 1229 if not (fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart)): 1230 logger.debug("Repository will not be backed up, per collect mode.") 1231 return 1232 logger.debug("Repository meets criteria to be backed up today.") 1233 if collectMode != "incr" or fullBackup: 1234 startRevision = 0 1235 endRevision = getYoungestRevision(repository.repositoryPath) 1236 logger.debug("Using full backup, revision: (%d, %d).", startRevision, endRevision) 1237 else: 1238 if fullBackup: 1239 startRevision = 0 1240 endRevision = getYoungestRevision(repository.repositoryPath) 1241 else: 1242 startRevision = _loadLastRevision(revisionPath) + 1 1243 endRevision = getYoungestRevision(repository.repositoryPath) 1244 if startRevision > endRevision: 1245 logger.info("No need to back up repository [%s]; no new revisions.", repository.repositoryPath) 1246 return 1247 logger.debug("Using incremental backup, revision: (%d, %d).", startRevision, endRevision) 1248 backupPath = _getBackupPath(config, repository.repositoryPath, compressMode, startRevision, endRevision) 1249 outputFile = _getOutputFile(backupPath, compressMode) 1250 try: 1251 backupRepository(repository.repositoryPath, outputFile, startRevision, endRevision) 1252 finally: 1253 outputFile.close() 1254 if not os.path.exists(backupPath): 1255 raise IOError("Dump file [%s] does not seem to exist after backup completed." % backupPath) 1256 changeOwnership(backupPath, config.options.backupUser, config.options.backupGroup) 1257 if collectMode == "incr": 1258 _writeLastRevision(config, revisionPath, endRevision) 1259 logger.info("Completed backing up Subversion repository [%s].", repository.repositoryPath)
    1260
def _getOutputFile(backupPath, compressMode):
   """
   Opens the output file used for saving the Subversion dump.

   If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
   compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
   return an object from the normal C{open()} method.

   @param backupPath: Path to file to open.
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

   @return: Output file object.
   """
   if compressMode == "gzip":
      return GzipFile(backupPath, "w")
   elif compressMode == "bzip2":
      return BZ2File(backupPath, "w")
   else:
      return open(backupPath, "w")

def _loadLastRevision(revisionPath):
   """
   Loads the indicated revision file from disk into an integer.

   If we can't load the revision file successfully (either because it doesn't
   exist or for some other reason), then a revision of -1 will be returned -
   but the condition will be logged.  This way, we err on the side of backing
   up too much, because anyone using this will presumably be adding 1 to the
   revision, so they don't duplicate any backups.

   @param revisionPath: Path to the revision file on disk.

   @return: Integer representing last backed-up revision, -1 on error or if none can be read.
   """
   if not os.path.isfile(revisionPath):
      startRevision = -1
      logger.debug("Revision file [%s] does not exist on disk.", revisionPath)
   else:
      try:
         startRevision = pickle.load(open(revisionPath, "r"))
         logger.debug("Loaded revision file [%s] from disk: %d.", revisionPath, startRevision)
      except:
         startRevision = -1
         logger.error("Failed loading revision file [%s] from disk.", revisionPath)
   return startRevision

def _writeLastRevision(config, revisionPath, endRevision):
   """
   Writes the end revision to the indicated revision file on disk.

   If we can't write the revision file successfully for any reason, we'll log
   the condition but won't throw an exception.

   @param config: Config object.
   @param revisionPath: Path to the revision file on disk.
   @param endRevision: Last revision backed up on this run.
   """
   try:
      pickle.dump(endRevision, open(revisionPath, "w"))
      changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Wrote new revision file [%s] to disk: %d.", revisionPath, endRevision)
   except:
      logger.error("Failed to write revision file [%s] to disk.", revisionPath)

##############################
# backupRepository() function
##############################

def backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
   """
   Backs up an individual Subversion repository.

   The starting and ending revision values control an incremental backup.  If
   the starting revision is not passed in, then revision zero (the start of the
   repository) is assumed.  If the ending revision is not passed in, then the
   youngest revision in the database will be used as the endpoint.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open}, but it is possible to use
   something like a C{GzipFile} to write compressed output.  The caller is
   responsible for closing the passed-in backup file.

   @note: This function should either be run as root or as the owner of the
   Subversion repository.

   @note: It is apparently I{not} a good idea to interrupt this function.
   Sometimes, this leaves the repository in a "wedged" state, which requires
   recovery using C{svnadmin recover}.

   @param repositoryPath: Path to Subversion repository to back up.
   @type repositoryPath: String path representing Subversion repository on disk.

   @param backupFile: Python file object to use for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param startRevision: Starting repository revision to back up (for incremental backups).
   @type startRevision: Integer value >= 0.

   @param endRevision: Ending repository revision to back up (for incremental backups).
   @type endRevision: Integer value >= 0.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the Subversion dump.
   """
   if startRevision is None:
      startRevision = 0
   if endRevision is None:
      endRevision = getYoungestRevision(repositoryPath)
   if int(startRevision) < 0:
      raise ValueError("Start revision must be >= 0.")
   if int(endRevision) < 0:
      raise ValueError("End revision must be >= 0.")
   if startRevision > endRevision:
      raise ValueError("Start revision must be <= end revision.")
   args = [ "dump", "--quiet", "-r%s:%s" % (startRevision, endRevision), "--incremental", repositoryPath, ]
   command = resolveCommand(SVNADMIN_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      raise IOError("Error [%d] executing Subversion dump for repository [%s]." % (result, repositoryPath))
   logger.debug("Completed dumping subversion repository [%s].", repositoryPath)

#################################
# getYoungestRevision() function
#################################

def getYoungestRevision(repositoryPath):
   """
   Gets the youngest (newest) revision in a Subversion repository using C{svnlook}.

   @note: This function should either be run as root or as the owner of the
   Subversion repository.

   @param repositoryPath: Path to Subversion repository to look in.
   @type repositoryPath: String path representing Subversion repository on disk.

   @return: Youngest revision as an integer.

   @raise ValueError: If there is a problem parsing the C{svnlook} output.
   @raise IOError: If there is a problem executing the C{svnlook} command.
   """
   args = [ 'youngest', repositoryPath, ]
   command = resolveCommand(SVNLOOK_COMMAND)
   (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
   if result != 0:
      raise IOError("Error [%d] executing 'svnlook youngest' for repository [%s]." % (result, repositoryPath))
   if len(output) != 1:
      raise ValueError("Unable to parse 'svnlook youngest' output.")
   return int(output[0])

########################################################################
# Deprecated functionality
########################################################################

class BDBRepository(Repository):

   """
   Class representing Subversion BDB (Berkeley Database) repository configuration.
   This object is deprecated.  Use a simple L{Repository} instead.
   """

   def __init__(self, repositoryPath=None, collectMode=None, compressMode=None):
      """
      Constructor for the C{BDBRepository} class.
      """
      super(BDBRepository, self).__init__("BDB", repositoryPath, collectMode, compressMode)

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "BDBRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode)


class FSFSRepository(Repository):

   """
   Class representing Subversion FSFS repository configuration.
   This object is deprecated.  Use a simple L{Repository} instead.
   """

   def __init__(self, repositoryPath=None, collectMode=None, compressMode=None):
      """
      Constructor for the C{FSFSRepository} class.
      """
      super(FSFSRepository, self).__init__("FSFS", repositoryPath, collectMode, compressMode)

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "FSFSRepository(%s, %s, %s)" % (self.repositoryPath, self.collectMode, self.compressMode)


def backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
   """
   Backs up an individual Subversion BDB repository.
   This function is deprecated.  Use L{backupRepository} instead.
   """
   return backupRepository(repositoryPath, backupFile, startRevision, endRevision)


def backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None):
   """
   Backs up an individual Subversion FSFS repository.
   This function is deprecated.  Use L{backupRepository} instead.
   """
   return backupRepository(repositoryPath, backupFile, startRevision, endRevision)
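The incremental logic above hinges on the revision file that `_loadLastRevision()` and `_writeLastRevision()` manage between runs. A minimal standalone sketch of that round trip (illustrative names; the real functions also handle file ownership and logging):

```python
import os
import pickle

def loadLastRevision(revisionPath):
    """Return the last backed-up revision, or -1 if the file can't be read."""
    if not os.path.isfile(revisionPath):
        return -1
    try:
        with open(revisionPath, "rb") as f:
            return pickle.load(f)
    except Exception:
        return -1  # err on the side of backing up too much

def writeLastRevision(revisionPath, endRevision):
    """Persist the newest dumped revision; failures are non-fatal."""
    try:
        with open(revisionPath, "wb") as f:
            pickle.dump(endRevision, f)
    except Exception:
        pass  # the real code logs this instead of raising
```

The next incremental run then starts at `loadLastRevision(path) + 1`, and is skipped entirely when that exceeds the repository's youngest revision.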

CedarBackup2-2.26.5/doc/interface/CedarBackup2.xmlutil-module.html: CedarBackup2.xmlutil
    Package CedarBackup2 :: Module xmlutil

    Module xmlutil


    Provides general XML-related functionality.

    What I'm trying to do here is abstract much of the functionality that directly accesses the DOM tree. This is not so much to "protect" the other code from the DOM, but to standardize the way it's used. It will also help extension authors write code that easily looks more like the rest of Cedar Backup.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes

    Serializer
       XML serializer class.

Functions

    createInputDom(xmlData, name='cb_config')
       Creates a DOM tree based on reading an XML string.

    createOutputDom(name='cb_config')
       Creates a DOM tree used for writing an XML document.

    serializeDom(xmlDom, indent=3)
       Serializes a DOM tree and returns the result in a string.

    isElement(node)
       Returns True or False depending on whether the XML node is an element node.

    readChildren(parent, name)
       Returns a list of nodes with a given name immediately beneath the parent.

    readFirstChild(parent, name)
       Returns the first child with a given name immediately beneath the parent.

    readStringList(parent, name)
       Returns a list of the string contents associated with nodes with a given name immediately beneath the parent.

    readString(parent, name)
       Returns string contents of the first child with a given name immediately beneath the parent.

    readInteger(parent, name)
       Returns integer contents of the first child with a given name immediately beneath the parent.

    readBoolean(parent, name)
       Returns boolean contents of the first child with a given name immediately beneath the parent.

    addContainerNode(xmlDom, parentNode, nodeName)
       Adds a container node as the next child of a parent node.

    addStringNode(xmlDom, parentNode, nodeName, nodeValue)
       Adds a text node as the next child of a parent, to contain a string.

    addIntegerNode(xmlDom, parentNode, nodeName, nodeValue)
       Adds a text node as the next child of a parent, to contain an integer.

    addBooleanNode(xmlDom, parentNode, nodeName, nodeValue)
       Adds a text node as the next child of a parent, to contain a boolean.

    readLong(parent, name)
       Returns long integer contents of the first child with a given name immediately beneath the parent.

    readFloat(parent, name)
       Returns float contents of the first child with a given name immediately beneath the parent.

    addLongNode(xmlDom, parentNode, nodeName, nodeValue)
       Adds a text node as the next child of a parent, to contain a long integer.

    _encodeText(text, encoding)

    _translateCDATAAttr(characters)
       Handles normalization and some intelligence about quoting.

    _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0)

Variables

    TRUE_BOOLEAN_VALUES = ['Y', 'y']
       List of boolean values in XML representing True.

    FALSE_BOOLEAN_VALUES = ['N', 'n']
       List of boolean values in XML representing False.

    VALID_BOOLEAN_VALUES = ['Y', 'y', 'N', 'n']
       List of valid boolean values in XML.

    logger = logging.getLogger("CedarBackup2.log.xml")

    __package__ = 'CedarBackup2'

Function Details

    createInputDom(xmlData, name='cb_config')


    Creates a DOM tree based on reading an XML string.

    Parameters:
    • name - Assumed base name of the document (root node name).
    Returns:
    Tuple (xmlDom, parentNode) for the parsed document
    Raises:
    • ValueError - If the document can't be parsed.

    createOutputDom(name='cb_config')


    Creates a DOM tree used for writing an XML document.

    Parameters:
    • name - Base name of the document (root node name).
    Returns:
    Tuple (xmlDom, parentNode) for the new document

    serializeDom(xmlDom, indent=3)


    Serializes a DOM tree and returns the result in a string.

    Parameters:
    • xmlDom - XML DOM tree to serialize
    • indent - Number of spaces to indent, as an integer
    Returns:
    String form of DOM tree, pretty-printed.
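As a concrete illustration of the create/serialize pair, here is the equivalent flow written directly against `xml.dom.minidom`, which these helpers wrap. `toprettyxml` stands in for `serializeDom`, so the exact whitespace will differ from the module's own output:

```python
from xml.dom.minidom import getDOMImplementation

# Equivalent of createOutputDom("cb_config"): a new document plus its root node.
impl = getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
parentNode = xmlDom.documentElement

# Add one child element containing a text node.
node = xmlDom.createElement("starting_day")
node.appendChild(xmlDom.createTextNode("monday"))
parentNode.appendChild(node)

# Stand-in for serializeDom(xmlDom, indent=3): pretty-print the whole tree.
xmlData = xmlDom.toprettyxml(indent=" " * 3)
```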

    readChildren(parent, name)


    Returns a list of nodes with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Underneath, we use the Python getElementsByTagName method, which is pretty cool, but which (surprisingly?) returns a list of all children with a given name below the parent, at any level. We just prune that list to include only children whose parentNode matches the passed-in parent.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of nodes to search for.
    Returns:
    List of child nodes with correct parent, or an empty list if no matching nodes are found.
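The pruning described above can be sketched with `xml.dom.minidom` (a standalone recreation for illustration, not the module's exact source):

```python
from xml.dom.minidom import parseString

def readChildren(parent, name):
    """Direct children only: getElementsByTagName() matches at any depth,
    so prune the result down to nodes whose parentNode is the parent."""
    return [node for node in parent.getElementsByTagName(name)
            if node.parentNode is parent]

dom = parseString("<root><dir>a</dir><sub><dir>b</dir></sub></root>")
root = dom.documentElement
deep = root.getElementsByTagName("dir")   # both <dir> elements, nested included
direct = readChildren(root, "dir")        # only the top-level <dir>
```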

    readFirstChild(parent, name)


    Returns the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    First properly-named child of parent, or None if no matching nodes are found.

    readStringList(parent, name)


    Returns a list of the string contents associated with nodes with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    First, we find all of the nodes using readChildren, and then we retrieve the "string contents" of each of those nodes. The returned list has one entry per matching node. We assume that string contents of a given node belong to the first TEXT_NODE child of that node. Nodes which have no TEXT_NODE children are not represented in the returned list.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    List of strings as described above, or None if no matching nodes are found.

    readString(parent, name)


    Returns string contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node. We assume that string contents of a given node belong to the first TEXT_NODE child of that node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    String contents of node or None if no matching nodes are found.
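The "first TEXT_NODE child" convention can be sketched like this (a standalone recreation for illustration):

```python
from xml.dom.minidom import parseString

def readString(parent, name):
    """String contents of the first direct child named `name`, or None."""
    for node in parent.getElementsByTagName(name):
        if node.parentNode is parent:            # direct children only
            for child in node.childNodes:
                if child.nodeType == child.TEXT_NODE:
                    return child.nodeValue
            return None                          # matched, but element is empty
    return None

dom = parseString("<cb_config><mode>incr</mode><empty/></cb_config>")
root = dom.documentElement
```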

    readInteger(parent, name)


    Returns integer contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Integer contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to an integer.

    readBoolean(parent, name)


    Returns boolean contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    The string value of the node must be one of the values in VALID_BOOLEAN_VALUES.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Boolean contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to a boolean.
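The conversion step inside readBoolean amounts to checking the value lists shown under Variables; a sketch of that mapping (the helper name here is illustrative, not from the module):

```python
TRUE_BOOLEAN_VALUES = ['Y', 'y']
FALSE_BOOLEAN_VALUES = ['N', 'n']

def convertBoolean(value):
    """Map a node's string contents onto True/False, or None for no value."""
    if value is None:
        return None
    if value in TRUE_BOOLEAN_VALUES:
        return True
    if value in FALSE_BOOLEAN_VALUES:
        return False
    raise ValueError("Boolean values must be one of %s." %
                     (TRUE_BOOLEAN_VALUES + FALSE_BOOLEAN_VALUES))
```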

    addContainerNode(xmlDom, parentNode, nodeName)


    Adds a container node as the next child of a parent node.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    Returns:
    Reference to the newly-created node.

    addStringNode(xmlDom, parentNode, nodeName, nodeValue)


    Adds a text node as the next child of a parent, to contain a string.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    addIntegerNode(xmlDom, parentNode, nodeName, nodeValue)


    Adds a text node as the next child of a parent, to contain an integer.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The integer will be converted to a string using "%d". The result will be added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    addBooleanNode(xmlDom, parentNode, nodeName, nodeValue)


    Adds a text node as the next child of a parent, to contain a boolean.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    Boolean True, or anything else interpreted as True by Python, will be converted to a string "Y". Anything else will be converted to a string "N". The result is added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.
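The string/boolean layering described above can be sketched with `xml.dom.minidom` (a standalone recreation for illustration):

```python
from xml.dom.minidom import getDOMImplementation

def addStringNode(xmlDom, parentNode, nodeName, nodeValue):
    """Append a named element; a None value yields an empty element."""
    node = xmlDom.createElement(nodeName)
    parentNode.appendChild(node)
    if nodeValue is not None:
        node.appendChild(xmlDom.createTextNode(nodeValue))
    return node

def addBooleanNode(xmlDom, parentNode, nodeName, nodeValue):
    """Truthy values become "Y", everything else "N", via addStringNode."""
    if nodeValue is None:
        return addStringNode(xmlDom, parentNode, nodeName, None)
    return addStringNode(xmlDom, parentNode, nodeName, "Y" if nodeValue else "N")

impl = getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
root = xmlDom.documentElement
addBooleanNode(xmlDom, root, "collect_flag", True)
addBooleanNode(xmlDom, root, "no_value", None)
```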

    readLong(parent, name)


    Returns long integer contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Long integer contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to an integer.

    readFloat(parent, name)


    Returns float contents of the first child with a given name immediately beneath the parent.

    By "immediately beneath" the parent, we mean from among nodes that are direct children of the passed-in parent node.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Float contents of node or None if no matching nodes are found.
    Raises:
    • ValueError - If the string at the location can't be converted to a float value.

    addLongNode(xmlDom, parentNode, nodeName, nodeValue)


    Adds a text node as the next child of a parent, to contain a long integer.

    If the nodeValue is None, then the node will be created, but will be empty (i.e. will contain no text node child).

    The integer will be converted to a string using "%d". The result will be added to the document via addStringNode.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • nodeValue - The value to put into the node.
    Returns:
    Reference to the newly-created node.

    _encodeText(text, encoding)


Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was attributed to Martin v. Löwis and was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    _translateCDATAAttr(characters)


    Handles normalization and some intelligence about quoting.

Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.

    _translateCDATA(characters, encoding='UTF-8', prev_chars='', markupSafe=0)


Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.


CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.dvdwriter._ImageProperties-class.html: CedarBackup2.writers.dvdwriter._ImageProperties
    Package CedarBackup2 :: Package writers :: Module dvdwriter :: Class _ImageProperties

    Class _ImageProperties


    object --+
             |
            _ImageProperties
    

    Simple value object to hold image properties for DvdWriter.

Instance Methods
     
    __init__(self)
    x.__init__(...) initializes x; see help(type(x)) for signature

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self)
    (Constructor)


    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.filesystem-module.html: filesystem

    Module filesystem


    Classes

    BackupFileList
    FilesystemList
    PurgeItemList
    SpanItem

    Functions

    compareContents
    compareDigestMaps
    normalizeDir

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.customize-module.html: CedarBackup2.customize
    Package CedarBackup2 :: Module customize

    Module customize


    Implements customized behavior.

    Some behaviors need to vary when packaged for certain platforms. For instance, while Cedar Backup generally uses cdrecord and mkisofs, Debian ships compatible utilities called wodim and genisoimage. I want there to be one single place where Cedar Backup is patched for Debian, rather than having to maintain a variety of patches in different places.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions
     
    customizeOverrides(config, platform='standard')
    Modify command overrides based on the configured platform.
Variables
      logger = logging.getLogger("CedarBackup2.log.customize")
      PLATFORM = 'standard'
      DEBIAN_CDRECORD = '/usr/bin/wodim'
      DEBIAN_MKISOFS = '/usr/bin/genisoimage'
      __package__ = 'CedarBackup2'
Function Details

    customizeOverrides(config, platform='standard')


    Modify command overrides based on the configured platform.

    On some platforms, we want to add command overrides to configuration. Each override will only be added if the configuration does not already contain an override with the same name. That way, the user still has a way to choose their own version of the command if they want.

    Parameters:
    • config - Configuration to modify
    • platform - Platform that is in use

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.validate-pysrc.html: CedarBackup2.actions.validate
    Package CedarBackup2 :: Package actions :: Module validate

    Source Code for Module CedarBackup2.actions.validate

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2007,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Implements the standard 'validate' action.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #


########################################################################
# Module documentation
########################################################################

"""
Implements the standard 'validate' action.
@sort: executeValidate
@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""


########################################################################
# Imported modules
########################################################################

# System modules
import os
import logging

# Cedar Backup modules
from CedarBackup2.util import getUidGid, getFunctionReference
from CedarBackup2.actions.util import createWriter


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.actions.validate")


########################################################################
# Public functions
########################################################################

#############################
# executeValidate() function
#############################
    
def executeValidate(configPath, options, config):
   """
   Executes the validate action.

   This action validates each of the individual sections in the config file.
   This is a "runtime" validation.  The config file itself is already valid in
   a structural sense, so what we check here is that we can actually use the
   configuration without any problems.

   There's a separate validation function for each of the configuration
   sections.  Each validation function returns a true/false indication for
   whether configuration was valid, and then logs any configuration problems it
   finds.  This way, one pass over configuration indicates most or all of the
   obvious problems, rather than finding just one problem at a time.

   Any reported problems will be logged at the ERROR level normally, or at the
   INFO level if the quiet flag is enabled.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.

   @raise ValueError: If some configuration value is invalid.
   """
   logger.debug("Executing the 'validate' action.")
   if options.quiet:
      logfunc = logger.info   # info so it goes to the log
   else:
      logfunc = logger.error  # error so it goes to the screen
   valid = True
   valid &= _validateReference(config, logfunc)
   valid &= _validateOptions(config, logfunc)
   valid &= _validateCollect(config, logfunc)
   valid &= _validateStage(config, logfunc)
   valid &= _validateStore(config, logfunc)
   valid &= _validatePurge(config, logfunc)
   valid &= _validateExtensions(config, logfunc)
   if valid:
      logfunc("Configuration is valid.")
   else:
      logfunc("Configuration is not valid.")


########################################################################
# Private utility functions
########################################################################

#######################
# _checkDir() function
#######################
    129 -def _checkDir(path, writable, logfunc, prefix):
    130 """ 131 Checks that the indicated directory is OK. 132 133 The path must exist, must be a directory, must be readable and executable, 134 and must optionally be writable. 135 136 @param path: Path to check. 137 @param writable: Check that path is writable. 138 @param logfunc: Function to use for logging errors. 139 @param prefix: Prefix to use on logged errors. 140 141 @return: True if the directory is OK, False otherwise. 142 """ 143 if not os.path.exists(path): 144 logfunc("%s [%s] does not exist." % (prefix, path)) 145 return False 146 if not os.path.isdir(path): 147 logfunc("%s [%s] is not a directory." % (prefix, path)) 148 return False 149 if not os.access(path, os.R_OK): 150 logfunc("%s [%s] is not readable." % (prefix, path)) 151 return False 152 if not os.access(path, os.X_OK): 153 logfunc("%s [%s] is not executable." % (prefix, path)) 154 return False 155 if writable and not os.access(path, os.W_OK): 156 logfunc("%s [%s] is not writable." % (prefix, path)) 157 return False 158 return True
    159 160 161 ################################ 162 # _validateReference() function 163 ################################ 164
    165 -def _validateReference(config, logfunc):
    166 """ 167 Execute runtime validations on reference configuration. 168 169 We only validate that reference configuration exists at all. 170 171 @param config: Program configuration. 172 @param logfunc: Function to use for logging errors 173 174 @return: True if configuration is valid, false otherwise. 175 """ 176 valid = True 177 if config.reference is None: 178 logfunc("Required reference configuration does not exist.") 179 valid = False 180 return valid
    181 182 183 ############################## 184 # _validateOptions() function 185 ############################## 186
    187 -def _validateOptions(config, logfunc):
    188 """ 189 Execute runtime validations on options configuration. 190 191 The following validations are enforced: 192 193 - The options section must exist 194 - The working directory must exist and must be writable 195 - The backup user and backup group must exist 196 197 @param config: Program configuration. 198 @param logfunc: Function to use for logging errors 199 200 @return: True if configuration is valid, false otherwise. 201 """ 202 valid = True 203 if config.options is None: 204 logfunc("Required options configuration does not exist.") 205 valid = False 206 else: 207 valid &= _checkDir(config.options.workingDir, True, logfunc, "Working directory") 208 try: 209 getUidGid(config.options.backupUser, config.options.backupGroup) 210 except ValueError: 211 logfunc("Backup user:group [%s:%s] invalid." % (config.options.backupUser, config.options.backupGroup)) 212 valid = False 213 return valid
    214 215 216 ############################## 217 # _validateCollect() function 218 ############################## 219
    220 -def _validateCollect(config, logfunc):
    221 """ 222 Execute runtime validations on collect configuration. 223 224 The following validations are enforced: 225 226 - The target directory must exist and must be writable 227 - Each of the individual collect directories must exist and must be readable 228 229 @param config: Program configuration. 230 @param logfunc: Function to use for logging errors 231 232 @return: True if configuration is valid, false otherwise. 233 """ 234 valid = True 235 if config.collect is not None: 236 valid &= _checkDir(config.collect.targetDir, True, logfunc, "Collect target directory") 237 if config.collect.collectDirs is not None: 238 for collectDir in config.collect.collectDirs: 239 valid &= _checkDir(collectDir.absolutePath, False, logfunc, "Collect directory") 240 return valid
    241 242 243 ############################ 244 # _validateStage() function 245 ############################ 246
    247 -def _validateStage(config, logfunc):
    248 """ 249 Execute runtime validations on stage configuration. 250 251 The following validations are enforced: 252 253 - The target directory must exist and must be writable 254 - Each local peer's collect directory must exist and must be readable 255 256 @note: We currently do not validate anything having to do with remote peers, 257 since we don't have a straightforward way of doing it. It would require 258 adding an rsh command rather than just an rcp command to configuration, and 259 that just doesn't seem worth it right now. 260 261 @param config: Program configuration. 262 @param logfunc: Function to use for logging errors 263 264 @return: True if configuration is valid, False otherwise. 265 """ 266 valid = True 267 if config.stage is not None: 268 valid &= _checkDir(config.stage.targetDir, True, logfunc, "Stage target dir ") 269 if config.stage.localPeers is not None: 270 for peer in config.stage.localPeers: 271 valid &= _checkDir(peer.collectDir, False, logfunc, "Local peer collect dir ") 272 return valid
    273 274 275 ############################ 276 # _validateStore() function 277 ############################ 278
    279 -def _validateStore(config, logfunc):
    280 """ 281 Execute runtime validations on store configuration. 282 283 The following validations are enforced: 284 285 - The source directory must exist and must be readable 286 - The backup device (path and SCSI device) must be valid 287 288 @param config: Program configuration. 289 @param logfunc: Function to use for logging errors 290 291 @return: True if configuration is valid, False otherwise. 292 """ 293 valid = True 294 if config.store is not None: 295 valid &= _checkDir(config.store.sourceDir, False, logfunc, "Store source directory") 296 try: 297 createWriter(config) 298 except ValueError: 299 logfunc("Backup device [%s] [%s] is not valid." % (config.store.devicePath, config.store.deviceScsiId)) 300 valid = False 301 return valid
    302 303 304 ############################ 305 # _validatePurge() function 306 ############################ 307
    308 -def _validatePurge(config, logfunc):
    309 """ 310 Execute runtime validations on purge configuration. 311 312 The following validations are enforced: 313 314 - Each purge directory must exist and must be writable 315 316 @param config: Program configuration. 317 @param logfunc: Function to use for logging errors 318 319 @return: True if configuration is valid, False otherwise. 320 """ 321 valid = True 322 if config.purge is not None: 323 if config.purge.purgeDirs is not None: 324 for purgeDir in config.purge.purgeDirs: 325 valid &= _checkDir(purgeDir.absolutePath, True, logfunc, "Purge directory") 326 return valid
    327 328 329 ################################# 330 # _validateExtensions() function 331 ################################# 332
    333 -def _validateExtensions(config, logfunc):
    334 """ 335 Execute runtime validations on extensions configuration. 336 337 The following validations are enforced: 338 339 - Each indicated extension function must exist. 340 341 @param config: Program configuration. 342 @param logfunc: Function to use for logging errors 343 344 @return: True if configuration is valid, False otherwise. 345 """ 346 valid = True 347 if config.extensions is not None: 348 if config.extensions.actions is not None: 349 for action in config.extensions.actions: 350 try: 351 getFunctionReference(action.module, action.function) 352 except ImportError: 353 logfunc("Unable to find function [%s.%s]." % (action.module, action.function)) 354 valid = False 355 except ValueError: 356 logfunc("Function [%s.%s] is not callable." % (action.module, action.function)) 357 valid = False 358 return valid
    359
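The accumulation idiom used throughout executeValidate() -- combining each check's result with `&=` so that later checks still run after an earlier failure, and one pass reports every problem -- can be sketched in isolation. This is a minimal illustration with hypothetical check and field names, not code from Cedar Backup:

```python
def check_nonempty(value, name, logfunc):
   """Return True if value is a non-empty string; log a problem otherwise."""
   if not value:
      logfunc("%s must be a non-empty string." % name)
      return False
   return True

def validate(config, logfunc):
   """Run every check, accumulating results so all problems are reported."""
   valid = True
   # '&=' keeps running later checks even after a failure, so a single
   # pass surfaces every problem rather than stopping at the first one.
   valid &= check_nonempty(config.get("bucket"), "bucket", logfunc)
   valid &= check_nonempty(config.get("user"), "user", logfunc)
   valid &= check_nonempty(config.get("group"), "group", logfunc)
   return valid
```

Passing a logging callable (like the `logfunc` chosen from `logger.info` or `logger.error` above) keeps the checks themselves independent of where messages end up.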

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.amazons3-pysrc.html

    Source Code for Module CedarBackup2.extend.amazons3

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2014-2015 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : "Store" type extension that writes data to Amazon S3. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Store-type extension that writes data to Amazon S3. 
     40   
     41  This extension requires a new configuration section <amazons3> and is intended 
     42  to be run immediately after the standard stage action, replacing the standard 
     43  store action.  Aside from its own configuration, it requires the options and 
     44  staging configuration sections in the standard Cedar Backup configuration file. 
     45  Since it is intended to replace the store action, it does not rely on any store 
     46  configuration. 
     47   
     48  The underlying functionality relies on the U{AWS CLI interface 
     49  <http://aws.amazon.com/documentation/cli/>}.  Before you use this extension, 
     50  you need to set up your Amazon S3 account and configure the AWS CLI connection 
     51  per Amazon's documentation.  The extension assumes that the backup is being 
     52  executed as root, and switches over to the configured backup user to 
     53  communicate with AWS.  So, make sure you configure AWS CLI as the backup user 
     54  and not root. 
     55   
     56  You can optionally configure Cedar Backup to encrypt data before sending it 
     57  to S3.  To do that, provide a complete command line using the C{${input}} and 
     58  C{${output}} variables to represent the original input file and the encrypted 
     59  output file.  This command will be executed as the backup user. 
     60   
     61  For instance, you can use something like this with GPG:: 
     62   
     63     /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input} 
     64   
     65  The GPG mechanism depends on a strong passphrase for security.  One way to 
     66  generate a strong passphrase is using your system random number generator, i.e.:: 
     67   
     68     dd if=/dev/urandom count=20 bs=1 | xxd -ps 
     69   
     70  (See U{StackExchange <http://security.stackexchange.com/questions/14867/gpg-encryption-security>} 
     71  for more details about that advice.) If you decide to use encryption, make sure 
     72  you save off the passphrase in a safe place, so you can get at your backup data 
     73  later if you need to.  And obviously, make sure to set permissions on the 
     74  passphrase file so it can only be read by the backup user. 
     75   
     76  This extension was written for and tested on Linux.  It will throw an exception 
     77  if run on Windows. 
     78   
     79  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     80  """ 
     81   
     82  ######################################################################## 
     83  # Imported modules 
     84  ######################################################################## 
     85   
     86  # System modules 
     87  import sys 
     88  import os 
     89  import logging 
     90  import tempfile 
     91  import datetime 
     92  import json 
     93  import shutil 
     94   
     95  # Cedar Backup modules 
     96  from CedarBackup2.filesystem import FilesystemList, BackupFileList 
     97  from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot, changeOwnership, isStartOfWeek 
     98  from CedarBackup2.util import displayBytes, UNIT_BYTES 
     99  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addBooleanNode, addStringNode 
    100  from CedarBackup2.xmlutil import readFirstChild, readString, readBoolean 
    101  from CedarBackup2.actions.util import writeIndicatorFile 
    102  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
    103  from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode 
    104   
    105   
    106  ######################################################################## 
    107  # Module-wide constants and variables 
    108  ######################################################################## 
    109   
    110  logger = logging.getLogger("CedarBackup2.log.extend.amazons3") 
    111   
    112  SU_COMMAND    = [ "su" ] 
    113  AWS_COMMAND   = [ "aws" ] 
    114   
    115  STORE_INDICATOR = "cback.amazons3" 
    
116
117
118  ########################################################################
119  # AmazonS3Config class definition
120  ########################################################################
121
122  class AmazonS3Config(object):
123
124     """
125     Class representing Amazon S3 configuration.
126
127     Amazon S3 configuration is used for storing backup data in Amazon's S3 cloud
128     storage using the AWS CLI tool.
129
130     The following restrictions exist on data in this class:
131
132        - The s3Bucket value must be a non-empty string
133        - The encryptCommand value, if set, must be a non-empty string
134        - The full backup size limit, if set, must be a ByteQuantity >= 0
135        - The incremental backup size limit, if set, must be a ByteQuantity >= 0
136
137     @sort: __init__, __repr__, __str__, __cmp__, warnMidnite, s3Bucket
138     """
139
140     def __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None,
141                  fullBackupSizeLimit=None, incrementalBackupSizeLimit=None):
142        """
143        Constructor for the C{AmazonS3Config} class.
144
145        @param warnMidnite: Whether to generate warnings for crossing midnite.
146        @param s3Bucket: Name of the Amazon S3 bucket in which to store the data
147        @param encryptCommand: Command used to encrypt backup data before upload to S3
148        @param fullBackupSizeLimit: Maximum size of a full backup, a ByteQuantity
149        @param incrementalBackupSizeLimit: Maximum size of an incremental backup, a ByteQuantity
150
151        @raise ValueError: If one of the values is invalid.
152        """
153        self._warnMidnite = None
154        self._s3Bucket = None
155        self._encryptCommand = None
156        self._fullBackupSizeLimit = None
157        self._incrementalBackupSizeLimit = None
158        self.warnMidnite = warnMidnite
159        self.s3Bucket = s3Bucket
160        self.encryptCommand = encryptCommand
161        self.fullBackupSizeLimit = fullBackupSizeLimit
162        self.incrementalBackupSizeLimit = incrementalBackupSizeLimit
163
164     def __repr__(self):
165        """
166        Official string representation for class instance.
167        """
168        return "AmazonS3Config(%s, %s, %s, %s, %s)" % (self.warnMidnite, self.s3Bucket, self.encryptCommand,
169                                                       self.fullBackupSizeLimit, self.incrementalBackupSizeLimit)
170
171     def __str__(self):
172        """
173        Informal string representation for class instance.
174        """
175        return self.__repr__()
176
177     def __cmp__(self, other):
178        """
179        Definition of equals operator for this class.
180        @param other: Other object to compare to.
181        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
182        """
183        if other is None:
184           return 1
185        if self.warnMidnite != other.warnMidnite:
186           if self.warnMidnite < other.warnMidnite:
187              return -1
188           else:
189              return 1
190        if self.s3Bucket != other.s3Bucket:
191           if self.s3Bucket < other.s3Bucket:
192              return -1
193           else:
194              return 1
195        if self.encryptCommand != other.encryptCommand:
196           if self.encryptCommand < other.encryptCommand:
197              return -1
198           else:
199              return 1
200        if self.fullBackupSizeLimit != other.fullBackupSizeLimit:
201           if self.fullBackupSizeLimit < other.fullBackupSizeLimit:
202              return -1
203           else:
204              return 1
205        if self.incrementalBackupSizeLimit != other.incrementalBackupSizeLimit:
206           if self.incrementalBackupSizeLimit < other.incrementalBackupSizeLimit:
207              return -1
208           else:
209              return 1
210        return 0
211
212     def _setWarnMidnite(self, value):
213        """
214        Property target used to set the midnite warning flag.
215        No validations, but we normalize the value to C{True} or C{False}.
216        """
217        if value:
218           self._warnMidnite = True
219        else:
220           self._warnMidnite = False
221
222     def _getWarnMidnite(self):
223        """
224        Property target used to get the midnite warning flag.
225        """
226        return self._warnMidnite
227
228     def _setS3Bucket(self, value):
229        """
230        Property target used to set the S3 bucket.
231        """
232        if value is not None:
233           if len(value) < 1:
234              raise ValueError("S3 bucket must be non-empty string.")
235        self._s3Bucket = value
236
237     def _getS3Bucket(self):
238        """
239        Property target used to get the S3 bucket.
240        """
241        return self._s3Bucket
242
243     def _setEncryptCommand(self, value):
244        """
245        Property target used to set the encrypt command.
246        """
247        if value is not None:
248           if len(value) < 1:
249              raise ValueError("Encrypt command must be non-empty string.")
250        self._encryptCommand = value
251
252     def _getEncryptCommand(self):
253        """
254        Property target used to get the encrypt command.
255        """
256        return self._encryptCommand
257
258     def _setFullBackupSizeLimit(self, value):
259        """
260        Property target used to set the full backup size limit.
261        The value must be a C{ByteQuantity} or a simple numeric value >= 0.
262        @raise ValueError: If the value is not valid.
263        """
264        if value is None:
265           self._fullBackupSizeLimit = None
266        else:
267           if isinstance(value, ByteQuantity):
268              self._fullBackupSizeLimit = value
269           else:
270              self._fullBackupSizeLimit = ByteQuantity(value, UNIT_BYTES)
271
272     def _getFullBackupSizeLimit(self):
273        """
274        Property target used to get the full backup size limit.
275        """
276        return self._fullBackupSizeLimit
277
278     def _setIncrementalBackupSizeLimit(self, value):
279        """
280        Property target used to set the incremental backup size limit.
281        The value must be a C{ByteQuantity} or a simple numeric value >= 0.
282        @raise ValueError: If the value is not valid.
283        """
284        if value is None:
285           self._incrementalBackupSizeLimit = None
286        else:
287           if isinstance(value, ByteQuantity):
288              self._incrementalBackupSizeLimit = value
289           else:
290              self._incrementalBackupSizeLimit = ByteQuantity(value, UNIT_BYTES)
291
292     def _getIncrementalBackupSizeLimit(self):
293        """
294        Property target used to get the incremental backup size limit.
295        """
296        return self._incrementalBackupSizeLimit
297
298     warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.")
299     s3Bucket = property(_getS3Bucket, _setS3Bucket, None, doc="Amazon S3 Bucket in which to store data")
300     encryptCommand = property(_getEncryptCommand, _setEncryptCommand, None, doc="Command used to encrypt data before upload to S3")
301     fullBackupSizeLimit = property(_getFullBackupSizeLimit, _setFullBackupSizeLimit, None,
302                                    doc="Maximum size of a full backup, as a ByteQuantity")
303     incrementalBackupSizeLimit = property(_getIncrementalBackupSizeLimit, _setIncrementalBackupSizeLimit, None,
304                                           doc="Maximum size of an incremental backup, as a ByteQuantity")
305
306
307  ########################################################################
308  # LocalConfig class definition
309  ########################################################################
310
311  class LocalConfig(object):
312
313     """
314     Class representing this extension's configuration document.
315
316     This is not a general-purpose configuration object like the main Cedar
317     Backup configuration object.  Instead, it just knows how to parse and emit
318     amazons3-specific configuration values.  Third parties who need to read and
319     write configuration related to this extension should access it through the
320     constructor, C{validate} and C{addConfig} methods.
321
322     @note: Lists within this class are "unordered" for equality comparisons.
323
324     @sort: __init__, __repr__, __str__, __cmp__, amazons3, validate, addConfig
325     """
326
327     def __init__(self, xmlData=None, xmlPath=None, validate=True):
328        """
329        Initializes a configuration object.
330
331        If you initialize the object without passing either C{xmlData} or
332        C{xmlPath} then configuration will be empty and will be invalid until it
333        is filled in properly.
334
335        No reference to the original XML data or original path is saved off by
336        this class.  Once the data has been parsed (successfully or not) this
337        original information is discarded.
338
339        Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
340        method will be called (with its default arguments) against configuration
341        after successfully parsing any passed-in XML.  Keep in mind that even if
342        C{validate} is C{False}, it might not be possible to parse the passed-in
343        XML document if lower-level validations fail.
344
345        @note: It is strongly suggested that the C{validate} option always be set
346        to C{True} (the default) unless there is a specific need to read in
347        invalid configuration from disk.
348
349        @param xmlData: XML data representing configuration.
350        @type xmlData: String data.
351
352        @param xmlPath: Path to an XML file on disk.
353        @type xmlPath: Absolute path to a file on disk.
354
355        @param validate: Validate the document after parsing it.
356        @type validate: Boolean true/false.
357
358        @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
359        @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
360        @raise ValueError: If the parsed configuration document is not valid.
361        """
362        self._amazons3 = None
363        self.amazons3 = None
364        if xmlData is not None and xmlPath is not None:
365           raise ValueError("Use either xmlData or xmlPath, but not both.")
366        if xmlData is not None:
367           self._parseXmlData(xmlData)
368           if validate:
369              self.validate()
370        elif xmlPath is not None:
371           xmlData = open(xmlPath).read()
372           self._parseXmlData(xmlData)
373           if validate:
374              self.validate()
375
376     def __repr__(self):
377        """
378        Official string representation for class instance.
379        """
380        return "LocalConfig(%s)" % (self.amazons3)
381
382     def __str__(self):
383        """
384        Informal string representation for class instance.
385        """
386        return self.__repr__()
387
388     def __cmp__(self, other):
389        """
390        Definition of equals operator for this class.
391        Lists within this class are "unordered" for equality comparisons.
392        @param other: Other object to compare to.
393        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
394        """
395        if other is None:
396           return 1
397        if self.amazons3 != other.amazons3:
398           if self.amazons3 < other.amazons3:
399              return -1
400           else:
401              return 1
402        return 0
403
404     def _setAmazonS3(self, value):
405        """
406        Property target used to set the amazons3 configuration value.
407        If not C{None}, the value must be a C{AmazonS3Config} object.
408        @raise ValueError: If the value is not a C{AmazonS3Config}
409        """
410        if value is None:
411           self._amazons3 = None
412        else:
413           if not isinstance(value, AmazonS3Config):
414              raise ValueError("Value must be a C{AmazonS3Config} object.")
415           self._amazons3 = value
416
417     def _getAmazonS3(self):
418        """
419        Property target used to get the amazons3 configuration value.
420        """
421        return self._amazons3
422
423     amazons3 = property(_getAmazonS3, _setAmazonS3, None, "AmazonS3 configuration in terms of a C{AmazonS3Config} object.")
424
425     def validate(self):
426        """
427        Validates configuration represented by the object.
428
429        AmazonS3 configuration must be filled in.  Within that, the s3Bucket target must be filled in.
430
431        @raise ValueError: If one of the validations fails.
432        """
433        if self.amazons3 is None:
434           raise ValueError("AmazonS3 section is required.")
435        if self.amazons3.s3Bucket is None:
436           raise ValueError("AmazonS3 s3Bucket must be set.")
437
438     def addConfig(self, xmlDom, parentNode):
439        """
440        Adds an <amazons3> configuration section as the next child of a parent.
441
442        Third parties should use this function to write configuration related to
443        this extension.
444
445        We add the following fields to the document::
446
447           warnMidnite                //cb_config/amazons3/warn_midnite
448           s3Bucket                   //cb_config/amazons3/s3_bucket
449           encryptCommand             //cb_config/amazons3/encrypt
450           fullBackupSizeLimit        //cb_config/amazons3/full_size_limit
451           incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit
452
453        @param xmlDom: DOM tree as from C{impl.createDocument()}.
454        @param parentNode: Parent that the section should be appended to.
455        """
456        if self.amazons3 is not None:
457           sectionNode = addContainerNode(xmlDom, parentNode, "amazons3")
458           addBooleanNode(xmlDom, sectionNode, "warn_midnite", self.amazons3.warnMidnite)
459           addStringNode(xmlDom, sectionNode, "s3_bucket", self.amazons3.s3Bucket)
460           addStringNode(xmlDom, sectionNode, "encrypt", self.amazons3.encryptCommand)
461           addByteQuantityNode(xmlDom, sectionNode, "full_size_limit", self.amazons3.fullBackupSizeLimit)
462           addByteQuantityNode(xmlDom, sectionNode, "incr_size_limit", self.amazons3.incrementalBackupSizeLimit)
463
464     def _parseXmlData(self, xmlData):
465        """
466        Internal method to parse an XML string into the object.
467
468        This method parses the XML document into a DOM tree (C{xmlDom}) and then
469        calls a static method to parse the amazons3 configuration section.
470
471        @param xmlData: XML data to be parsed
472        @type xmlData: String data
473
474        @raise ValueError: If the XML cannot be successfully parsed.
475        """
476        (xmlDom, parentNode) = createInputDom(xmlData)
477        self._amazons3 = LocalConfig._parseAmazonS3(parentNode)
478
479     @staticmethod
480     def _parseAmazonS3(parent):
481        """
482        Parses an amazons3 configuration section.
483
484        We read the following individual fields::
485
486           warnMidnite                //cb_config/amazons3/warn_midnite
487           s3Bucket                   //cb_config/amazons3/s3_bucket
488           encryptCommand             //cb_config/amazons3/encrypt
489           fullBackupSizeLimit        //cb_config/amazons3/full_size_limit
490           incrementalBackupSizeLimit //cb_config/amazons3/incr_size_limit
491
492        @param parent: Parent node to search beneath.
493
494        @return: C{AmazonS3Config} object or C{None} if the section does not exist.
495        @raise ValueError: If some filled-in value is invalid.
496        """
497        amazons3 = None
498        section = readFirstChild(parent, "amazons3")
499        if section is not None:
500           amazons3 = AmazonS3Config()
501           amazons3.warnMidnite = readBoolean(section, "warn_midnite")
502           amazons3.s3Bucket = readString(section, "s3_bucket")
503           amazons3.encryptCommand = readString(section, "encrypt")
504           amazons3.fullBackupSizeLimit = readByteQuantity(section, "full_size_limit")
505           amazons3.incrementalBackupSizeLimit = readByteQuantity(section, "incr_size_limit")
506        return amazons3
507
    508 509 ######################################################################## 510 # Public functions 511 ######################################################################## 512 513 ########################### 514 # executeAction() function 515 ########################### 516 517 -def executeAction(configPath, options, config):
    518 """ 519 Executes the amazons3 backup action. 520 521 @param configPath: Path to configuration file on disk. 522 @type configPath: String representing a path on disk. 523 524 @param options: Program command-line options. 525 @type options: Options object. 526 527 @param config: Program configuration. 528 @type config: Config object. 529 530 @raise ValueError: Under many generic error conditions 531 @raise IOError: If there are I/O problems reading or writing files 532 """ 533 logger.debug("Executing amazons3 extended action.") 534 if not isRunningAsRoot(): 535 logger.error("Error: the amazons3 extended action must be run as root.") 536 raise ValueError("The amazons3 extended action must be run as root.") 537 if sys.platform == "win32": 538 logger.error("Error: the amazons3 extended action is not supported on Windows.") 539 raise ValueError("The amazons3 extended action is not supported on Windows.") 540 if config.options is None or config.stage is None: 541 raise ValueError("Cedar Backup configuration is not properly filled in.") 542 local = LocalConfig(xmlPath=configPath) 543 stagingDirs = _findCorrectDailyDir(options, config, local) 544 _applySizeLimits(options, config, local, stagingDirs) 545 _writeToAmazonS3(config, local, stagingDirs) 546 _writeStoreIndicator(config, stagingDirs) 547 logger.info("Executed the amazons3 extended action successfully.")
    548
########################################################################
# Private utility functions
########################################################################

#########################
# _findCorrectDailyDir()
#########################

def _findCorrectDailyDir(options, config, local):
   """
   Finds the correct daily staging directory to be written to Amazon S3.

   This is substantially similar to the same function in store.py.  The
   main difference is that it doesn't rely on store configuration at all.

   @param options: Options object.
   @param config: Config object.
   @param local: Local config object.

   @return: Correct staging dir, as a dict mapping directory to date suffix.
   @raise IOError: If the staging directory cannot be found.
   """
   oneDay = datetime.timedelta(days=1)
   today = datetime.date.today()
   yesterday = today - oneDay
   tomorrow = today + oneDay
   todayDate = today.strftime(DIR_TIME_FORMAT)
   yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT)
   tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT)
   todayPath = os.path.join(config.stage.targetDir, todayDate)
   yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate)
   tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate)
   todayStageInd = os.path.join(todayPath, STAGE_INDICATOR)
   yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR)
   tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR)
   todayStoreInd = os.path.join(todayPath, STORE_INDICATOR)
   yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR)
   tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR)
   if options.full:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd):
         logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath)
         return { todayPath:todayDate }
      raise IOError("Unable to find staging directory to process (only tried today due to full option).")
   else:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd):
         logger.info("Amazon S3 process will use current day's staging directory [%s]", todayPath)
         return { todayPath:todayDate }
      elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd):
         logger.info("Amazon S3 process will use previous day's staging directory [%s]", yesterdayPath)
         if local.amazons3.warnMidnite:
            logger.warn("Warning: Amazon S3 process crossed midnite boundary to find data.")
         return { yesterdayPath:yesterdayDate }
      elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd):
         logger.info("Amazon S3 process will use next day's staging directory [%s]", tomorrowPath)
         if local.amazons3.warnMidnite:
            logger.warn("Warning: Amazon S3 process crossed midnite boundary to find data.")
         return { tomorrowPath:tomorrowDate }
      raise IOError("Unable to find unused staging directory to process (tried today, yesterday, tomorrow).")

##############################
# _applySizeLimits() function
##############################

def _applySizeLimits(options, config, local, stagingDirs):
   """
   Apply size limits, throwing an exception if any limits are exceeded.

   Size limits are optional.  If a limit is set to None, it does not apply.
   The full size limit applies if the full option is set or if today is the
   start of the week.  The incremental size limit applies otherwise.  Limits
   are applied to the total size of all the relevant staging directories.

   @param options: Options object.
   @param config: Config object.
   @param local: Local config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.

   @raise ValueError: Under many generic error conditions
   @raise ValueError: If a size limit has been exceeded
   """
   if options.full or isStartOfWeek(config.options.startingDay):
      logger.debug("Using Amazon S3 size limit for full backups.")
      limit = local.amazons3.fullBackupSizeLimit
   else:
      logger.debug("Using Amazon S3 size limit for incremental backups.")
      limit = local.amazons3.incrementalBackupSizeLimit
   if limit is None:
      logger.debug("No Amazon S3 size limit will be applied.")
   else:
      logger.debug("Amazon S3 size limit is: %s", limit)
      contents = BackupFileList()
      for stagingDir in stagingDirs:
         contents.addDirContents(stagingDir)
      total = contents.totalSize()
      logger.debug("Amazon S3 backup size is: %s", displayBytes(total))
      if total > limit.bytes:
         logger.error("Amazon S3 size limit exceeded: %s > %s", displayBytes(total), limit)
         raise ValueError("Amazon S3 size limit exceeded: %s > %s" % (displayBytes(total), limit))
      else:
         logger.info("Total size does not exceed Amazon S3 size limit, so backup can continue.")

##############################
# _writeToAmazonS3() function
##############################

def _writeToAmazonS3(config, local, stagingDirs):
   """
   Writes the indicated staging directories to an Amazon S3 bucket.

   Each of the staging directories listed in C{stagingDirs} will be written
   to the Amazon S3 bucket named in local configuration.  The directories
   will be placed into the image at the root by date, so staging directory
   C{/opt/stage/2005/02/10} will be placed into the S3 bucket at C{/2005/02/10}.
   If an encrypt command is provided, the files will be encrypted first.

   @param config: Config object.
   @param local: Local config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there is a problem writing to Amazon S3
   """
   for stagingDir in stagingDirs.keys():
      logger.debug("Storing stage directory to Amazon S3 [%s].", stagingDir)
      dateSuffix = stagingDirs[stagingDir]
      s3BucketUrl = "s3://%s/%s" % (local.amazons3.s3Bucket, dateSuffix)
      logger.debug("S3 bucket URL is [%s]", s3BucketUrl)
      _clearExistingBackup(config, s3BucketUrl)
      if local.amazons3.encryptCommand is None:
         logger.debug("Encryption is disabled; files will be uploaded in cleartext.")
         _uploadStagingDir(config, stagingDir, s3BucketUrl)
         _verifyUpload(config, stagingDir, s3BucketUrl)
      else:
         logger.debug("Encryption is enabled; files will be uploaded after being encrypted.")
         encryptedDir = tempfile.mkdtemp(dir=config.options.workingDir)
         changeOwnership(encryptedDir, config.options.backupUser, config.options.backupGroup)
         try:
            _encryptStagingDir(config, local, stagingDir, encryptedDir)
            _uploadStagingDir(config, encryptedDir, s3BucketUrl)
            _verifyUpload(config, encryptedDir, s3BucketUrl)
         finally:
            if os.path.exists(encryptedDir):
               shutil.rmtree(encryptedDir)

##################################
# _writeStoreIndicator() function
##################################

def _writeStoreIndicator(config, stagingDirs):
   """
   Writes a store indicator file into staging directories.
   @param config: Config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.
   """
   for stagingDir in stagingDirs.keys():
      writeIndicatorFile(stagingDir, STORE_INDICATOR,
                         config.options.backupUser,
                         config.options.backupGroup)

##################################
# _clearExistingBackup() function
##################################

def _clearExistingBackup(config, s3BucketUrl):
   """
   Clear any existing backup files for an S3 bucket URL.
   @param config: Config object.
   @param s3BucketUrl: S3 bucket URL associated with the staging directory
   """
   suCommand = resolveCommand(SU_COMMAND)
   awsCommand = resolveCommand(AWS_COMMAND)
   actualCommand = "%s s3 rm --recursive %s/" % (awsCommand[0], s3BucketUrl)
   result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI to clear existing backup for [%s]." % (result, s3BucketUrl))
   logger.debug("Completed clearing any existing backup in S3 for [%s]", s3BucketUrl)

###############################
# _uploadStagingDir() function
###############################

def _uploadStagingDir(config, stagingDir, s3BucketUrl):
   """
   Upload the contents of a staging directory out to the Amazon S3 cloud.
   @param config: Config object.
   @param stagingDir: Staging directory to upload
   @param s3BucketUrl: S3 bucket URL associated with the staging directory
   """
   suCommand = resolveCommand(SU_COMMAND)
   awsCommand = resolveCommand(AWS_COMMAND)
   actualCommand = "%s s3 cp --recursive %s/ %s/" % (awsCommand[0], stagingDir, s3BucketUrl)
   result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI to upload staging directory to [%s]." % (result, s3BucketUrl))
   logger.debug("Completed uploading staging dir [%s] to [%s]", stagingDir, s3BucketUrl)

###########################
# _verifyUpload() function
###########################

def _verifyUpload(config, stagingDir, s3BucketUrl):
   """
   Verify that a staging directory was properly uploaded to the Amazon S3 cloud.
   @param config: Config object.
   @param stagingDir: Staging directory to verify
   @param s3BucketUrl: S3 bucket URL associated with the staging directory
   """
   (bucket, prefix) = s3BucketUrl.replace("s3://", "").split("/", 1)
   suCommand = resolveCommand(SU_COMMAND)
   awsCommand = resolveCommand(AWS_COMMAND)
   query = "Contents[].{Key: Key, Size: Size}"
   actualCommand = "%s s3api list-objects --bucket %s --prefix %s --query '%s'" % (awsCommand[0], bucket, prefix, query)
   (result, data) = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand], returnOutput=True)
   if result != 0:
      raise IOError("Error [%d] calling AWS CLI to verify upload to [%s]." % (result, s3BucketUrl))
   contents = { }
   for entry in json.loads("".join(data)):
      key = entry["Key"].replace(prefix, "")
      size = long(entry["Size"])
      contents[key] = size
   files = FilesystemList()
   files.addDirContents(stagingDir)
   for entry in files:
      if os.path.isfile(entry):
         key = entry.replace(stagingDir, "")
         size = long(os.stat(entry).st_size)
         if key not in contents:
            raise IOError("File was apparently not uploaded: [%s]" % entry)
         elif size != contents[key]:
            raise IOError("File size differs [%s]: expected %s bytes locally but found %s bytes in S3" % (entry, size, contents[key]))
   logger.debug("Completed verifying upload from [%s] to [%s].", stagingDir, s3BucketUrl)

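The verification step above folds the output of the JMESPath query C{Contents[].{Key: Key, Size: Size}} into a key-to-size dictionary before comparing it against the local filesystem listing. The following standalone sketch shows that parsing step in isolation; the sample JSON is hypothetical, not output captured from a real bucket:

```python
import json

# Hypothetical sample output shaped by the JMESPath query used above:
# 'Contents[].{Key: Key, Size: Size}'
data = '[{"Key": "2005/02/10/host.tar.gz", "Size": 1024}]'
prefix = "2005/02/10"

contents = {}
for entry in json.loads(data):
    # Strip the date prefix so keys are comparable to staging-relative paths
    key = entry["Key"].replace(prefix, "")
    contents[key] = int(entry["Size"])
```

After this loop, `contents` maps `"/host.tar.gz"` to its S3 object size, which is the shape the size comparison above relies on.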
################################
# _encryptStagingDir() function
################################

def _encryptStagingDir(config, local, stagingDir, encryptedDir):
   """
   Encrypt a staging directory, creating a new directory in the process.
   @param config: Config object.
   @param local: Local config object.
   @param stagingDir: Staging directory to use as source
   @param encryptedDir: Target directory into which encrypted files should be written
   """
   suCommand = resolveCommand(SU_COMMAND)
   files = FilesystemList()
   files.addDirContents(stagingDir)
   for cleartext in files:
      if os.path.isfile(cleartext):
         encrypted = "%s%s" % (encryptedDir, cleartext.replace(stagingDir, ""))
         if long(os.stat(cleartext).st_size) == 0:
            open(encrypted, 'a').close()  # don't bother encrypting empty files
         else:
            actualCommand = local.amazons3.encryptCommand.replace("${input}", cleartext).replace("${output}", encrypted)
            subdir = os.path.dirname(encrypted)
            if not os.path.isdir(subdir):
               os.makedirs(subdir)
               changeOwnership(subdir, config.options.backupUser, config.options.backupGroup)
            result = executeCommand(suCommand, [config.options.backupUser, "-c", actualCommand])[0]
            if result != 0:
               raise IOError("Error [%d] encrypting [%s]." % (result, cleartext))
   logger.debug("Completed encrypting staging directory [%s] into [%s]", stagingDir, encryptedDir)

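The configured encrypt command is a template containing C{${input}} and C{${output}} placeholders, expanded once per file as shown above. A standalone sketch of that expansion follows; the GPG command line and file paths are hypothetical examples, not values taken from Cedar Backup configuration or documentation:

```python
def expandEncryptCommand(template, cleartext, encrypted):
    """Substitute the ${input} and ${output} placeholders in a command template."""
    return template.replace("${input}", cleartext).replace("${output}", encrypted)

# Hypothetical template and paths, for illustration only
command = expandEncryptCommand("gpg --batch --yes -e -r backup -o ${output} ${input}",
                               "/opt/stage/2005/02/10/host.tar.gz",
                               "/tmp/encrypt/host.tar.gz")
```

The expanded string is then handed to `su -c` to run as the backup user, as in the function above.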

CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.RegexMatchList-class.html: CedarBackup2.util.RegexMatchList

    Class RegexMatchList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RegexMatchList
    

    Class representing a list containing only strings that match a regular expression.

    If emptyAllowed is passed in as False, then empty strings are explicitly disallowed, even if they happen to match the regular expression. (None values are always disallowed, since string operations are not permitted on None.)

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list matches the indicated regular expression.


    Note: If you try to put values that are not strings into the list, you will likely get either TypeError or AttributeError exceptions as a result.
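As an illustration of the behavior described above, here is a minimal standalone sketch of a regex-restricted list. It is not the real RegexMatchList (which also extends UnorderedList and supports the emptyAllowed and prefix arguments); it only demonstrates the append-time validation:

```python
import re

class RegexList(list):
    """Minimal sketch: a list that only accepts strings matching a regex."""
    def __init__(self, valuesRegex):
        list.__init__(self)
        self._pattern = re.compile(valuesRegex)
    def append(self, item):
        # Reject None and non-matching values, as the documentation describes
        if item is None:
            raise ValueError("Item is None.")
        if not self._pattern.match(item):
            raise ValueError("Item does not match regular expression.")
        list.append(self, item)

values = RegexList(r"^[a-z]+$")
values.append("daily")           # accepted: matches the pattern
try:
    values.append("Weekly7")     # rejected: capital letter and digit
except ValueError:
    pass
```

The real class applies the same check in insert and extend as well, so every path into the list is validated.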

Instance Methods
    new empty list
    __init__(self, valuesRegex, emptyAllowed=True, prefix=None)
    Initializes a list restricted to containing certain values.
    source code
     
    append(self, item)
    Overrides the standard append method.
    source code
     
    insert(self, index, item)
    Overrides the standard insert method.
    source code
     
    extend(self, seq)
Overrides the standard extend method.
    source code

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Class Variables

    Inherited from list: __hash__

Properties

    Inherited from object: __class__

Method Details

    __init__(self, valuesRegex, emptyAllowed=True, prefix=None)
    (Constructor)

    source code 

    Initializes a list restricted to containing certain values.

    Parameters:
    • valuesRegex - Regular expression that must be matched, as a string
    • emptyAllowed - Indicates whether empty or None values are allowed.
    • prefix - Prefix to use in error messages (None results in prefix "Item")
    Returns: new empty list
    Overrides: object.__init__

    append(self, item)

    source code 

    Overrides the standard append method.

    Raises:
    • ValueError - If item is None
    • ValueError - If item is empty and empty values are not allowed
    • ValueError - If item does not match the configured regular expression
    Overrides: list.append

    insert(self, index, item)

    source code 

    Overrides the standard insert method.

    Raises:
    • ValueError - If item is None
    • ValueError - If item is empty and empty values are not allowed
    • ValueError - If item does not match the configured regular expression
    Overrides: list.insert

    extend(self, seq)

    source code 

Overrides the standard extend method.

    Raises:
    • ValueError - If any item is None
    • ValueError - If any item is empty and empty values are not allowed
    • ValueError - If any item does not match the configured regular expression
    Overrides: list.extend

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.Config-class.html: CedarBackup2.config.Config

    Class Config

    source code

    object --+
             |
            Config
    

    Class representing a Cedar Backup XML configuration document.

    The Config class is a Python object representation of a Cedar Backup XML configuration file. It is intended to be the only Python-language interface to Cedar Backup configuration on disk for both Cedar Backup itself and for external applications.

The object representation is two-way: XML data can be used to create a Config object, and then changes to the object can be propagated back to disk. A Config object can even be used to create a configuration file from scratch programmatically.

    This class and the classes it is composed from often use Python's property construct to validate input and limit access to values. Some validations can only be done once a document is considered "complete" (see module notes for more details).

    Assignments to the various instance variables must match the expected type, i.e. reference must be a ReferenceConfig. The internal check uses the built-in isinstance function, so it should be OK to use subclasses if you want to.

    If an instance variable is not set, its value will be None. When an object is initialized without using an XML document, all of the values will be None. Even when an object is initialized using XML, some of the values might be None because not every section is required.


    Note: Lists within this class are "unordered" for equality comparisons.
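The property-based validation described above can be sketched as follows. This is an illustrative standalone example, not Cedar Backup's actual implementation; ReferenceConfig is reduced to a bare placeholder class:

```python
class ReferenceConfig(object):
    """Bare placeholder standing in for the real ReferenceConfig class."""

class ConfigSketch(object):
    """Sketch of the validated-property pattern described above."""
    def __init__(self):
        self._reference = None
    def _getReference(self):
        """Property target used to get the reference configuration value."""
        return self._reference
    def _setReference(self, value):
        """Property target used to set the reference configuration value."""
        # isinstance permits subclasses, matching the documented behavior
        if value is not None and not isinstance(value, ReferenceConfig):
            raise ValueError("Value must be a ReferenceConfig object.")
        self._reference = value
    reference = property(_getReference, _setReference, None,
                         "Reference configuration section.")

config = ConfigSketch()
config.reference = ReferenceConfig()           # accepted
try:
    config.reference = "not a ReferenceConfig"  # rejected with ValueError
except ValueError:
    pass
```

Each configuration section (reference, extensions, options, and so on) follows this same getter/setter/property pattern.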

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    extractXml(self, xmlPath=None, validate=True)
    Extracts configuration into an XML document.
    source code
     
    validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False)
    Validates configuration represented by the object.
    source code
     
    _getReference(self)
    Property target used to get the reference configuration value.
    source code
     
    _setReference(self, value)
    Property target used to set the reference configuration value.
    source code
     
    _getExtensions(self)
    Property target used to get the extensions configuration value.
    source code
     
    _setExtensions(self, value)
    Property target used to set the extensions configuration value.
    source code
     
    _getOptions(self)
    Property target used to get the options configuration value.
    source code
     
    _setOptions(self, value)
    Property target used to set the options configuration value.
    source code
     
    _getPeers(self)
    Property target used to get the peers configuration value.
    source code
     
    _setPeers(self, value)
    Property target used to set the peers configuration value.
    source code
     
    _getCollect(self)
    Property target used to get the collect configuration value.
    source code
     
    _setCollect(self, value)
    Property target used to set the collect configuration value.
    source code
     
    _getStage(self)
    Property target used to get the stage configuration value.
    source code
     
    _setStage(self, value)
    Property target used to set the stage configuration value.
    source code
     
    _getStore(self)
    Property target used to get the store configuration value.
    source code
     
    _setStore(self, value)
    Property target used to set the store configuration value.
    source code
     
    _getPurge(self)
    Property target used to get the purge configuration value.
    source code
     
    _setPurge(self, value)
    Property target used to set the purge configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code
     
    _extractXml(self)
    Internal method to extract configuration into an XML string.
    source code
     
    _validateContents(self)
    Validates configuration contents per rules discussed in module documentation.
    source code
     
    _validateReference(self)
    Validates reference configuration.
    source code
     
    _validateExtensions(self)
    Validates extensions configuration.
    source code
     
    _validateOptions(self)
    Validates options configuration.
    source code
     
    _validatePeers(self)
    Validates peers configuration per rules in _validatePeerList.
    source code
     
    _validateCollect(self)
    Validates collect configuration.
    source code
     
    _validateStage(self)
    Validates stage configuration.
    source code
     
    _validateStore(self)
    Validates store configuration.
    source code
     
    _validatePurge(self)
    Validates purge configuration.
    source code
     
    _validatePeerList(self, localPeers, remotePeers)
    Validates the set of local and remote peers.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseReference(parentNode)
    Parses a reference configuration section.
    source code
     
    _parseExtensions(parentNode)
    Parses an extensions configuration section.
    source code
     
    _parseOptions(parentNode)
Parses an options configuration section.
    source code
     
    _parsePeers(parentNode)
    Parses a peers configuration section.
    source code
     
    _parseCollect(parentNode)
    Parses a collect configuration section.
    source code
     
    _parseStage(parentNode)
    Parses a stage configuration section.
    source code
     
    _parseStore(parentNode)
    Parses a store configuration section.
    source code
     
    _parsePurge(parentNode)
    Parses a purge configuration section.
    source code
     
    _parseExtendedActions(parentNode)
    Reads extended actions data from immediately beneath the parent.
    source code
     
    _parseExclusions(parentNode)
    Reads exclusions data from immediately beneath the parent.
    source code
     
    _parseOverrides(parentNode)
    Reads a list of CommandOverride objects from immediately beneath the parent.
    source code
     
    _parseHooks(parentNode)
    Reads a list of ActionHook objects from immediately beneath the parent.
    source code
     
    _parseCollectFiles(parentNode)
    Reads a list of CollectFile objects from immediately beneath the parent.
    source code
     
    _parseCollectDirs(parentNode)
    Reads a list of CollectDir objects from immediately beneath the parent.
    source code
     
    _parsePurgeDirs(parentNode)
    Reads a list of PurgeDir objects from immediately beneath the parent.
    source code
     
    _parsePeerList(parentNode)
    Reads remote and local peer data from immediately beneath the parent.
    source code
     
    _parseDependencies(parentNode)
    Reads extended action dependency information from a parent node.
    source code
     
    _parseBlankBehavior(parentNode)
    Reads a single BlankBehavior object from immediately beneath the parent.
    source code
     
    _addReference(xmlDom, parentNode, referenceConfig)
    Adds a <reference> configuration section as the next child of a parent.
    source code
     
    _addExtensions(xmlDom, parentNode, extensionsConfig)
    Adds an <extensions> configuration section as the next child of a parent.
    source code
     
    _addOptions(xmlDom, parentNode, optionsConfig)
Adds an <options> configuration section as the next child of a parent.
    source code
     
    _addPeers(xmlDom, parentNode, peersConfig)
    Adds a <peers> configuration section as the next child of a parent.
    source code
     
    _addCollect(xmlDom, parentNode, collectConfig)
    Adds a <collect> configuration section as the next child of a parent.
    source code
     
    _addStage(xmlDom, parentNode, stageConfig)
    Adds a <stage> configuration section as the next child of a parent.
    source code
     
    _addStore(xmlDom, parentNode, storeConfig)
    Adds a <store> configuration section as the next child of a parent.
    source code
     
    _addPurge(xmlDom, parentNode, purgeConfig)
    Adds a <purge> configuration section as the next child of a parent.
    source code
     
    _addExtendedAction(xmlDom, parentNode, action)
    Adds an extended action container as the next child of a parent.
    source code
     
    _addOverride(xmlDom, parentNode, override)
    Adds a command override container as the next child of a parent.
    source code
     
    _addHook(xmlDom, parentNode, hook)
    Adds an action hook container as the next child of a parent.
    source code
     
    _addCollectFile(xmlDom, parentNode, collectFile)
    Adds a collect file container as the next child of a parent.
    source code
     
    _addCollectDir(xmlDom, parentNode, collectDir)
    Adds a collect directory container as the next child of a parent.
    source code
     
    _addLocalPeer(xmlDom, parentNode, localPeer)
    Adds a local peer container as the next child of a parent.
    source code
     
    _addRemotePeer(xmlDom, parentNode, remotePeer)
    Adds a remote peer container as the next child of a parent.
    source code
     
    _addPurgeDir(xmlDom, parentNode, purgeDir)
    Adds a purge directory container as the next child of a parent.
    source code
     
    _addDependencies(xmlDom, parentNode, dependencies)
Adds extended action dependencies to a parent node.
    source code
     
    _buildCommaSeparatedString(valueList)
    Creates a comma-separated string from a list of values.
    source code
     
    _addBlankBehavior(xmlDom, parentNode, blankBehavior)
    Adds a blanking behavior container as the next child of a parent.
    source code
Properties
      reference
    Reference configuration in terms of a ReferenceConfig object.
      extensions
Extensions configuration in terms of an ExtensionsConfig object.
      options
Options configuration in terms of an OptionsConfig object.
      collect
    Collect configuration in terms of a CollectConfig object.
      stage
    Stage configuration in terms of a StageConfig object.
      store
    Store configuration in terms of a StoreConfig object.
      purge
    Purge configuration in terms of a PurgeConfig object.
      peers
    Peers configuration in terms of a PeersConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath, then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the Config.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
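The "unordered" comparison of list-valued sections means two configurations compare equal even when the same items appear in a different order. A small standalone sketch of the idea (the real UnorderedList implementation may differ):

```python
class UnorderedEq(list):
    """List whose equality ignores element order (a sketch of the idea)."""
    def __eq__(self, other):
        return sorted(self) == sorted(other)
    def __ne__(self, other):
        return not self.__eq__(other)
    __hash__ = None  # mutable, so deliberately unhashable

assert UnorderedEq([1, 2, 3]) == UnorderedEq([3, 1, 2])   # order ignored
assert UnorderedEq([1, 2]) != UnorderedEq([1, 3])         # contents still matter
```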

    extractXml(self, xmlPath=None, validate=True)

    source code 

    Extracts configuration into an XML document.

    If xmlPath is not provided, then the XML document will be returned as a string. If xmlPath is provided, then the XML document will be written to the file and None will be returned.

    Unless the validate parameter is False, the Config.validate method will be called (with its default arguments) against the configuration before extracting the XML. If configuration is not valid, then an XML document will not be extracted.

    Parameters:
    • xmlPath (Absolute path to a file.) - Path to an XML file to create on disk.
    • validate (Boolean true/false.) - Validate the document before extracting it.
    Returns:
    XML string data or None as described above.
    Raises:
    • ValueError - If configuration within the object is not valid.
    • IOError - If there is an error writing to the file.
    • OSError - If there is an error writing to the file.

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to write an invalid configuration file to disk.

    validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True, requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False)

    source code 

    Validates configuration represented by the object.

    This method encapsulates all of the validations that should apply to a fully "complete" document but are not already taken care of by earlier validations. It also provides some extra convenience functionality which might be useful to some people. The process of validation is laid out in the Validation section in the class notes (above).

    Parameters:
    • requireOneAction - Require at least one of the collect, stage, store or purge sections.
    • requireReference - Require the reference section.
    • requireExtensions - Require the extensions section.
    • requireOptions - Require the options section.
    • requirePeers - Require the peers section.
    • requireCollect - Require the collect section.
    • requireStage - Require the stage section.
    • requireStore - Require the store section.
    • requirePurge - Require the purge section.
    Raises:
    • ValueError - If one of the validations fails.

    _setReference(self, value)

    source code 

    Property target used to set the reference configuration value. If not None, the value must be a ReferenceConfig object.

    Raises:
    • ValueError - If the value is not a ReferenceConfig

    _setExtensions(self, value)

    source code 

Property target used to set the extensions configuration value. If not None, the value must be an ExtensionsConfig object.

    Raises:
• ValueError - If the value is not an ExtensionsConfig

    _setOptions(self, value)

    source code 

    Property target used to set the options configuration value. If not None, the value must be an OptionsConfig object.

    Raises:
• ValueError - If the value is not an OptionsConfig

    _setPeers(self, value)

    source code 

Property target used to set the peers configuration value. If not None, the value must be a PeersConfig object.

    Raises:
    • ValueError - If the value is not a PeersConfig

    _setCollect(self, value)

    source code 

    Property target used to set the collect configuration value. If not None, the value must be a CollectConfig object.

    Raises:
    • ValueError - If the value is not a CollectConfig

    _setStage(self, value)

    source code 

    Property target used to set the stage configuration value. If not None, the value must be a StageConfig object.

    Raises:
    • ValueError - If the value is not a StageConfig

    _setStore(self, value)

    source code 

    Property target used to set the store configuration value. If not None, the value must be a StoreConfig object.

    Raises:
    • ValueError - If the value is not a StoreConfig

    _setPurge(self, value)

    source code 

    Property target used to set the purge configuration value. If not None, the value must be a PurgeConfig object.

    Raises:
    • ValueError - If the value is not a PurgeConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls individual static methods to parse each of the individual configuration sections.

Most of the validation we do here has to do with whether the document can be parsed and whether any values which exist are valid. We don't do much validation as to whether required elements actually exist unless we need to in order to make sense of the document (instead, that's the job of the validate method).

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.
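
The parse-and-raise pattern described above can be sketched as follows. This is a hypothetical stand-in built on xml.dom.minidom, not the actual internal method:

```python
from xml.dom.minidom import parseString

def parse_config(xml_data):
    # Parse the document into a DOM tree, converting any parse failure
    # into ValueError as described above.
    try:
        dom = parseString(xml_data)
    except Exception as e:
        raise ValueError("Unable to parse XML document: %s" % e)
    return dom.documentElement
```

From here, individual static methods would be called against the returned document element to parse each configuration section.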

    _parseReference(parentNode)
    Static Method

    Parses a reference configuration section.

    We read the following fields:

      author         //cb_config/reference/author
      revision       //cb_config/reference/revision
      description    //cb_config/reference/description
      generator      //cb_config/reference/generator
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ReferenceConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.
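
Reading a simple text field beneath a parent node, as this method does for author, revision, description and generator, can be sketched like this. read_string is a hypothetical helper name, not the real internal function:

```python
from xml.dom.minidom import parseString

def read_string(parent, name):
    # Return the text content of the first matching child element,
    # or None when the element is missing or empty.
    for node in parent.getElementsByTagName(name):
        text = "".join(c.data for c in node.childNodes
                       if c.nodeType == c.TEXT_NODE)
        return text or None
    return None
```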

    _parseExtensions(parentNode)
    Static Method

    Parses an extensions configuration section.

    We read the following fields:

      orderMode            //cb_config/extensions/order_mode
    

    We also read groups of the following items, one list element per item:

      name                 //cb_config/extensions/action/name
      module               //cb_config/extensions/action/module
      function             //cb_config/extensions/action/function
      index                //cb_config/extensions/action/index
      dependencies         //cb_config/extensions/action/depends
    

    The extended actions are parsed by _parseExtendedActions.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ExtensionsConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseOptions(parentNode)
    Static Method

    Parses an options configuration section.

    We read the following fields:

      startingDay    //cb_config/options/starting_day
      workingDir     //cb_config/options/working_dir
      backupUser     //cb_config/options/backup_user
      backupGroup    //cb_config/options/backup_group
      rcpCommand     //cb_config/options/rcp_command
      rshCommand     //cb_config/options/rsh_command
      cbackCommand   //cb_config/options/cback_command
      managedActions //cb_config/options/managed_actions
    

    The list of managed actions is a comma-separated list of action names.

    We also read groups of the following items, one list element per item:

      overrides      //cb_config/options/override
      hooks          //cb_config/options/hook
    

    The overrides are parsed by _parseOverrides and the hooks are parsed by _parseHooks.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    OptionsConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePeers(parentNode)
    Static Method

    Parses a peers configuration section.

    We read groups of the following items, one list element per item:

      localPeers     //cb_config/peers/peer
      remotePeers    //cb_config/peers/peer
    

    The individual peer entries are parsed by _parsePeerList.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    PeersConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollect(parentNode)
    Static Method

    Parses a collect configuration section.

    We read the following individual fields:

      targetDir            //cb_config/collect/collect_dir
      collectMode          //cb_config/collect/collect_mode
      archiveMode          //cb_config/collect/archive_mode
      ignoreFile           //cb_config/collect/ignore_file
    

    We also read groups of the following items, one list element per item:

      absoluteExcludePaths //cb_config/collect/exclude/abs_path
      excludePatterns      //cb_config/collect/exclude/pattern
      collectFiles         //cb_config/collect/file
      collectDirs          //cb_config/collect/dir
    

    The exclusions are parsed by _parseExclusions, the collect files are parsed by _parseCollectFiles, and the directories are parsed by _parseCollectDirs.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    CollectConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseStage(parentNode)
    Static Method

    Parses a stage configuration section.

    We read the following individual fields:

      targetDir      //cb_config/stage/staging_dir
    

    We also read groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual peer entries are parsed by _parsePeerList.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    StageConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseStore(parentNode)
    Static Method

    Parses a store configuration section.

    We read the following fields:

      sourceDir         //cb_config/store/source_dir
      mediaType         //cb_config/store/media_type
      deviceType        //cb_config/store/device_type
      devicePath        //cb_config/store/target_device
      deviceScsiId      //cb_config/store/target_scsi_id
      driveSpeed        //cb_config/store/drive_speed
      checkData         //cb_config/store/check_data
      checkMedia        //cb_config/store/check_media
      warnMidnite       //cb_config/store/warn_midnite
      noEject           //cb_config/store/no_eject
    

    Blanking behavior configuration is parsed by the _parseBlankBehavior method.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    StoreConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePurge(parentNode)
    Static Method

    Parses a purge configuration section.

    We read groups of the following items, one list element per item:

      purgeDirs     //cb_config/purge/dir
    

    The individual directory entries are parsed by _parsePurgeDirs.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    PurgeConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseExtendedActions(parentNode)
    Static Method

    Reads extended actions data from immediately beneath the parent.

    We read the following individual fields from each extended action:

      name           name
      module         module
      function       function
      index          index
      dependencies   depends
    

    Dependency information is parsed by the _parseDependencies method.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of extended actions.
    Raises:
    • ValueError - If the data at the location can't be read

    _parseExclusions(parentNode)
    Static Method

    Reads exclusions data from immediately beneath the parent.

    We read groups of the following items, one list element per item:

      absolute    exclude/abs_path
      relative    exclude/rel_path
      patterns    exclude/pattern
    

    If there are none of some pattern (i.e. no relative path items) then None will be returned for that item in the tuple.

    This method can be used to parse exclusions on both the collect configuration level and on the collect directory level within collect configuration.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (absolute, relative, patterns) exclusions.
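
The None-for-empty tuple behavior described above can be sketched as follows, using a dict of lists as a hypothetical stand-in for the parsed <exclude> section:

```python
def parse_exclusions(exclude):
    # Each category collapses to None when it has no entries, so callers
    # can distinguish "nothing configured" from an empty list.
    if exclude is None:
        return (None, None, None)
    absolute = exclude.get("abs_path") or None
    relative = exclude.get("rel_path") or None
    patterns = exclude.get("pattern") or None
    return (absolute, relative, patterns)
```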

    _parseOverrides(parentNode)
    Static Method

    Reads a list of CommandOverride objects from immediately beneath the parent.

    We read the following individual fields:

      command                 command
      absolutePath            abs_path
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CommandOverride objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseHooks(parentNode)
    Static Method

    Reads a list of ActionHook objects from immediately beneath the parent.

    We read the following individual fields:

      action                  action
      command                 command
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of ActionHook objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parseCollectFiles(parentNode)
    Static Method

    Reads a list of CollectFile objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             mode or collect_mode
      archiveMode             archive_mode
    

    The collect mode is a special case. A plain mode tag is accepted for backwards compatibility, but we prefer collect_mode for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only mode will be used.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CollectFile objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.
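
The mode/collect_mode precedence rule described above can be sketched as follows, using a dict as a hypothetical stand-in for the parsed tags:

```python
def resolve_collect_mode(values):
    # The legacy 'mode' tag wins when both 'mode' and 'collect_mode'
    # are present, per the rule described above.
    if values.get("mode") is not None:
        return values["mode"]
    return values.get("collect_mode")
```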

    _parseCollectDirs(parentNode)
    Static Method

    Reads a list of CollectDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            abs_path
      collectMode             mode or collect_mode
      archiveMode             archive_mode
      ignoreFile              ignore_file
      linkDepth               link_depth
      dereference             dereference
      recursionLevel          recursion_level
    

    The collect mode is a special case. Just a mode tag is accepted for backwards compatibility, but we prefer collect_mode for consistency with the rest of the config file and to avoid confusion with the archive mode. If both are provided, only mode will be used.

    We also read groups of the following items, one list element per item:

      absoluteExcludePaths    exclude/abs_path
      relativeExcludePaths    exclude/rel_path
      excludePatterns         exclude/pattern
    

    The exclusions are parsed by _parseExclusions.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of CollectDir objects or None if none are found.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _parsePurgeDirs(parentNode)
    Static Method

    Reads a list of PurgeDir objects from immediately beneath the parent.

    We read the following individual fields:

      absolutePath            <baseExpr>/abs_path
      retainDays              <baseExpr>/retain_days
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    List of PurgeDir objects or None if none are found.
    Raises:
    • ValueError - If the data at the location can't be read

    _parsePeerList(parentNode)
    Static Method

    Reads remote and local peer data from immediately beneath the parent.

    We read the following individual fields for both remote and local peers:

      name        name
      collectDir  collect_dir
    

    We also read the following individual fields for remote peers only:

      remoteUser     backup_user
      rcpCommand     rcp_command
      rshCommand     rsh_command
      cbackCommand   cback_command
      managed        managed
      managedActions managed_actions
    

    Additionally, the value in the type field is used to determine whether this entry is a remote peer. If the type is "remote", it's a remote peer, and if the type is "local", it's a local peer.

    If there are none of one type of peer (i.e. no local peers) then None will be returned for that item in the tuple.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    Tuple of (local, remote) peer lists.
    Raises:
    • ValueError - If the data at the location can't be read
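
The type-based classification into a (local, remote) tuple described above can be sketched like this, with dicts standing in for the parsed <peer> sections:

```python
def classify_peers(entries):
    # Split entries on their "type" field; a category with no members
    # is returned as None rather than an empty list.
    local = [p for p in entries if p.get("type") == "local"] or None
    remote = [p for p in entries if p.get("type") == "remote"] or None
    return (local, remote)
```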

    _parseDependencies(parentNode)
    Static Method

    Reads extended action dependency information from a parent node.

    We read the following individual fields:

      runBefore   depends/run_before
      runAfter    depends/run_after
    

    Each of these fields is a comma-separated list of action names.

    The result is placed into an ActionDependencies object.

    If the dependencies parent node does not exist, None will be returned. Otherwise, an ActionDependencies object will always be created, even if it does not contain any actual dependencies in it.

    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    ActionDependencies object or None.
    Raises:
    • ValueError - If the data at the location can't be read

    _parseBlankBehavior(parentNode)
    Static Method

    Reads a single BlankBehavior object from immediately beneath the parent.

    We read the following individual fields:

      blankMode     blank_behavior/mode
      blankFactor   blank_behavior/factor
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    BlankBehavior object, or None if the section is not found
    Raises:
    • ValueError - If some filled-in value is invalid.

    _extractXml(self)

    Internal method to extract configuration into an XML string.

    This method assumes that the internal validate method has been called prior to extracting the XML, if the caller cares. No validation will be done internally.

    As a general rule, fields that are set to None will be extracted into the document as empty tags. The same goes for container tags that are filled based on lists - if the list is empty or None, the container tag will be empty.
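
The None-becomes-empty-tag rule can be sketched with xml.dom.minidom; add_string_node is a hypothetical helper name, not the real internal function:

```python
from xml.dom.minidom import Document

def add_string_node(xml_dom, parent, name, value):
    # Fields set to None are emitted as empty tags, per the rule above.
    node = xml_dom.createElement(name)
    if value is not None:
        node.appendChild(xml_dom.createTextNode(str(value)))
    parent.appendChild(node)
    return node
```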

    _addReference(xmlDom, parentNode, referenceConfig)
    Static Method

    Adds a <reference> configuration section as the next child of a parent.

    We add the following fields to the document:

      author         //cb_config/reference/author
      revision       //cb_config/reference/revision
      description    //cb_config/reference/description
      generator      //cb_config/reference/generator
    

    If referenceConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • referenceConfig - Reference configuration section to be added to the document.

    _addExtensions(xmlDom, parentNode, extensionsConfig)
    Static Method

    Adds an <extensions> configuration section as the next child of a parent.

    We add the following fields to the document:

      order_mode     //cb_config/extensions/order_mode
    

    We also add groups of the following items, one list element per item:

      actions        //cb_config/extensions/action
    

    The extended action entries are added by _addExtendedAction.

    If extensionsConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • extensionsConfig - Extensions configuration section to be added to the document.

    _addOptions(xmlDom, parentNode, optionsConfig)
    Static Method

    Adds an <options> configuration section as the next child of a parent.

    We add the following fields to the document:

      startingDay    //cb_config/options/starting_day
      workingDir     //cb_config/options/working_dir
      backupUser     //cb_config/options/backup_user
      backupGroup    //cb_config/options/backup_group
      rcpCommand     //cb_config/options/rcp_command
      rshCommand     //cb_config/options/rsh_command
      cbackCommand   //cb_config/options/cback_command
      managedActions //cb_config/options/managed_actions
    

    We also add groups of the following items, one list element per item:

      overrides      //cb_config/options/override
      hooks          //cb_config/options/pre_action_hook
      hooks          //cb_config/options/post_action_hook
    

    The individual override items are added by _addOverride. The individual hook items are added by _addHook.

    If optionsConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • optionsConfig - Options configuration section to be added to the document.

    _addPeers(xmlDom, parentNode, peersConfig)
    Static Method

    Adds a <peers> configuration section as the next child of a parent.

    We add groups of the following items, one list element per item:

      localPeers     //cb_config/peers/peer
      remotePeers    //cb_config/peers/peer
    

    The individual local and remote peer entries are added by _addLocalPeer and _addRemotePeer, respectively.

    If peersConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • peersConfig - Peers configuration section to be added to the document.

    _addCollect(xmlDom, parentNode, collectConfig)
    Static Method

    Adds a <collect> configuration section as the next child of a parent.

    We add the following fields to the document:

      targetDir            //cb_config/collect/collect_dir
      collectMode          //cb_config/collect/collect_mode
      archiveMode          //cb_config/collect/archive_mode
      ignoreFile           //cb_config/collect/ignore_file
    

    We also add groups of the following items, one list element per item:

      absoluteExcludePaths //cb_config/collect/exclude/abs_path
      excludePatterns      //cb_config/collect/exclude/pattern
      collectFiles         //cb_config/collect/file
      collectDirs          //cb_config/collect/dir
    

    The individual collect files are added by _addCollectFile and individual collect directories are added by _addCollectDir.

    If collectConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectConfig - Collect configuration section to be added to the document.

    _addStage(xmlDom, parentNode, stageConfig)
    Static Method

    Adds a <stage> configuration section as the next child of a parent.

    We add the following fields to the document:

      targetDir      //cb_config/stage/staging_dir
    

    We also add groups of the following items, one list element per item:

      localPeers     //cb_config/stage/peer
      remotePeers    //cb_config/stage/peer
    

    The individual local and remote peer entries are added by _addLocalPeer and _addRemotePeer, respectively.

    If stageConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • stageConfig - Stage configuration section to be added to the document.

    _addStore(xmlDom, parentNode, storeConfig)
    Static Method

    Adds a <store> configuration section as the next child of a parent.

    We add the following fields to the document:

      sourceDir         //cb_config/store/source_dir
      mediaType         //cb_config/store/media_type
      deviceType        //cb_config/store/device_type
      devicePath        //cb_config/store/target_device
      deviceScsiId      //cb_config/store/target_scsi_id
      driveSpeed        //cb_config/store/drive_speed
      checkData         //cb_config/store/check_data
      checkMedia        //cb_config/store/check_media
      warnMidnite       //cb_config/store/warn_midnite
      noEject           //cb_config/store/no_eject
      refreshMediaDelay //cb_config/store/refresh_media_delay
      ejectDelay        //cb_config/store/eject_delay
    

    Blanking behavior configuration is added by the _addBlankBehavior method.

    If storeConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • storeConfig - Store configuration section to be added to the document.

    _addPurge(xmlDom, parentNode, purgeConfig)
    Static Method

    Adds a <purge> configuration section as the next child of a parent.

    We add the following fields to the document:

      purgeDirs     //cb_config/purge/dir
    

    The individual directory entries are added by _addPurgeDir.

    If purgeConfig is None, then no container will be added.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • purgeConfig - Purge configuration section to be added to the document.

    _addExtendedAction(xmlDom, parentNode, action)
    Static Method

    Adds an extended action container as the next child of a parent.

    We add the following fields to the document:

      name           action/name
      module         action/module
      function       action/function
      index          action/index
      dependencies   action/depends
    

    Dependencies are added by the _addDependencies method.

    The <action> node itself is created as the next child of the parent node. This method only adds one action node. The parent must loop for each action in the ExtensionsConfig object.

    If action is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • action - Extended action to be added to the document.

    _addOverride(xmlDom, parentNode, override)
    Static Method

    Adds a command override container as the next child of a parent.

    We add the following fields to the document:

      command                 override/command
      absolutePath            override/abs_path
    

    The <override> node itself is created as the next child of the parent node. This method only adds one override node. The parent must loop for each override in the OptionsConfig object.

    If override is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • override - Command override to be added to the document.

    _addHook(xmlDom, parentNode, hook)
    Static Method

    Adds an action hook container as the next child of a parent.

    The behavior varies depending on the value of the before and after flags on the hook. If the before flag is set, it's a pre-action hook, and we'll add the following fields:

      action                  pre_action_hook/action
      command                 pre_action_hook/command
    

    If the after flag is set, it's a post-action hook, and we'll add the following fields:

      action                  post_action_hook/action
      command                 post_action_hook/command
    

    The <pre_action_hook> or <post_action_hook> node itself is created as the next child of the parent node. This method only adds one hook node. The parent must loop for each hook in the OptionsConfig object.

    If hook is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • hook - Command hook to be added to the document.
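
The flag-driven choice of container tag described above can be sketched as follows. hook_element_name is a hypothetical stand-in for the internal logic:

```python
def hook_element_name(before, after):
    # Pick the container tag for an action hook based on its flags.
    if before:
        return "pre_action_hook"
    if after:
        return "post_action_hook"
    raise ValueError("Hook must be either a pre-action or post-action hook.")
```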

    _addCollectFile(xmlDom, parentNode, collectFile)
    Static Method

    Adds a collect file container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      archiveMode             dir/archive_mode
    

    Note that for consistency with collect directory handling we'll only emit the preferred collect_mode tag.

    The <file> node itself is created as the next child of the parent node. This method only adds one collect file node. The parent must loop for each collect file in the CollectConfig object.

    If collectFile is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectFile - Collect file to be added to the document.

    _addCollectDir(xmlDom, parentNode, collectDir)
    Static Method

    Adds a collect directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      collectMode             dir/collect_mode
      archiveMode             dir/archive_mode
      ignoreFile              dir/ignore_file
      linkDepth               dir/link_depth
      dereference             dir/dereference
      recursionLevel          dir/recursion_level
    

    Note that an original XML document might have listed the collect mode using the mode tag, since we accept both collect_mode and mode. However, here we'll only emit the preferred collect_mode tag.

    We also add groups of the following items, one list element per item:

      absoluteExcludePaths    dir/exclude/abs_path
      relativeExcludePaths    dir/exclude/rel_path
      excludePatterns         dir/exclude/pattern
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one collect directory node. The parent must loop for each collect directory in the CollectConfig object.

    If collectDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • collectDir - Collect directory to be added to the document.

    _addLocalPeer(xmlDom, parentNode, localPeer)
    Static Method

    Adds a local peer container as the next child of a parent.

    We add the following fields to the document:

      name                peer/name
      collectDir          peer/collect_dir
      ignoreFailureMode   peer/ignore_failures
    

    Additionally, peer/type is filled in with "local", since this is a local peer.

    The <peer> node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the StageConfig object.

    If localPeer is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • localPeer - Local peer to be added to the document.

    _addRemotePeer(xmlDom, parentNode, remotePeer)
    Static Method

    Adds a remote peer container as the next child of a parent.

    We add the following fields to the document:

      name                peer/name
      collectDir          peer/collect_dir
      remoteUser          peer/backup_user
      rcpCommand          peer/rcp_command
      rshCommand          peer/rsh_command
      cbackCommand        peer/cback_command
      ignoreFailureMode   peer/ignore_failures
      managed             peer/managed
      managedActions      peer/managed_actions
    

    Additionally, peer/type is filled in with "remote", since this is a remote peer.

    The <peer> node itself is created as the next child of the parent node. This method only adds one peer node. The parent must loop for each peer in the StageConfig object.

    If remotePeer is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • remotePeer - Remote peer to be added to the document.

    _addPurgeDir(xmlDom, parentNode, purgeDir)
    Static Method

    Adds a purge directory container as the next child of a parent.

    We add the following fields to the document:

      absolutePath            dir/abs_path
      retainDays              dir/retain_days
    

    The <dir> node itself is created as the next child of the parent node. This method only adds one purge directory node. The parent must loop for each purge directory in the PurgeConfig object.

    If purgeDir is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • purgeDir - Purge directory to be added to the document.

    _addDependencies(xmlDom, parentNode, dependencies)
    Static Method

    Adds extended action dependencies to a parent node.

    We add the following fields to the document:

      runBefore      depends/run_before
      runAfter       depends/run_after
    

    If dependencies is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • dependencies - ActionDependencies object to be added to the document

    _buildCommaSeparatedString(valueList)
    Static Method

    Creates a comma-separated string from a list of values.

    As a special case, if valueList is None, then None will be returned.

    Parameters:
    • valueList - List of values to be placed into a string
    Returns:
    Values from valueList as a comma-separated string.
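
The behavior described above, including the None special case, can be sketched like this:

```python
def build_comma_separated_string(value_list):
    # As a special case, None passes through unchanged.
    if value_list is None:
        return None
    return ",".join(str(value) for value in value_list)
```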

    _addBlankBehavior(xmlDom, parentNode, blankBehavior)
    Static Method

    Adds a blanking behavior container as the next child of a parent.

    We add the following fields to the document:

      blankMode    blank_behavior/mode
      blankFactor  blank_behavior/factor
    

    The <blank_behavior> node itself is created as the next child of the parent node.

    If blankBehavior is None, this method call will be a no-op.

    Parameters:
    • xmlDom - DOM tree as from createOutputDom.
    • parentNode - Parent that the section should be appended to.
    • blankBehavior - Blanking behavior to be added to the document.

    _validateContents(self)

    Validates configuration contents per rules discussed in module documentation.

    This is the second pass at validation. It ensures that any filled-in section contains valid data. Any section that is not set to None is validated per the rules for that section, laid out in the module documentation (above).

    Raises:
    • ValueError - If configuration is invalid.

    _validateReference(self)

    Validates reference configuration. There are currently no reference-related validations.

    Raises:
    • ValueError - If reference configuration is invalid.

    _validateExtensions(self)

    Validates extensions configuration.

    The list of actions may be either None or an empty list [] if desired. Each extended action must include a name, a module, and a function.

    Then, if the order mode is None or "index", an index is required; and if the order mode is "dependency", dependency information is required.

    Raises:
    • ValueError - If extensions configuration is invalid.
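
The order-mode rules above can be sketched as follows, with a dict standing in for an extended action object:

```python
def validate_extended_action(action, order_mode):
    # Every extended action needs a name, module and function; the
    # remaining requirements depend on the order mode, per the rules above.
    for field in ("name", "module", "function"):
        if action.get(field) is None:
            raise ValueError("Each extended action must set name, module and function.")
    if order_mode in (None, "index") and action.get("index") is None:
        raise ValueError("Order mode 'index' requires an index for each action.")
    if order_mode == "dependency" and action.get("dependencies") is None:
        raise ValueError("Order mode 'dependency' requires dependency information.")
```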

    _validateOptions(self)

    source code 

    Validates options configuration.

    All fields must be filled in except the rsh command. The rcp and rsh commands are used as default values for all remote peers. Remote peers can also rely on the backup user as the default remote user name if they choose.

    Raises:
    • ValueError - If options configuration is invalid.

    _validatePeers(self)

    source code 

    Validates peers configuration per rules in _validatePeerList.

    Raises:
    • ValueError - If peers configuration is invalid.

    _validateCollect(self)

    source code 

    Validates collect configuration.

    The target directory must be filled in. The collect mode, archive mode, ignore file, and recursion level are all optional. The list of absolute paths to exclude and patterns to exclude may be either None or an empty list [] if desired.

    Each collect directory entry must contain an absolute path to collect, and then must either be able to take collect mode, archive mode and ignore file configuration from the parent CollectConfig object, or must set each value on its own. The list of absolute paths to exclude, relative paths to exclude and patterns to exclude may be either None or an empty list [] if desired. Any list of absolute paths to exclude or patterns to exclude will be combined with the same list in the CollectConfig object to make the complete list for a given directory.

    Raises:
    • ValueError - If collect configuration is invalid.
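
    The combination rule in the last paragraph amounts to concatenating the parent-level and directory-level lists, treating None as empty; a minimal sketch:

```python
def complete_exclusions(parent_list, directory_list):
    """Build the complete exclusion list for one collect directory."""
    return (parent_list or []) + (directory_list or [])
```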

    _validateStage(self)

    source code 

    Validates stage configuration.

    The target directory must be filled in, and the peers are also validated.

    Peers are only required in this section if the peers configuration section is not filled in. However, if any peers are filled in here, they override the peers configuration and must meet the validation criteria in _validatePeerList.

    Raises:
    • ValueError - If stage configuration is invalid.

    _validateStore(self)

    source code 

    Validates store configuration.

    The device type, drive speed, and blanking behavior are optional. All other values are required. Missing booleans will be set to defaults.

    If blanking behavior is provided, then both a blanking mode and a blanking factor are required.

    The image writer functionality in the writer module is supposed to be able to handle a device speed of None.

    Any caller which needs a "real" (non-None) value for the device type can use DEFAULT_DEVICE_TYPE, which is guaranteed to be sensible.

    This is also where we make sure that the media type -- which is already a valid type -- matches up properly with the device type.

    Raises:
    • ValueError - If store configuration is invalid.
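
    The media-type/device-type cross-check mentioned above is essentially a compatibility-table lookup. The table below is illustrative only; the authoritative combinations live in the config module itself:

```python
# Hypothetical compatibility table, for illustration only.
_COMPATIBLE_MEDIA = {
    "cdwriter": set(["cdr-74", "cdrw-74", "cdr-80", "cdrw-80"]),
    "dvdwriter": set(["dvd+r", "dvd+rw"]),
}

def check_media_type(device_type, media_type):
    """Raise ValueError when the media type does not suit the device type."""
    if media_type not in _COMPATIBLE_MEDIA.get(device_type, set()):
        raise ValueError("Media type %s does not match device type %s."
                         % (media_type, device_type))
```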

    _validatePurge(self)

    source code 

    Validates purge configuration.

    The list of purge directories may be either None or an empty list [] if desired. All purge directories must contain a path and a retain days value.

    Raises:
    • ValueError - If purge configuration is invalid.

    _validatePeerList(self, localPeers, remotePeers)

    source code 

    Validates the set of local and remote peers.

    Local peers must be completely filled in, including both name and collect directory. Remote peers must also fill in the name and collect directory, but can leave the remote user and rcp command unset. In this case, the remote user is assumed to match the backup user from the options section and rcp command is taken directly from the options section.

    Parameters:
    • localPeers - List of local peers
    • remotePeers - List of remote peers
    Raises:
    • ValueError - If peer configuration is invalid.
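
    The fallback behavior described above (remote user defaults to the backup user, rcp command comes from the options section) can be sketched as:

```python
def effective_peer_settings(peer_user, peer_rcp, options_user, options_rcp):
    """Resolve remote-peer settings, falling back to options defaults."""
    return (peer_user or options_user, peer_rcp or options_rcp)
```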

    Property Details

    reference

    Reference configuration in terms of a ReferenceConfig object.

    Get Method:
    _getReference(self) - Property target used to get the reference configuration value.
    Set Method:
    _setReference(self, value) - Property target used to set the reference configuration value.

    extensions

    Extensions configuration in terms of an ExtensionsConfig object.

    Get Method:
    _getExtensions(self) - Property target used to get the extensions configuration value.
    Set Method:
    _setExtensions(self, value) - Property target used to set the extensions configuration value.

    options

    Options configuration in terms of an OptionsConfig object.

    Get Method:
    _getOptions(self) - Property target used to get the options configuration value.
    Set Method:
    _setOptions(self, value) - Property target used to set the options configuration value.

    collect

    Collect configuration in terms of a CollectConfig object.

    Get Method:
    _getCollect(self) - Property target used to get the collect configuration value.
    Set Method:
    _setCollect(self, value) - Property target used to set the collect configuration value.

    stage

    Stage configuration in terms of a StageConfig object.

    Get Method:
    _getStage(self) - Property target used to get the stage configuration value.
    Set Method:
    _setStage(self, value) - Property target used to set the stage configuration value.

    store

    Store configuration in terms of a StoreConfig object.

    Get Method:
    _getStore(self) - Property target used to get the store configuration value.
    Set Method:
    _setStore(self, value) - Property target used to set the store configuration value.

    purge

    Purge configuration in terms of a PurgeConfig object.

    Get Method:
    _getPurge(self) - Property target used to get the purge configuration value.
    Set Method:
    _setPurge(self, value) - Property target used to set the purge configuration value.

    peers

    Peers configuration in terms of a PeersConfig object.

    Get Method:
    _getPeers(self) - Property target used to get the peers configuration value.
    Set Method:
    _setPeers(self, value) - Property target used to set the peers configuration value.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mbox.MboxDir-class.html: CedarBackup2.extend.mbox.MboxDir
    Package CedarBackup2 :: Package extend :: Module mbox :: Class MboxDir

    Class MboxDir

    source code

    object --+
             |
            MboxDir
    

    Class representing mbox directory configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    Unlike collect directory configuration, this is the only place exclusions are allowed (no global exclusions at the <mbox> configuration level). Also, we only allow relative exclusions and there is no configured ignore file. This is because mbox directory backups are not recursive.

    Instance Methods
     
    __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    Constructor for the MboxDir class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setRelativeExcludePaths(self, value)
    Property target used to set the relative exclude paths list.
    source code
     
    _getRelativeExcludePaths(self)
    Property target used to get the relative exclude paths list.
    source code
     
    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.
    source code
     
    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      absolutePath
    Absolute path to the mbox directory.
      collectMode
    Overridden collect mode for this mbox directory.
      compressMode
    Overridden compress mode for this mbox directory.
      relativeExcludePaths
    List of relative paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.

    Inherited from object: __class__

    Method Details

    __init__(self, absolutePath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    (Constructor)

    source code 

    Constructor for the MboxDir class.

    You should never directly instantiate this class.

    Parameters:
    • absolutePath - Absolute path to an mbox directory on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
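
    The path rule can be sketched with os.path.isabs; note that existence on disk is deliberately not checked:

```python
import os.path

def validate_absolute_path(value):
    """Accept None or an absolute path; the path need not exist on disk."""
    if value is not None and not os.path.isabs(value):
        raise ValueError("Path must be an absolute path: %s" % value)
    return value
```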

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRelativeExcludePaths(self, value)

    source code 

    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


    Property Details

    absolutePath

    Absolute path to the mbox directory.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this mbox directory.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this mbox directory.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion.Repository-class.html: CedarBackup2.extend.subversion.Repository
    Package CedarBackup2 :: Package extend :: Module subversion :: Class Repository

    Class Repository

    source code

    object --+
             |
            Repository
    
    Known Subclasses:

    Class representing generic Subversion repository configuration.

    The following restrictions exist on data in this class:

    • The repository path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    The repository type value is kept around just for reference. It doesn't affect the behavior of the backup.

    Instance Methods
     
    __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the Repository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setRepositoryType(self, value)
    Property target used to set the repository type.
    source code
     
    _getRepositoryType(self)
    Property target used to get the repository type.
    source code
     
    _setRepositoryPath(self, value)
    Property target used to set the repository path.
    source code
     
    _getRepositoryPath(self)
    Property target used to get the repository path.
    source code
     
    _setCollectMode(self, value)
    Property target used to set the collect mode.
    source code
     
    _getCollectMode(self)
    Property target used to get the collect mode.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      repositoryPath
    Path to the repository to collect.
      collectMode
    Overridden collect mode for this repository.
      compressMode
    Overridden compress mode for this repository.
      repositoryType
    Type of this repository, for reference.

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryType=None, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the Repository class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setRepositoryType(self, value)

    source code 

    Property target used to set the repository type. There is no validation; this value is kept around just for reference.

    _setRepositoryPath(self, value)

    source code 

    Property target used to set the repository path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    Property Details

    repositoryPath

    Path to the repository to collect.

    Get Method:
    _getRepositoryPath(self) - Property target used to get the repository path.
    Set Method:
    _setRepositoryPath(self, value) - Property target used to set the repository path.

    collectMode

    Overridden collect mode for this repository.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this repository.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositoryType

    Type of this repository, for reference.

    Get Method:
    _getRepositoryType(self) - Property target used to get the repository type.
    Set Method:
    _setRepositoryType(self, value) - Property target used to set the repository type.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion.FSFSRepository-class.html: CedarBackup2.extend.subversion.FSFSRepository
    Package CedarBackup2 :: Package extend :: Module subversion :: Class FSFSRepository

    Class FSFSRepository

    source code

    object --+    
             |    
    Repository --+
                 |
                FSFSRepository
    

    Class representing Subversion FSFS repository configuration. This object is deprecated. Use a simple Repository instead.

    Instance Methods
     
    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    Constructor for the FSFSRepository class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from Repository: __cmp__, __str__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties

    Inherited from Repository: collectMode, compressMode, repositoryPath, repositoryType

    Inherited from object: __class__

    Method Details

    __init__(self, repositoryPath=None, collectMode=None, compressMode=None)
    (Constructor)

    source code 

    Constructor for the FSFSRepository class.

    Parameters:
    • repositoryPath - Absolute path to a Subversion repository on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.writer-module.html: CedarBackup2.writer
    Package CedarBackup2 :: Module writer

    Module writer

    source code

    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables
      __package__ = 'CedarBackup2'
    CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.util-module.html: util

    Module util


    Classes

    AbsolutePathList
    Diagnostics
    DirectedGraph
    ObjectTypeList
    PathResolverSingleton
    Pipe
    RegexList
    RegexMatchList
    RestrictedContentList
    UnorderedList

    Functions

    buildNormalizedPath
    calculateFileAge
    changeOwnership
    checkUnique
    convertSize
    dereferenceLink
    deriveDayOfWeek
    deviceMounted
    displayBytes
    encodePath
    executeCommand
    getFunctionReference
    getUidGid
    isRunningAsRoot
    isStartOfWeek
    mount
    nullDevice
    parseCommaSeparatedString
    removeKeys
    resolveCommand
    sanitizeEnvironment
    sortDict
    splitCommandLine
    unmount

    Variables

    BYTES_PER_GBYTE
    BYTES_PER_KBYTE
    BYTES_PER_MBYTE
    BYTES_PER_SECTOR
    DEFAULT_LANGUAGE
    HOURS_PER_DAY
    ISO_SECTOR_SIZE
    KBYTES_PER_MBYTE
    LANG_VAR
    LOCALE_VARS
    MBYTES_PER_GBYTE
    MINUTES_PER_HOUR
    MOUNT_COMMAND
    MTAB_FILE
    SECONDS_PER_DAY
    SECONDS_PER_MINUTE
    UMOUNT_COMMAND
    UNIT_BYTES
    UNIT_GBYTES
    UNIT_KBYTES
    UNIT_MBYTES
    UNIT_SECTORS
    __package__
    logger
    outputLogger

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.PathResolverSingleton._Helper-class.html: CedarBackup2.util.PathResolverSingleton._Helper
    Package CedarBackup2 :: Module util :: Class PathResolverSingleton :: Class _Helper

    Class _Helper

    source code

    object --+
             |
            PathResolverSingleton._Helper
    

    Helper class to provide a singleton factory method.

    Instance Methods
     
    __init__(self)
    x.__init__(...) initializes x; see help(type(x)) for signature
    source code
     
    __call__(self, *args, **kw) source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)

    source code 

    x.__init__(...) initializes x; see help(type(x)) for signature

    Overrides: object.__init__
    (inherited documentation)

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.action-pysrc.html: CedarBackup2.action
    Package CedarBackup2 :: Module action

    Source Code for Module CedarBackup2.action

    # -*- coding: iso-8859-1 -*-
    # vim: set ft=python ts=3 sw=3 expandtab:
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    #              C E D A R
    #          S O L U T I O N S       "Software done right."
    #           S O F T W A R E
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
    #
    # Author   : Kenneth J. Pronovici <pronovic@ieee.org>
    # Language : Python 2 (>= 2.7)
    # Project  : Cedar Backup, release 2
    # Purpose  : Provides implementation of various backup-related actions.
    #
    # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

    ########################################################################
    # Module documentation
    ########################################################################

    """
    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code
    for the standard actions.  The code formerly in action.py was split into
    various other files in the CedarBackup2.actions package.  This mostly-empty
    file remains to preserve the Cedar Backup library interface.

    @author: Kenneth J. Pronovici <pronovic@ieee.org>
    """

    ########################################################################
    # Imported modules
    ########################################################################

    # pylint: disable=W0611
    from CedarBackup2.actions.collect import executeCollect
    from CedarBackup2.actions.stage import executeStage
    from CedarBackup2.actions.store import executeStore
    from CedarBackup2.actions.purge import executePurge
    from CedarBackup2.actions.rebuild import executeRebuild
    from CedarBackup2.actions.validate import executeValidate

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.OptionsConfig-class.html: CedarBackup2.config.OptionsConfig
    Package CedarBackup2 :: Module config :: Class OptionsConfig

    Class OptionsConfig

    source code

    object --+
             |
            OptionsConfig
    

    Class representing a Cedar Backup global options configuration.

    The options section is used to store global configuration options and defaults that can be applied to other sections.

    The following restrictions exist on data in this class:

    • The working directory must be an absolute path.
    • The starting day must be a day of the week in English, i.e. "monday", "tuesday", etc.
    • All of the other values must be non-empty strings if they are set to something other than None.
    • The overrides list must be a list of CommandOverride objects.
    • The hooks list must be a list of ActionHook objects.
    • The cback command must be a non-empty string.
    • Any managed action name must be a non-empty string matching ACTION_NAME_REGEX.
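
    A few of these restrictions can be spot-checked in isolation; the regular expression below is an assumption (the real pattern is ACTION_NAME_REGEX in the config module):

```python
import re

_ACTION_NAME_REGEX = r"^[a-z0-9]+\Z"  # assumed pattern; see ACTION_NAME_REGEX

def check_options_values(starting_day, working_dir, managed_actions):
    """Spot-check the starting day, working directory, and action names."""
    days = ("monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday")
    if starting_day is not None and starting_day not in days:
        raise ValueError("Starting day must be an English day of the week.")
    if working_dir is not None and not working_dir.startswith("/"):
        raise ValueError("Working directory must be an absolute path.")
    for name in (managed_actions or []):
        if not re.match(_ACTION_NAME_REGEX, name):
            raise ValueError("Invalid managed action name: %s" % name)
```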
    Instance Methods
     
    __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None)
    Constructor for the OptionsConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    addOverride(self, command, absolutePath)
    If no override currently exists for the command, add one.
    source code
     
    replaceOverride(self, command, absolutePath)
    If override currently exists for the command, replace it; otherwise add it.
    source code
     
    _setStartingDay(self, value)
    Property target used to set the starting day.
    source code
     
    _getStartingDay(self)
    Property target used to get the starting day.
    source code
     
    _setWorkingDir(self, value)
    Property target used to set the working directory.
    source code
     
    _getWorkingDir(self)
    Property target used to get the working directory.
    source code
     
    _setBackupUser(self, value)
    Property target used to set the backup user.
    source code
     
    _getBackupUser(self)
    Property target used to get the backup user.
    source code
     
    _setBackupGroup(self, value)
    Property target used to set the backup group.
    source code
     
    _getBackupGroup(self)
    Property target used to get the backup group.
    source code
     
    _setRcpCommand(self, value)
    Property target used to set the rcp command.
    source code
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
    source code
     
    _setRshCommand(self, value)
    Property target used to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target used to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setOverrides(self, value)
    Property target used to set the command path overrides list.
    source code
     
    _getOverrides(self)
    Property target used to get the command path overrides list.
    source code
     
    _setHooks(self, value)
    Property target used to set the pre- and post-action hooks list.
    source code
     
    _getHooks(self)
    Property target used to get the command path hooks list.
    source code
     
    _setManagedActions(self, value)
    Property target used to set the managed actions list.
    source code
     
    _getManagedActions(self)
    Property target used to get the managed actions list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      startingDay
    Day that starts the week.
      workingDir
    Working (temporary) directory to use for backups.
      backupUser
    Effective user that backups should run as.
      backupGroup
    Effective group that backups should run as.
      rcpCommand
    Default rcp-compatible copy command for staging.
      rshCommand
    Default rsh-compatible command to use for remote shells.
      overrides
    List of configured command path overrides, if any.
      cbackCommand
    Default cback-compatible command to use on managed remote peers.
      hooks
    List of configured pre- and post-action hooks.
      managedActions
    Default set of actions that are managed on remote peers.

    Inherited from object: __class__

    Method Details

    __init__(self, startingDay=None, workingDir=None, backupUser=None, backupGroup=None, rcpCommand=None, overrides=None, hooks=None, rshCommand=None, cbackCommand=None, managedActions=None)
    (Constructor)

    source code 

    Constructor for the OptionsConfig class.

    Parameters:
    • startingDay - Day that starts the week.
    • workingDir - Working (temporary) directory to use for backups.
    • backupUser - Effective user that backups should run as.
    • backupGroup - Effective group that backups should run as.
    • rcpCommand - Default rcp-compatible copy command for staging.
    • rshCommand - Default rsh-compatible command to use for remote shells.
    • cbackCommand - Default cback-compatible command to use on managed remote peers.
    • overrides - List of configured command path overrides, if any.
    • hooks - List of configured pre- and post-action hooks.
    • managedActions - Default set of actions that are managed on remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of the comparison operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    addOverride(self, command, absolutePath)

    source code 

    If no override currently exists for the command, add one.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.
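    The add-vs-replace distinction between addOverride() and replaceOverride() can be illustrated with a minimal sketch. This is not the CedarBackup2 implementation (which stores CommandOverride objects in a list); it models the same documented semantics with a plain dictionary:

    ```python
    def add_override(overrides, command, absolute_path):
        """Add an override only if none currently exists for the command."""
        if command not in overrides:
            overrides[command] = absolute_path

    def replace_override(overrides, command, absolute_path):
        """Replace an existing override for the command, or add one if none exists."""
        overrides[command] = absolute_path
    ```

    With this model, calling add_override() a second time for the same command is a no-op, while replace_override() always wins.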

    replaceOverride(self, command, absolutePath)

    source code 

    If override currently exists for the command, replace it; otherwise add it.

    Parameters:
    • command - Name of command to be overridden.
    • absolutePath - Absolute path of the overridden command.

    _setStartingDay(self, value)

    source code 

    Property target used to set the starting day. If it is not None, the value must be a valid English day of the week, one of "monday", "tuesday", "wednesday", etc.

    Raises:
    • ValueError - If the value is not a valid day of the week.
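    The property-target pattern used throughout this class (a private setter that validates, paired with a private getter) can be sketched as follows. This is an illustrative example of the documented contract for startingDay, not the actual CedarBackup2 source:

    ```python
    # Valid values per the documented contract: a lowercase English day name, or None.
    VALID_DAYS = ["monday", "tuesday", "wednesday", "thursday",
                  "friday", "saturday", "sunday"]

    class StartingDayExample(object):
        """Illustrative property target following the documented validation rules."""

        def __init__(self):
            self._startingDay = None

        def _setStartingDay(self, value):
            # None is allowed; anything else must be a valid day of the week.
            if value is not None and value not in VALID_DAYS:
                raise ValueError("Starting day must be a valid day of the week.")
            self._startingDay = value

        def _getStartingDay(self):
            return self._startingDay

        startingDay = property(_getStartingDay, _setStartingDay)
    ```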

    _setWorkingDir(self, value)

    source code 

    Property target used to set the working directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setBackupUser(self, value)

    source code 

    Property target used to set the backup user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setBackupGroup(self, value)

    source code 

    Property target used to set the backup group. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target used to set the rcp command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)

    source code 

    Property target used to set the rsh command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target used to set the cback command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setOverrides(self, value)

    source code 

    Property target used to set the command path overrides list. Either the value must be None or each element must be a CommandOverride.

    Raises:
    • ValueError - If an element is not a CommandOverride.

    _setHooks(self, value)

    source code 

    Property target used to set the pre- and post-action hooks list. Either the value must be None or each element must be an ActionHook.

    Raises:
    • ValueError - If an element is not an ActionHook.

    _setManagedActions(self, value)

    source code 

    Property target used to set the managed actions list.


    Property Details [hide private]

    startingDay

    Day that starts the week.

    Get Method:
    _getStartingDay(self) - Property target used to get the starting day.
    Set Method:
    _setStartingDay(self, value) - Property target used to set the starting day.

    workingDir

    Working (temporary) directory to use for backups.

    Get Method:
    _getWorkingDir(self) - Property target used to get the working directory.
    Set Method:
    _setWorkingDir(self, value) - Property target used to set the working directory.

    backupUser

    Effective user that backups should run as.

    Get Method:
    _getBackupUser(self) - Property target used to get the backup user.
    Set Method:
    _setBackupUser(self, value) - Property target used to set the backup user.

    backupGroup

    Effective group that backups should run as.

    Get Method:
    _getBackupGroup(self) - Property target used to get the backup group.
    Set Method:
    _setBackupGroup(self, value) - Property target used to set the backup group.

    rcpCommand

    Default rcp-compatible copy command for staging.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target used to set the rcp command.

    rshCommand

    Default rsh-compatible command to use for remote shells.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target used to set the rsh command.

    overrides

    List of configured command path overrides, if any.

    Get Method:
    _getOverrides(self) - Property target used to get the command path overrides list.
    Set Method:
    _setOverrides(self, value) - Property target used to set the command path overrides list.

    cbackCommand

    Default cback-compatible command to use on managed remote peers.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target used to set the cback command.

    hooks

    List of configured pre- and post-action hooks.

    Get Method:
    _getHooks(self) - Property target used to get the pre- and post-action hooks list.
    Set Method:
    _setHooks(self, value) - Property target used to set the pre- and post-action hooks list.

    managedActions

    Default set of actions that are managed on remote peers.

    Get Method:
    _getManagedActions(self) - Property target used to get the managed actions list.
    Set Method:
    _setManagedActions(self, value) - Property target used to set the managed actions list.

    CedarBackup2.extend.split
    Package CedarBackup2 :: Package extend :: Module split

    Module split

    source code

    Provides an extension to split up large files in staging directories.

    When this extension is executed, it will look through the configured Cedar Backup staging directory for files exceeding a specified size limit, and split them into smaller files using the 'split' utility. Any directory which has already been split (as indicated by the cback.split file) will be ignored.

    This extension requires a new configuration section <split> and is intended to be run immediately after the standard stage action or immediately before the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes [hide private]
      SplitConfig
    Class representing split configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions [hide private]
     
    executeAction(configPath, options, config)
    Executes the split backup action.
    source code
     
    _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup)
    Splits large files in a daily staging directory.
    source code
     
    _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False)
    Splits the source file into chunks of the indicated size.
    source code
    Variables [hide private]
      logger = logging.getLogger("CedarBackup2.log.extend.split")
      SPLIT_COMMAND = ['split']
      SPLIT_INDICATOR = 'cback.split'
      __package__ = 'CedarBackup2.extend'
    Function Details [hide private]

    executeAction(configPath, options, config)

    source code 

    Executes the split backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _splitDailyDir(dailyDir, sizeLimit, splitSize, backupUser, backupGroup)

    source code 

    Splits large files in a daily staging directory.

    Files that match INDICATOR_PATTERNS (i.e. "cback.store", "cback.stage", etc.) are assumed to be indicator files and are ignored. All other files are split.

    Parameters:
    • dailyDir - Daily directory to split
    • sizeLimit - Size limit, in bytes
    • splitSize - Split size, in bytes
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    Raises:
    • ValueError - If the daily staging directory does not exist.

    _splitFile(sourcePath, splitSize, backupUser, backupGroup, removeSource=False)

    source code 

    Splits the source file into chunks of the indicated size.

    The split files will be owned by the indicated backup user and group. If removeSource is True, then the source file will be removed after it is successfully split.

    Parameters:
    • sourcePath - Absolute path of the source file to split
    • splitSize - Split size, in bytes
    • backupUser - User that target files should be owned by
    • backupGroup - Group that target files should be owned by
    • removeSource - Indicates whether to remove the source file
    Raises:
    • IOError - If there is a problem accessing, splitting or removing the source file.
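    The behavior described for _splitFile() can be sketched in pure Python. This is an illustrative analogue only: the real extension invokes the external 'split' utility and also sets ownership of the resulting files to the backup user and group, which this sketch omits. The chunk-naming scheme here is an assumption for illustration:

    ```python
    import os

    def split_file(source_path, split_size, remove_source=False):
        """Write sequential chunks of at most split_size bytes, mirroring the
        documented _splitFile() contract; optionally remove the source file
        once it has been successfully split."""
        chunks = []
        with open(source_path, "rb") as source:
            index = 0
            while True:
                data = source.read(split_size)
                if not data:
                    break
                chunk_path = "%s_%05d" % (source_path, index)  # hypothetical naming
                with open(chunk_path, "wb") as chunk:
                    chunk.write(data)
                chunks.append(chunk_path)
                index += 1
        if remove_source:
            os.remove(source_path)
        return chunks
    ```

    For a 10-byte file split at 4 bytes, this produces three chunks of 4, 4, and 2 bytes.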

    CedarBackup2.writers.util.IsoImage
    Package CedarBackup2 :: Package writers :: Module util :: Class IsoImage

    Class IsoImage

    source code

    object --+
             |
            IsoImage
    

    Represents an ISO filesystem image.

    Summary

    This object represents an ISO 9660 filesystem image. It is implemented in terms of the mkisofs program, which has been ported to many operating systems and platforms. A "sensible subset" of the mkisofs functionality is made available through the public interface, allowing callers to set a variety of basic options such as publisher id, application id, etc. as well as specify exactly which files and directories they want included in their image.

    By default, the image is created using the Rock Ridge protocol (using the -r option to mkisofs) because Rock Ridge discs are generally more useful on UN*X filesystems than standard ISO 9660 images. However, callers can fall back to the default mkisofs functionality by setting the useRockRidge instance variable to False. Note, however, that this option is not well-tested.

    Where Files and Directories are Placed in the Image

    Although this class is implemented in terms of the mkisofs program, its standard "image contents" semantics are slightly different than the original mkisofs semantics. The difference is that files and directories are added to the image with some additional information about their source directory kept intact.

    As an example, suppose you add the file /etc/profile to your image and you do not configure a graft point. The file /profile will be created in the image. The behavior for directories is similar. For instance, suppose that you add /etc/X11 to the image and do not configure a graft point. In this case, the directory /X11 will be created in the image, even if the original /etc/X11 directory is empty. This behavior differs from the standard mkisofs behavior!

    If a graft point is configured, it will be used to modify the point at which a file or directory is added into an image. Using the examples from above, let's assume you set a graft point of base when adding /etc/profile and /etc/X11 to your image. In this case, the file /base/profile and the directory /base/X11 would be added to the image.

    I feel that this behavior is more consistent than the original mkisofs behavior. However, to be fair, it is not quite as flexible, and some users might not like it. For this reason, the contentsOnly parameter to the addEntry method can be used to revert to the original behavior if desired.
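    The placement rules above can be summarized in a small sketch that computes where an entry lands in the image. This is an illustrative model of the documented semantics, not code from the class itself:

    ```python
    import os

    def image_location(path, graft_point=None, contents_only=False):
        """Model of the documented graft-point semantics: by default an entry
        keeps its basename under the image root (or under the graft point);
        with contents_only, standard mkisofs behavior applies and only a
        directory's contents are placed at the graft point."""
        base = "/" if graft_point is None else "/" + graft_point.strip("/")
        if contents_only:
            return base  # directory contents land directly at the graft point
        return os.path.join(base, os.path.basename(path.rstrip("/")))
    ```

    This reproduces the examples in the text: /etc/profile becomes /profile with no graft point, and /base/profile with a graft point of base.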

    Instance Methods [hide private]
     
    __init__(self, device=None, boundaries=None, graftPoint=None)
    Initializes an empty ISO image object.
    source code
     
    addEntry(self, path, graftPoint=None, override=False, contentsOnly=False)
    Adds an individual file or directory into the ISO image.
    source code
     
    getEstimatedSize(self)
    Returns the estimated size (in bytes) of the ISO image.
    source code
     
    _getEstimatedSize(self, entries)
    Returns the estimated size (in bytes) for the passed-in entries dictionary.
    source code
     
    writeImage(self, imagePath)
    Writes this image to disk using the image path.
    source code
     
    _buildGeneralArgs(self)
    Builds a list of general arguments to be passed to a mkisofs command.
    source code
     
    _buildSizeArgs(self, entries)
    Builds a list of arguments to be passed to a mkisofs command.
    source code
     
    _buildWriteArgs(self, entries, imagePath)
    Builds a list of arguments to be passed to a mkisofs command.
    source code
     
    _setDevice(self, value)
    Property target used to set the device value.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _setBoundaries(self, value)
    Property target used to set the boundaries tuple.
    source code
     
    _getBoundaries(self)
    Property target used to get the boundaries value.
    source code
     
    _setGraftPoint(self, value)
    Property target used to set the graft point.
    source code
     
    _getGraftPoint(self)
    Property target used to get the graft point.
    source code
     
    _setUseRockRidge(self, value)
    Property target used to set the use RockRidge flag.
    source code
     
    _getUseRockRidge(self)
    Property target used to get the use RockRidge flag.
    source code
     
    _setApplicationId(self, value)
    Property target used to set the application id.
    source code
     
    _getApplicationId(self)
    Property target used to get the application id.
    source code
     
    _setBiblioFile(self, value)
    Property target used to set the biblio file.
    source code
     
    _getBiblioFile(self)
    Property target used to get the biblio file.
    source code
     
    _setPublisherId(self, value)
    Property target used to set the publisher id.
    source code
     
    _getPublisherId(self)
    Property target used to get the publisher id.
    source code
     
    _setPreparerId(self, value)
    Property target used to set the preparer id.
    source code
     
    _getPreparerId(self)
    Property target used to get the preparer id.
    source code
     
    _setVolumeId(self, value)
    Property target used to set the volume id.
    source code
     
    _getVolumeId(self)
    Property target used to get the volume id.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods [hide private]
     
    _buildDirEntries(entries)
    Uses an entries dictionary to build a list of directory locations for use by mkisofs.
    source code
    Properties [hide private]
      device
    Device that image will be written to (device path or SCSI id).
      boundaries
    Session boundaries as required by mkisofs.
      graftPoint
    Default image-wide graft point (see addEntry for details).
      useRockRidge
    Indicates whether to use RockRidge (default is True).
      applicationId
    Optionally specifies the ISO header application id value.
      biblioFile
    Optionally specifies the ISO bibliographic file name.
      publisherId
    Optionally specifies the ISO header publisher id value.
      preparerId
    Optionally specifies the ISO header preparer id value.
      volumeId
    Optionally specifies the ISO header volume id value.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, device=None, boundaries=None, graftPoint=None)
    (Constructor)

    source code 

    Initializes an empty ISO image object.

    Only the most commonly-used configuration items can be set using this constructor. If you have a need to change the others, do so immediately after creating your object.

    The device and boundaries values are both required in order to write multisession discs. If either is missing or None, a multisession disc will not be written. The boundaries tuple is in terms of ISO sectors, as built by an image writer class and returned in a writer.MediaCapacity object.

    Parameters:
    • device (Either be a filesystem path or a SCSI address) - Name of the device that the image will be written to
    • boundaries (Tuple (last_sess_start,next_sess_start) as returned from cdrecord -msinfo, or None) - Session boundaries as required by mkisofs
    • graftPoint (String representing a graft point path (see addEntry).) - Default graft point for this image.
    Overrides: object.__init__

    addEntry(self, path, graftPoint=None, override=False, contentsOnly=False)

    source code 

    Adds an individual file or directory into the ISO image.

    The path must exist and must be a file or a directory. By default, the entry will be placed into the image at the root directory, but this behavior can be overridden using the graftPoint parameter or instance variable.

    You can use the contentsOnly behavior to revert to the "original" mkisofs behavior for adding directories, which is to add only the items within the directory, and not the directory itself.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    • override (Boolean true/false) - Override an existing entry with the same path.
    • contentsOnly (Boolean true/false) - Add directory contents only (standard mkisofs behavior).
    Raises:
    • ValueError - If path is not a file or directory, or does not exist.
    • ValueError - If the path has already been added, and override is not set.
    • ValueError - If a path cannot be encoded properly.
    Notes:
    • Things get odd if you try to add a directory to an image that will be written to a multisession disc, and the same directory already exists in an earlier session on that disc. Not all of the data gets written. You really wouldn't want to do this anyway, I guess.
    • An exception will be thrown if the path has already been added to the image, unless the override parameter is set to True.
    • The method's graftPoint parameter overrides the object-wide instance variable. If neither the method parameter nor the object-wide value is set, the path will be written at the image root. The graft point behavior is determined by the value in effect at the time this method is called, so you must set the object-wide value before calling this method for the first time, or your image may not be consistent.
    • You cannot use the local graftPoint parameter to "turn off" an object-wide instance variable by setting it to None. Python's default argument functionality buys us a lot, but it can't make this method psychic. :)

    getEstimatedSize(self)

    source code 

    Returns the estimated size (in bytes) of the ISO image.

    This is implemented via the -print-size option to mkisofs, so it might take a bit of time to execute. However, the result is as accurate as we can get, since it takes into account all of the ISO overhead, the true cost of directories in the structure, etc, etc.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If there are no filesystem entries in the image

    _getEstimatedSize(self, entries)

    source code 

    Returns the estimated size (in bytes) for the passed-in entries dictionary.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.

    writeImage(self, imagePath)

    source code 

    Writes this image to disk using the image path.

    Parameters:
    • imagePath (String representing a path on disk) - Path to write image out as
    Raises:
    • IOError - If there is an error writing the image to disk.
    • ValueError - If there are no filesystem entries in the image
    • ValueError - If a path cannot be encoded properly.

    _buildDirEntries(entries)
    Static Method

    source code 

    Uses an entries dictionary to build a list of directory locations for use by mkisofs.

    We build a list of entries that can be passed to mkisofs. Each entry is either raw (if no graft point was configured) or in graft-point form as described above (if a graft point was configured). The dictionary keys are the path names, and the values are the graft points, if any.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    Returns:
    List of directory locations for use by mkisofs
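    The entries-to-locations conversion can be sketched as follows. This is an assumption-laden illustration: the exact graft-point syntax emitted by the real method is not spelled out here, so this sketch uses mkisofs's "graft/=path" pathspec form as a plausible rendering:

    ```python
    def build_dir_entries(entries):
        """Convert an entries dictionary (path -> optional graft point) into a
        list of mkisofs directory locations: raw paths when no graft point is
        configured, 'graft/=path' pathspecs otherwise (assumed syntax)."""
        result = []
        for path, graft_point in sorted(entries.items()):
            if graft_point is None:
                result.append(path)
            else:
                result.append("%s/=%s" % (graft_point.strip("/"), path))
        return result
    ```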

    _buildGeneralArgs(self)

    source code 

    Builds a list of general arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildSizeArgs(self, entries)

    source code 

    Builds a list of arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. The command will be built to just return size output (a simple count of sectors via the -print-size option), rather than an image file on disk.

    By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildWriteArgs(self, entries, imagePath)

    source code 

    Builds a list of arguments to be passed to a mkisofs command.

    The various instance variables (applicationId, etc.) are filled into the list of arguments if they are set. The command will be built to write an image to disk.

    By default, we will build a RockRidge disc. If you decide to change this, think hard about whether you know what you're doing. This option is not well-tested.

    Parameters:
    • entries - Dictionary of image entries (i.e. self.entries)
    • imagePath (String representing a path on disk) - Path to write image out as
    Returns:
    List suitable for passing to util.executeCommand as args.

    _setDevice(self, value)

    source code 

    Property target used to set the device value. If not None, the value can be either an absolute path or a SCSI id.

    Raises:
    • ValueError - If the value is not valid

    _setBoundaries(self, value)

    source code 

    Property target used to set the boundaries tuple. If not None, the value must be a tuple of two integers.

    Raises:
    • ValueError - If the tuple values are not integers.
    • IndexError - If the tuple does not contain enough elements.

    _setGraftPoint(self, value)

    source code 

    Property target used to set the graft point. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setUseRockRidge(self, value)

    source code 

    Property target used to set the use RockRidge flag. No validations, but we normalize the value to True or False.

    _setApplicationId(self, value)

    source code 

    Property target used to set the application id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setBiblioFile(self, value)

    source code 

    Property target used to set the biblio file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setPublisherId(self, value)

    source code 

    Property target used to set the publisher id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setPreparerId(self, value)

    source code 

    Property target used to set the preparer id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setVolumeId(self, value)

    source code 

    Property target used to set the volume id. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    Property Details [hide private]

    device

    Device that image will be written to (device path or SCSI id).

    Get Method:
    _getDevice(self) - Property target used to get the device value.
    Set Method:
    _setDevice(self, value) - Property target used to set the device value.

    boundaries

    Session boundaries as required by mkisofs.

    Get Method:
    _getBoundaries(self) - Property target used to get the boundaries value.
    Set Method:
    _setBoundaries(self, value) - Property target used to set the boundaries tuple.

    graftPoint

    Default image-wide graft point (see addEntry for details).

    Get Method:
    _getGraftPoint(self) - Property target used to get the graft point.
    Set Method:
    _setGraftPoint(self, value) - Property target used to set the graft point.

    useRockRidge

    Indicates whether to use RockRidge (default is True).

    Get Method:
    _getUseRockRidge(self) - Property target used to get the use RockRidge flag.
    Set Method:
    _setUseRockRidge(self, value) - Property target used to set the use RockRidge flag.

    applicationId

    Optionally specifies the ISO header application id value.

    Get Method:
    _getApplicationId(self) - Property target used to get the application id.
    Set Method:
    _setApplicationId(self, value) - Property target used to set the application id.

    biblioFile

    Optionally specifies the ISO bibliographic file name.

    Get Method:
    _getBiblioFile(self) - Property target used to get the biblio file.
    Set Method:
    _setBiblioFile(self, value) - Property target used to set the biblio file.

    publisherId

    Optionally specifies the ISO header publisher id value.

    Get Method:
    _getPublisherId(self) - Property target used to get the publisher id.
    Set Method:
    _setPublisherId(self, value) - Property target used to set the publisher id.

    preparerId

    Optionally specifies the ISO header preparer id value.

    Get Method:
    _getPreparerId(self) - Property target used to get the preparer id.
    Set Method:
    _setPreparerId(self, value) - Property target used to set the preparer id.

    volumeId

    Optionally specifies the ISO header volume id value.

    Get Method:
    _getVolumeId(self) - Property target used to get the volume id.
    Set Method:
    _setVolumeId(self, value) - Property target used to set the volume id.

    CedarBackup2.actions
    Package CedarBackup2 :: Package actions

    Package actions

    source code

    Cedar Backup actions.

    This package contains code related to the official Cedar Backup actions (collect, stage, store, purge, rebuild, and validate).

    The action modules consist of mostly "glue" code that uses other lower-level functionality to actually implement a backup. There is one module for each high-level backup action, plus a module that provides shared constants.

    All of the public action functions implement the Cedar Backup Extension Architecture Interface, i.e. the same interface that extensions implement.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Submodules [hide private]

    Variables [hide private]
      __package__ = None
    CedarBackup2.filesystem.BackupFileList
    Package CedarBackup2 :: Module filesystem :: Class BackupFileList

    Class BackupFileList

    source code

    object --+        
             |        
          list --+    
                 |    
    FilesystemList --+
                     |
                    BackupFileList
    

    List of files to be backed up.

    A BackupFileList is a FilesystemList containing a list of files to be backed up. It only contains files, not directories (soft links are treated like files). On top of the generic functionality provided by FilesystemList, this class adds functionality to keep a hash (checksum) for each file in the list, and it also provides a method to calculate the total size of the files in the list and a way to export the list into tar form.
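    The per-file checksum functionality mentioned above can be sketched with an incremental digest, so large files need not fit in memory. The class documentation says only "an SHA digest"; SHA-1 is assumed here for illustration, and this is not the actual _generateDigest() implementation:

    ```python
    import hashlib

    def generate_digest(path):
        """Illustrative analogue of BackupFileList._generateDigest(): compute a
        digest for a file on disk, reading in fixed-size blocks so memory use
        stays constant regardless of file size."""
        digest = hashlib.sha1()  # algorithm assumed; the doc says only "SHA"
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                digest.update(block)
        return digest.hexdigest()
    ```

    A digest map built this way lets removeUnchanged() compare the current digest of each file against a saved map and drop entries that have not changed.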

    Instance Methods [hide private]
    new empty list
    __init__(self)
    Initializes a list with no configured exclusions.
    source code
     
    addDir(self, path)
    Adds a directory to the list.
    source code
     
    totalSize(self)
    Returns the total size among all files in the list.
    source code
     
    generateSizeMap(self)
    Generates a mapping from file to file size in bytes.
    source code
     
    generateDigestMap(self, stripPrefix=None)
    Generates a mapping from file to file digest.
    source code
     
    generateFitted(self, capacity, algorithm='worst_fit')
    Generates a list of items that fit in the indicated capacity.
    source code
     
    generateTarfile(self, path, mode='tar', ignore=False, flat=False)
    Creates a tar file containing the files in the list.
    source code
     
    removeUnchanged(self, digestMap, captureDigest=False)
    Removes unchanged entries from the list.
    source code
     
    generateSpan(self, capacity, algorithm='worst_fit')
    Splits the list of items into sub-lists that fit in a given capacity.
    source code
     
    _getKnapsackTable(self, capacity=None)
    Converts the list into the form needed by the knapsack algorithms.
    source code

    Inherited from FilesystemList: addDirContents, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Static Methods
     
    _generateDigest(path)
    Generates an SHA digest for a given file on disk.
    source code
     
    _getKnapsackFunction(algorithm)
    Returns a reference to the function associated with an algorithm name.
    source code
    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)

    source code 

    Initializes a list with no configured exclusions.

    Returns: new empty list
    Overrides: object.__init__

    addDir(self, path)

    source code 

    Adds a directory to the list.

    Note that this class does not allow directories to be added by themselves (a backup list contains only files). However, since links to directories are technically files, we allow them to be added.

    This method is implemented in terms of the superclass method, with one additional validation: the superclass method is only called if the passed-in path is both a directory and a link. All of the superclass's existing validations and restrictions apply.

    Parameters:
    • path (String representing a path on disk) - Directory path to be added to the list
    Returns:
    Number of items added to the list.
    Raises:
    • ValueError - If path is not a directory or does not exist.
    • ValueError - If the path could not be encoded properly.
    Overrides: FilesystemList.addDir

    totalSize(self)

    source code 

    Returns the total size of all files in the list. Only files are counted. Soft links that point at files are ignored. Entries which do not exist on disk are ignored.

    Returns:
    Total size, in bytes
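    The documented rules (only files are counted, soft links and missing entries are ignored) can be sketched with os.stat. This is an illustrative function, not the method's actual code:

```python
import os

def total_size(paths):
    """Sum sizes of regular files; skip soft links and missing entries."""
    total = 0
    for path in paths:
        if os.path.islink(path):
            continue  # soft links are ignored, even if they point at files
        if not os.path.isfile(path):
            continue  # entries which do not exist on disk are ignored
        total += os.stat(path).st_size
    return total
```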

    generateSizeMap(self)

    source code 

    Generates a mapping from file to file size in bytes. The mapping does include soft links, which are listed with size zero. Entries which do not exist on disk are ignored.

    Returns:
    Dictionary mapping file to file size

    generateDigestMap(self, stripPrefix=None)

    source code 

    Generates a mapping from file to file digest.

    Currently, the digest is an SHA hash, which should be pretty secure. In the future, this might be a different kind of hash, but we guarantee that the type of the hash will not change unless the library major version number is bumped.

    Entries which do not exist on disk are ignored.

    Soft links are ignored. We would end up generating a digest for the file that the soft link points at, which doesn't make any sense.

    If stripPrefix is passed in, then that prefix will be stripped from each key when the map is generated. This can be useful in generating two "relative" digest maps to be compared to one another.

    Parameters:
    • stripPrefix (String with any contents) - Common prefix to be stripped from paths
    Returns:
    Dictionary mapping file to digest value

    See Also: removeUnchanged
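    The stripPrefix behavior can be sketched like this (illustrative helper, not part of the library's API):

```python
def strip_prefix_keys(digest_map, prefix):
    """Return a copy of the digest map with the prefix removed from each key."""
    if not prefix:
        return dict(digest_map)
    stripped = {}
    for path, digest in digest_map.items():
        # Only strip when the key actually starts with the prefix
        key = path[len(prefix):] if path.startswith(prefix) else path
        stripped[key] = digest
    return stripped
```

    Stripping a common prefix from two maps taken at different times yields "relative" maps that can be compared key-by-key even if the collection directory moved.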

    generateFitted(self, capacity, algorithm='worst_fit')

    source code 

    Generates a list of items that fit in the indicated capacity.

    Sometimes, callers would like to include every item in a list, but are unable to because not all of the items fit in the space available. This method returns a copy of the list, containing only the items that fit in a given capacity. A copy is returned so that we don't lose any information if for some reason the fitted list is unsatisfactory.

    The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used, but you can also choose from first fit, best fit and alternate fit.

    Parameters:
    • capacity (Integer, in bytes) - Maximum total size of the files in the new list
    • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
    Returns:
    Copy of list with total size no larger than indicated capacity
    Raises:
    • ValueError - If the algorithm is invalid.

    generateTarfile(self, path, mode='tar', ignore=False, flat=False)

    source code 

    Creates a tar file containing the files in the list.

    By default, this method will create uncompressed tar files. If you pass in mode 'targz', then it will create gzipped tar files, and if you pass in mode 'tarbz2', then it will create bzipped tar files.

    The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality out-weighs the disadvantage of not being "standard".

    If you pass in flat=True, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file /tmp/something/whatever.txt would be added as just whatever.txt.

    By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. Under these circumstances, callers are advised that they might want to call removeInvalid() and then attempt to generate the tar file a second time, since the most common cause of failures is a missing file (a file that existed when the list was built, but is gone again by the time the tar file is built).

    If you want to, you can pass in ignore=True, and the method will ignore errors encountered when adding individual files to the archive (but not errors opening and closing the archive itself).

    We always attempt to remove the tar file from disk if an exception is thrown.

    Parameters:
    • path (String representing a path on disk) - Path of tar file to create on disk
    • mode (One of either 'tar', 'targz' or 'tarbz2') - Tar creation mode
    • ignore (Boolean) - Indicates whether to ignore certain errors.
    • flat (Boolean) - Creates "flat" archive by putting all items in root
    Raises:
    • ValueError - If mode is not valid
    • ValueError - If list is empty
    • ValueError - If the path could not be encoded properly.
    • TarError - If there is a problem creating the tar file
    Notes:
    • No validation is done as to whether the entries in the list are files, since only files or soft links should be in an object like this. However, to be safe, everything is explicitly added to the tar archive non-recursively so it's safe to include soft links to directories.
    • The Python tarfile module, which is used internally here, is supposed to deal properly with long filenames and links. In my testing, I have found that it appears to be able to add really long filenames to archives, but doesn't do a good job reading them back out, even out of an archive it created. Fortunately, all Cedar Backup does is add files to archives.
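    The documented semantics can be approximated with Python's standard tarfile module. This is a hedged sketch (the function name is illustrative, not the class's actual implementation), showing the GNU format, the non-recursive add, the flat option, and cleanup on failure:

```python
import os
import tarfile

def create_tarfile(path, files, mode="tar", ignore=False, flat=False):
    """Create a GNU-format tar archive from a list of file paths."""
    tar_mode = {"tar": "w", "targz": "w:gz", "tarbz2": "w:bz2"}.get(mode)
    if tar_mode is None:
        raise ValueError("Mode must be one of 'tar', 'targz' or 'tarbz2'.")
    if not files:
        raise ValueError("List is empty.")
    try:
        with tarfile.open(path, tar_mode, format=tarfile.GNU_FORMAT) as tar:
            for entry in files:
                try:
                    # Flat archives drop the directory portion of each name;
                    # recursive=False keeps links to directories safe to add.
                    arcname = os.path.basename(entry) if flat else entry
                    tar.add(entry, arcname=arcname, recursive=False)
                except OSError:
                    if not ignore:
                        raise
    except Exception:
        if os.path.exists(path):
            os.remove(path)  # attempt to remove the tar file on failure
        raise
```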

    removeUnchanged(self, digestMap, captureDigest=False)

    source code 

    Removes unchanged entries from the list.

    This method relies on a digest map as returned from generateDigestMap. For each entry in digestMap, if the entry also exists in the current list and the entry in the current list has the same digest value as in the map, the entry in the current list will be removed.

    This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from generateDigestMap at some point in time (perhaps the beginning of the week), and will save off that map using pickle or some other method. Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map.

    If captureDigest is passed in as True, then digest information will be captured for the entire list before the removal step occurs, using the same rules as in generateDigestMap. The check will involve a lookup into the complete digest map.

    If captureDigest is passed in as False, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk.

    The return value varies depending on captureDigest, as well. To preserve backwards compatibility, if captureDigest is False, then we'll just return a single value representing the number of entries removed. Otherwise, we'll return a tuple of (entries removed, digest map). The returned digest map will be in exactly the form returned by generateDigestMap.

    Parameters:
    • digestMap (Map as returned from generateDigestMap.) - Dictionary mapping file name to digest value.
    • captureDigest (Boolean) - Indicates that digest information should be captured.
    Returns:
    Results as discussed above (format varies based on arguments)

    Note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller.
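    The digest-map filtering idea can be sketched with plain dictionaries (illustrative helper, not the class's actual method):

```python
def remove_unchanged(entries, saved_digests, current_digests):
    """Filter out entries whose digest matches the saved-off map.

    entries         -- list of file paths under consideration
    saved_digests   -- mapping of path -> digest captured earlier
    current_digests -- freshly computed mapping of path -> digest
    Returns (kept entries, number of entries removed).
    """
    kept = [path for path in entries
            if path not in saved_digests
            or current_digests.get(path) != saved_digests[path]]
    removed = len(entries) - len(kept)
    return kept, removed
```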

    _generateDigest(path)
    Static Method

    source code 

    Generates an SHA digest for a given file on disk.

    The original code for this function used this simplistic implementation, which requires reading the entire file into memory at once in order to generate a digest value:

      sha.new(open(path).read()).hexdigest()
    

    Not surprisingly, this isn't an optimal solution. The "Simple file hashing" Python Cookbook recipe describes how to incrementally generate a hash value by reading in chunks of data rather than reading the file all at once. The recipe relies on the update() method of the various Python hashing algorithms.

    In my tests using a 110 MB file on CD, the original implementation requires 111 seconds. This implementation requires only 40-45 seconds, which is a pretty substantial speed-up.

    Experience shows that reading in around 4kB (4096 bytes) at a time yields the best performance. Smaller reads are quite a bit slower, and larger reads don't make much of a difference. The 4kB number makes me a little suspicious, and I think it might be related to the size of a filesystem read at the hardware level. However, I've decided to just hardcode 4096 until I have evidence that shows it's worthwhile making the read size configurable.

    Parameters:
    • path - Path to generate digest for.
    Returns:
    ASCII-safe SHA digest for the file.
    Raises:
    • OSError - If the file cannot be opened.
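    The chunked-read technique described above can be sketched with hashlib. This is illustrative, not the module's actual code, and assumes SHA-1 (the algorithm of the old sha module):

```python
import hashlib

def generate_digest(path, chunk_size=4096):
    """Incrementally compute a SHA-1 hex digest, reading 4 kB at a time."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break  # end of file reached
            digest.update(chunk)  # feed each chunk to the running hash
    return digest.hexdigest()
```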

    generateSpan(self, capacity, algorithm='worst_fit')

    source code 

    Splits the list of items into sub-lists that fit in a given capacity.

    Sometimes, callers need to split a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs.

    The fitting is done using the functions in the knapsack module. By default, the worst fit algorithm is used, but you can also choose from first fit, best fit and alternate fit.

    Parameters:
    • capacity (Integer, in bytes) - Maximum total size of the files in each new list
    • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
    Returns:
    List of SpanItem objects.
    Raises:
    • ValueError - If the algorithm is invalid.
    • ValueError - If it's not possible to fit some items

    Note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a ValueError will be raised.
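    A worst-fit span can be sketched as follows, assuming the knapsack module's worst fit behaves along these lines (this is an illustration, not the actual implementation):

```python
def span_worst_fit(size_map, capacity):
    """Split {path: size} into sub-lists, each totaling <= capacity."""
    bins = []  # each bin is [remaining_capacity, [paths]]
    # Place larger items first; worst fit picks the bin with the most room.
    for path, size in sorted(size_map.items(), key=lambda i: i[1], reverse=True):
        if size > capacity:
            raise ValueError("Item %s cannot fit in the capacity" % path)
        candidates = [b for b in bins if b[0] >= size]
        if not candidates:
            bins.append([capacity - size, [path]])  # start a new sub-list
        else:
            best = max(candidates, key=lambda b: b[0])
            best[0] -= size
            best[1].append(path)
    return [paths for _, paths in bins]
```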

    _getKnapsackTable(self, capacity=None)

    source code 

    Converts the list into the form needed by the knapsack algorithms.

    Returns:
    Dictionary mapping file name to tuple of (file path, file size).

    _getKnapsackFunction(algorithm)
    Static Method

    source code 

    Returns a reference to the function associated with an algorithm name. The algorithm name must be one of "first_fit", "best_fit", "worst_fit" or "alternate_fit".

    Parameters:
    • algorithm - Name of the algorithm
    Returns:
    Reference to knapsack function
    Raises:
    • ValueError - If the algorithm name is unknown.


    Module peer


    Classes

    LocalPeer
    RemotePeer

    Variables

    DEF_CBACK_COMMAND
    DEF_COLLECT_INDICATOR
    DEF_RCP_COMMAND
    DEF_RSH_COMMAND
    DEF_STAGE_INDICATOR
    SU_COMMAND
    __package__
    logger

    CedarBackup2.config.StageConfig
    Package CedarBackup2 :: Module config :: Class StageConfig

    Class StageConfig

    source code

    object --+
             |
            StageConfig
    

    Class representing a Cedar Backup stage configuration.

    The following restrictions exist on data in this class:

    • The target directory must be an absolute path
    • The list of local peers must contain only LocalPeer objects
    • The list of remote peers must contain only RemotePeer objects

    Note: Lists within this class are "unordered" for equality comparisons.
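    The absolute-path restriction above can be illustrated with a minimal property sketch (the class name is hypothetical; the real StageConfig also validates peer lists and path encoding):

```python
import os

class StageConfigSketch(object):
    """Minimal illustration of StageConfig's targetDir validation."""

    def __init__(self, targetDir=None):
        self.targetDir = targetDir  # goes through the property setter

    def _setTargetDir(self, value):
        # None is allowed; any real value must be an absolute path.
        if value is not None and not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
        self._targetDir = value

    def _getTargetDir(self):
        return self._targetDir

    targetDir = property(_getTargetDir, _setTargetDir)
```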

    Instance Methods
     
    __init__(self, targetDir=None, localPeers=None, remotePeers=None)
    Constructor for the StageConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    hasPeers(self)
    Indicates whether any peers are filled into this object.
    source code
     
    _setTargetDir(self, value)
    Property target used to set the target directory.
    source code
     
    _getTargetDir(self)
    Property target used to get the target directory.
    source code
     
    _setLocalPeers(self, value)
    Property target used to set the local peers list.
    source code
     
    _getLocalPeers(self)
    Property target used to get the local peers list.
    source code
     
    _setRemotePeers(self, value)
    Property target used to set the remote peers list.
    source code
     
    _getRemotePeers(self)
    Property target used to get the remote peers list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      targetDir
    Directory to stage files into, by peer name.
      localPeers
    List of local peers.
      remotePeers
    List of remote peers.

    Inherited from object: __class__

    Method Details

    __init__(self, targetDir=None, localPeers=None, remotePeers=None)
    (Constructor)

    source code 

    Constructor for the StageConfig class.

    Parameters:
    • targetDir - Directory to stage files into, by peer name.
    • localPeers - List of local peers.
    • remotePeers - List of remote peers.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    hasPeers(self)

    source code 

    Indicates whether any peers are filled into this object.

    Returns:
    Boolean true if any local or remote peers are filled in, false otherwise.

    _setTargetDir(self, value)

    source code 

    Property target used to set the target directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setLocalPeers(self, value)

    source code 

    Property target used to set the local peers list. Either the value must be None or each element must be a LocalPeer.

    Raises:
    • ValueError - If the value is not a LocalPeer

    _setRemotePeers(self, value)

    source code 

    Property target used to set the remote peers list. Either the value must be None or each element must be a RemotePeer.

    Raises:
    • ValueError - If the value is not a RemotePeer

    Property Details

    targetDir

    Directory to stage files into, by peer name.

    Get Method:
    _getTargetDir(self) - Property target used to get the target directory.
    Set Method:
    _setTargetDir(self, value) - Property target used to set the target directory.

    localPeers

    List of local peers.

    Get Method:
    _getLocalPeers(self) - Property target used to get the local peers list.
    Set Method:
    _setLocalPeers(self, value) - Property target used to set the local peers list.

    remotePeers

    List of remote peers.

    Get Method:
    _getRemotePeers(self) - Property target used to get the remote peers list.
    Set Method:
    _setRemotePeers(self, value) - Property target used to set the remote peers list.

    [ Module Hierarchy | Class Hierarchy ]

    Class Hierarchy

    Package CedarBackup2 :: Module config

    Source Code for Module CedarBackup2.config

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Cedar Backup, release 2 
      30  # Purpose  : Provides configuration-related objects. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides configuration-related objects. 
      40   
      41  Summary 
      42  ======= 
      43   
      44     Cedar Backup stores all of its configuration in an XML document typically 
      45     called C{cback.conf}.  The standard location for this document is in 
      46     C{/etc}, but users can specify a different location if they want to. 
      47   
      48     The C{Config} class is a Python object representation of a Cedar Backup XML 
      49     configuration file.  The representation is two-way: XML data can be used to 
       50     create a C{Config} object, and then changes to the object can be propagated 
      51     back to disk.  A C{Config} object can even be used to create a configuration 
      52     file from scratch programmatically. 
      53   
      54     The C{Config} class is intended to be the only Python-language interface to 
      55     Cedar Backup configuration on disk.  Cedar Backup will use the class as its 
      56     internal representation of configuration, and applications external to Cedar 
      57     Backup itself (such as a hypothetical third-party configuration tool written 
      58     in Python or a third party extension module) should also use the class when 
      59     they need to read and write configuration files. 
      60   
      61  Backwards Compatibility 
      62  ======================= 
      63   
      64     The configuration file format has changed between Cedar Backup 1.x and Cedar 
      65     Backup 2.x.  Any Cedar Backup 1.x configuration file is also a valid Cedar 
      66     Backup 2.x configuration file.  However, it doesn't work to go the other 
       67     direction, as the 2.x configuration files contain additional configuration 
       68     that is not accepted by older versions of the software. 
      69   
      70  XML Configuration Structure 
      71  =========================== 
      72   
      73     A C{Config} object can either be created "empty", or can be created based on 
      74     XML input (either in the form of a string or read in from a file on disk). 
      75     Generally speaking, the XML input I{must} result in a C{Config} object which 
      76     passes the validations laid out below in the I{Validation} section. 
      77   
      78     An XML configuration file is composed of seven sections: 
      79   
      80        - I{reference}: specifies reference information about the file (author, revision, etc) 
      81        - I{extensions}: specifies mappings to Cedar Backup extensions (external code) 
      82        - I{options}: specifies global configuration options 
      83        - I{peers}: specifies the set of peers in a master's backup pool 
      84        - I{collect}: specifies configuration related to the collect action 
      85        - I{stage}: specifies configuration related to the stage action 
      86        - I{store}: specifies configuration related to the store action 
      87        - I{purge}: specifies configuration related to the purge action 
      88   
      89     Each section is represented by an class in this module, and then the overall 
      90     C{Config} class is a composition of the various other classes. 
      91   
      92     Any configuration section that is missing in the XML document (or has not 
      93     been filled into an "empty" document) will just be set to C{None} in the 
      94     object representation.  The same goes for individual fields within each 
      95     configuration section.  Keep in mind that the document might not be 
      96     completely valid if some sections or fields aren't filled in - but that 
      97     won't matter until validation takes place (see the I{Validation} section 
      98     below). 
      99   
     100  Unicode vs. String Data 
     101  ======================= 
     102   
     103     By default, all string data that comes out of XML documents in Python is 
     104     unicode data (i.e. C{u"whatever"}).  This is fine for many things, but when 
     105     it comes to filesystem paths, it can cause us some problems.  We really want 
     106     strings to be encoded in the filesystem encoding rather than being unicode. 
     107     So, most elements in configuration which represent filesystem paths are 
      108     converted to plain strings using L{util.encodePath}.  The main exception is 
     109     the various C{absoluteExcludePath} and C{relativeExcludePath} lists.  These 
     110     are I{not} converted, because they are generally only used for filtering, 
     111     not for filesystem operations. 
     112   
     113  Validation 
     114  ========== 
     115   
     116     There are two main levels of validation in the C{Config} class and its 
     117     children.  The first is field-level validation.  Field-level validation 
     118     comes into play when a given field in an object is assigned to or updated. 
     119     We use Python's C{property} functionality to enforce specific validations on 
     120     field values, and in some places we even use customized list classes to 
     121     enforce validations on list members.  You should expect to catch a 
     122     C{ValueError} exception when making assignments to configuration class 
     123     fields. 
     124   
     125     The second level of validation is post-completion validation.  Certain 
     126     validations don't make sense until a document is fully "complete".  We don't 
     127     want these validations to apply all of the time, because it would make 
     128     building up a document from scratch a real pain.  For instance, we might 
     129     have to do things in the right order to keep from throwing exceptions, etc. 
     130   
     131     All of these post-completion validations are encapsulated in the 
     132     L{Config.validate} method.  This method can be called at any time by a 
     133     client, and will always be called immediately after creating a C{Config} 
     134     object from XML data and before exporting a C{Config} object to XML.  This 
     135     way, we get decent ease-of-use but we also don't accept or emit invalid 
     136     configuration files. 
     137   
     138     The L{Config.validate} implementation actually takes two passes to 
     139     completely validate a configuration document.  The first pass at validation 
     140     is to ensure that the proper sections are filled into the document.  There 
     141     are default requirements, but the caller has the opportunity to override 
     142     these defaults. 
     143   
     144     The second pass at validation ensures that any filled-in section contains 
     145     valid data.  Any section which is not set to C{None} is validated according 
     146     to the rules for that section (see below). 
     147   
     148     I{Reference Validations} 
     149   
     150     No validations. 
     151   
     152     I{Extensions Validations} 
     153   
     154     The list of actions may be either C{None} or an empty list C{[]} if desired. 
     155     Each extended action must include a name, a module and a function.  Then, an 
     156     extended action must include either an index or dependency information. 
     157     Which one is required depends on which order mode is configured. 
     158   
     159     I{Options Validations} 
     160   
     161     All fields must be filled in except the rsh command.  The rcp and rsh 
     162     commands are used as default values for all remote peers.  Remote peers can 
     163     also rely on the backup user as the default remote user name if they choose. 
     164   
     165     I{Peers Validations} 
     166   
     167     Local peers must be completely filled in, including both name and collect 
     168     directory.  Remote peers must also fill in the name and collect directory, 
     169     but can leave the remote user and rcp command unset.  In this case, the 
     170     remote user is assumed to match the backup user from the options section and 
     171     rcp command is taken directly from the options section. 
     172   
     173     I{Collect Validations} 
     174   
     175     The target directory must be filled in.  The collect mode, archive mode and 
     176     ignore file are all optional.  The list of absolute paths to exclude and 
     177     patterns to exclude may be either C{None} or an empty list C{[]} if desired. 
     178   
     179     Each collect directory entry must contain an absolute path to collect, and 
     180     then must either be able to take collect mode, archive mode and ignore file 
     181     configuration from the parent C{CollectConfig} object, or must set each 
     182     value on its own.  The list of absolute paths to exclude, relative paths to 
     183     exclude and patterns to exclude may be either C{None} or an empty list C{[]} 
     184     if desired.  Any list of absolute paths to exclude or patterns to exclude 
     185     will be combined with the same list in the C{CollectConfig} object to make 
     186     the complete list for a given directory. 
     187   
     188     I{Stage Validations} 
     189   
     190     The target directory must be filled in.  There must be at least one peer 
     191     (remote or local) between the two lists of peers.  A list with no entries 
     192     can be either C{None} or an empty list C{[]} if desired. 
     193   
     194     If a set of peers is provided, this configuration completely overrides 
     195     configuration in the peers configuration section, and the same validations 
     196     apply. 
     197   
     198     I{Store Validations} 
     199   
     200     The device type and drive speed are optional, and all other values are 
     201     required (missing booleans will be set to defaults, which is OK). 
     202   
     203     The image writer functionality in the C{writer} module is supposed to be 
     204     able to handle a device speed of C{None}.  Any caller which needs a "real" 
     205     (non-C{None}) value for the device type can use C{DEFAULT_DEVICE_TYPE}, 
     206     which is guaranteed to be sensible. 
     207   
     208     I{Purge Validations} 
     209   
     210     The list of purge directories may be either C{None} or an empty list C{[]} 
     211     if desired.  All purge directories must contain a path and a retain days 
     212     value. 
     213   
     214  @sort: ActionDependencies, ActionHook, PreActionHook, PostActionHook, 
     215         ExtendedAction, CommandOverride, CollectFile, CollectDir, PurgeDir, LocalPeer, 
     216         RemotePeer, ReferenceConfig, ExtensionsConfig, OptionsConfig, PeersConfig, 
     217         CollectConfig, StageConfig, StoreConfig, PurgeConfig, Config, 
     218         DEFAULT_DEVICE_TYPE, DEFAULT_MEDIA_TYPE, 
     219         VALID_DEVICE_TYPES, VALID_MEDIA_TYPES, 
     220         VALID_COLLECT_MODES, VALID_ARCHIVE_MODES, 
     221         VALID_ORDER_MODES 
     222   
     223  @var DEFAULT_DEVICE_TYPE: The default device type. 
     224  @var DEFAULT_MEDIA_TYPE: The default media type. 
     225  @var VALID_DEVICE_TYPES: List of valid device types. 
     226  @var VALID_MEDIA_TYPES: List of valid media types. 
     227  @var VALID_COLLECT_MODES: List of valid collect modes. 
     228  @var VALID_COMPRESS_MODES: List of valid compress modes. 
     229  @var VALID_ARCHIVE_MODES: List of valid archive modes. 
     230  @var VALID_ORDER_MODES: List of valid extension order modes. 
     231   
     232  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     233  """ 
     234   
     235  ######################################################################## 
     236  # Imported modules 
     237  ######################################################################## 
     238   
     239  # System modules 
     240  import os 
     241  import re 
     242  import logging 
     243   
     244  # Cedar Backup modules 
     245  from CedarBackup2.writers.util import validateScsiId, validateDriveSpeed 
     246  from CedarBackup2.util import UnorderedList, AbsolutePathList, ObjectTypeList, parseCommaSeparatedString 
     247  from CedarBackup2.util import RegexMatchList, RegexList, encodePath, checkUnique 
     248  from CedarBackup2.util import convertSize, displayBytes, UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES 
     249  from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild 
     250  from CedarBackup2.xmlutil import readStringList, readString, readInteger, readBoolean 
     251  from CedarBackup2.xmlutil import addContainerNode, addStringNode, addIntegerNode, addBooleanNode 
     252  from CedarBackup2.xmlutil import createInputDom, createOutputDom, serializeDom 
     253   
     254   
     255  ######################################################################## 
     256  # Module-wide constants and variables 
     257  ######################################################################## 
     258   
     259  logger = logging.getLogger("CedarBackup2.log.config") 
     260   
     261  DEFAULT_DEVICE_TYPE   = "cdwriter" 
     262  DEFAULT_MEDIA_TYPE    = "cdrw-74" 
     263   
     264  VALID_DEVICE_TYPES    = [ "cdwriter", "dvdwriter", ] 
     265  VALID_CD_MEDIA_TYPES  = [ "cdr-74", "cdrw-74", "cdr-80", "cdrw-80", ] 
     266  VALID_DVD_MEDIA_TYPES = [ "dvd+r", "dvd+rw", ] 
     267  VALID_MEDIA_TYPES     = VALID_CD_MEDIA_TYPES + VALID_DVD_MEDIA_TYPES 
     268  VALID_COLLECT_MODES   = [ "daily", "weekly", "incr", ] 
     269  VALID_ARCHIVE_MODES   = [ "tar", "targz", "tarbz2", ] 
     270  VALID_COMPRESS_MODES  = [ "none", "gzip", "bzip2", ] 
     271  VALID_ORDER_MODES     = [ "index", "dependency", ] 
     272  VALID_BLANK_MODES     = [ "daily", "weekly", ] 
     273  VALID_BYTE_UNITS      = [ UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, ] 
     274  VALID_FAILURE_MODES   = [ "none", "all", "daily", "weekly", ] 
     275   
     276  REWRITABLE_MEDIA_TYPES = [ "cdrw-74", "cdrw-80", "dvd+rw", ] 
     277   
     278  ACTION_NAME_REGEX     = r"^[a-z0-9]*$" 
    
    279 280 281 ######################################################################## 282 # ByteQuantity class definition 283 ######################################################################## 284 285 -class ByteQuantity(object):
286 287 """ 288 Class representing a byte quantity. 289 290 A byte quantity has both a quantity and a byte-related unit. Units are 291 maintained using the constants from util.py. If no units are provided, 292 C{UNIT_BYTES} is assumed. 293 294 The quantity is maintained internally as a string so that issues of 295 precision can be avoided. It really isn't possible to store a floating 296 point number here while being able to losslessly translate back and forth 297 between XML and object representations. (Perhaps the Python 2.4 Decimal 298 class would have been an option, but I originally wanted to stay compatible 299 with Python 2.3.) 300 301 Even though the quantity is maintained as a string, the string must represent 302 a valid, non-negative floating point number. Technically, any floating point 303 string format supported by Python is allowable. However, it does not make 304 sense to have a negative quantity of bytes in this context. 305 306 @sort: __init__, __repr__, __str__, __cmp__, quantity, units, bytes 307 """ 308
    309 - def __init__(self, quantity=None, units=None):
    310 """ 311 Constructor for the C{ByteQuantity} class. 312 313 @param quantity: Quantity of bytes, something interpretable as a float 314 @param units: Unit of bytes, one of VALID_BYTE_UNITS 315 316 @raise ValueError: If one of the values is invalid. 317 """ 318 self._quantity = None 319 self._units = None 320 self.quantity = quantity 321 self.units = units
    322
    323 - def __repr__(self):
    324 """ 325 Official string representation for class instance. 326 """ 327 return "ByteQuantity(%s, %s)" % (self.quantity, self.units)
    328
    329 - def __str__(self):
    330 """ 331 Informal string representation for class instance. 332 """ 333 return "%s" % displayBytes(self.bytes)
    334
    335 - def __cmp__(self, other):
336 """ 337 Definition of equals operator for this class. 338 Comparison is based on the number of bytes represented by each quantity. 339 @param other: Other object to compare to. 340 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 341 """ 342 if other is None: 343 return 1 344 elif isinstance(other, ByteQuantity): 345 if self.bytes != other.bytes: 346 if self.bytes < other.bytes: 347 return -1 348 else: 349 return 1 350 return 0 351 else: 352 return self.__cmp__(ByteQuantity(other, UNIT_BYTES)) # will fail if other can't be converted to float
    353
    354 - def _setQuantity(self, value):
355 """ 356 Property target used to set the quantity. 357 The value must be interpretable as a float if it is not C{None}. 358 @raise ValueError: If the value is an empty string. 359 @raise ValueError: If the value is not a valid floating point number 360 @raise ValueError: If the value is less than zero 361 """ 362 if value is None: 363 self._quantity = None 364 else: 365 try: 366 floatValue = float(value) # allow integer, float, string, etc. 367 except (TypeError, ValueError): 368 raise ValueError("Quantity must be interpretable as a float") 369 if floatValue < 0.0: 370 raise ValueError("Quantity cannot be negative.") 371 self._quantity = str(value) # keep around string
    372
    373 - def _getQuantity(self):
    374 """ 375 Property target used to get the quantity. 376 """ 377 return self._quantity
    378
    379 - def _setUnits(self, value):
    380 """ 381 Property target used to set the units value. 382 If not C{None}, the units value must be one of the values in L{VALID_BYTE_UNITS}. 383 @raise ValueError: If the value is not valid. 384 """ 385 if value is None: 386 self._units = UNIT_BYTES 387 else: 388 if value not in VALID_BYTE_UNITS: 389 raise ValueError("Units value must be one of %s." % VALID_BYTE_UNITS) 390 self._units = value
    391
    392 - def _getUnits(self):
    393 """ 394 Property target used to get the units value. 395 """ 396 return self._units
    397
    398 - def _getBytes(self):
    399 """ 400 Property target used to return the byte quantity as a floating point number. 401 If there is no quantity set, then a value of 0.0 is returned. 402 """ 403 if self.quantity is not None and self.units is not None: 404 return convertSize(self.quantity, self.units, UNIT_BYTES) 405 return 0.0
    406 407 quantity = property(_getQuantity, _setQuantity, None, doc="Byte quantity, as a string") 408 units = property(_getUnits, _setUnits, None, doc="Units for byte quantity, for instance UNIT_BYTES") 409 bytes = property(_getBytes, None, None, doc="Byte quantity, as a floating point number.")
    410
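The string-backed design described in the C{ByteQuantity} docstring (store the quantity as a string for lossless XML round-trips, convert to bytes on demand) can be illustrated with a minimal Python 3 sketch. The C{SimpleByteQuantity} class, the unit constants, and the conversion factors below are simplified stand-ins, not the actual implementation from C{CedarBackup2.util}.

```python
# Simplified stand-ins for the unit constants from CedarBackup2.util.
UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES = range(4)
_FACTORS = {UNIT_BYTES: 1.0, UNIT_KBYTES: 1024.0,
            UNIT_MBYTES: 1024.0 ** 2, UNIT_GBYTES: 1024.0 ** 3}

class SimpleByteQuantity:
    def __init__(self, quantity, units=UNIT_BYTES):
        if float(quantity) < 0.0:        # validates and rejects negatives
            raise ValueError("Quantity cannot be negative.")
        self.quantity = str(quantity)    # kept as a string for lossless round-trips
        self.units = units

    @property
    def bytes(self):
        # Convert to a plain byte count only when asked.
        return float(self.quantity) * _FACTORS[self.units]

    def __eq__(self, other):
        if isinstance(other, SimpleByteQuantity):
            return self.bytes == other.bytes
        return self.bytes == float(other)  # allow comparison to plain numbers

q = SimpleByteQuantity("2.5", UNIT_GBYTES)
print(q.bytes)          # 2684354560.0
print(q == 2684354560)  # True
```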
    411 412 ######################################################################## 413 # ActionDependencies class definition 414 ######################################################################## 415 416 -class ActionDependencies(object):
    417 418 """ 419 Class representing dependencies associated with an extended action. 420 421 Execution ordering for extended actions is done in one of two ways: either by using 422 index values (lower index gets run first) or by having the extended action specify 423 dependencies in terms of other named actions. This class encapsulates the dependency 424 information for an extended action. 425 426 The following restrictions exist on data in this class: 427 428 - Any action name must be a non-empty string matching C{ACTION_NAME_REGEX} 429 430 @sort: __init__, __repr__, __str__, __cmp__, beforeList, afterList 431 """ 432
    433 - def __init__(self, beforeList=None, afterList=None):
    434 """ 435 Constructor for the C{ActionDependencies} class. 436 437 @param beforeList: List of named actions that this action must be run before 438 @param afterList: List of named actions that this action must be run after 439 440 @raise ValueError: If one of the values is invalid. 441 """ 442 self._beforeList = None 443 self._afterList = None 444 self.beforeList = beforeList 445 self.afterList = afterList
    446
    447 - def __repr__(self):
    448 """ 449 Official string representation for class instance. 450 """ 451 return "ActionDependencies(%s, %s)" % (self.beforeList, self.afterList)
    452
    453 - def __str__(self):
    454 """ 455 Informal string representation for class instance. 456 """ 457 return self.__repr__()
    458
    459 - def __cmp__(self, other):
    460 """ 461 Definition of equals operator for this class. 462 @param other: Other object to compare to. 463 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 464 """ 465 if other is None: 466 return 1 467 if self.beforeList != other.beforeList: 468 if self.beforeList < other.beforeList: 469 return -1 470 else: 471 return 1 472 if self.afterList != other.afterList: 473 if self.afterList < other.afterList: 474 return -1 475 else: 476 return 1 477 return 0
    478
    479 - def _setBeforeList(self, value):
    480 """ 481 Property target used to set the "run before" list. 482 Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. 483 @raise ValueError: If the value does not match the regular expression. 484 """ 485 if value is None: 486 self._beforeList = None 487 else: 488 try: 489 saved = self._beforeList 490 self._beforeList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") 491 self._beforeList.extend(value) 492 except Exception, e: 493 self._beforeList = saved 494 raise e
    495
    496 - def _getBeforeList(self):
    497 """ 498 Property target used to get the "run before" list. 499 """ 500 return self._beforeList
    501
    502 - def _setAfterList(self, value):
    503 """ 504 Property target used to set the "run after" list. 505 Either the value must be C{None} or each element must be a string matching ACTION_NAME_REGEX. 506 @raise ValueError: If the value does not match the regular expression. 507 """ 508 if value is None: 509 self._afterList = None 510 else: 511 try: 512 saved = self._afterList 513 self._afterList = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") 514 self._afterList.extend(value) 515 except Exception, e: 516 self._afterList = saved 517 raise e
    518
    519 - def _getAfterList(self):
    520 """ 521 Property target used to get the "run after" list. 522 """ 523 return self._afterList
    524 525 beforeList = property(_getBeforeList, _setBeforeList, None, "List of named actions that this action must be run before.") 526 afterList = property(_getAfterList, _setAfterList, None, "List of named actions that this action must be run after.")
    527
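Resolving C{beforeList}/C{afterList} pairs into a concrete execution order amounts to a topological sort. The sketch below (requires Python 3.9+ for C{graphlib}) is not Cedar Backup's actual scheduler, and the action names are invented; it only shows how the two dependency lists map onto a dependency graph.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# beforeList = actions this one must run before; afterList = must run after.
deps = {
    "collect": {"before": ["stage"], "after": []},
    "stage":   {"before": ["store"], "after": ["collect"]},
    "store":   {"before": [],        "after": ["stage"]},
}

# Build predecessor sets: afterList entries are direct predecessors, and
# each beforeList entry gains this action as one of its predecessors.
graph = {name: set(info["after"]) for name, info in deps.items()}
for name, info in deps.items():
    for later in info["before"]:
        graph.setdefault(later, set()).add(name)

order = list(TopologicalSorter(graph).static_order())
print(order)  # ['collect', 'stage', 'store']
```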
    528 529 ######################################################################## 530 # ActionHook class definition 531 ######################################################################## 532 533 -class ActionHook(object):
    534 535 """ 536 Class representing a hook associated with an action. 537 538 A hook associated with an action is a shell command to be executed either 539 before or after a named action is executed. 540 541 The following restrictions exist on data in this class: 542 543 - The action name must be a non-empty string matching C{ACTION_NAME_REGEX} 544 - The shell command must be a non-empty string. 545 546 The internal C{before} and C{after} instance variables are always set to 547 False in this parent class. 548 549 @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after 550 """ 551
    552 - def __init__(self, action=None, command=None):
    553 """ 554 Constructor for the C{ActionHook} class. 555 556 @param action: Action this hook is associated with 557 @param command: Shell command to execute 558 559 @raise ValueError: If one of the values is invalid. 560 """ 561 self._action = None 562 self._command = None 563 self._before = False 564 self._after = False 565 self.action = action 566 self.command = command
    567
    568 - def __repr__(self):
    569 """ 570 Official string representation for class instance. 571 """ 572 return "ActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
    573
    574 - def __str__(self):
    575 """ 576 Informal string representation for class instance. 577 """ 578 return self.__repr__()
    579
    580 - def __cmp__(self, other):
    581 """ 582 Definition of equals operator for this class. 583 @param other: Other object to compare to. 584 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 585 """ 586 if other is None: 587 return 1 588 if self.action != other.action: 589 if self.action < other.action: 590 return -1 591 else: 592 return 1 593 if self.command != other.command: 594 if self.command < other.command: 595 return -1 596 else: 597 return 1 598 if self.before != other.before: 599 if self.before < other.before: 600 return -1 601 else: 602 return 1 603 if self.after != other.after: 604 if self.after < other.after: 605 return -1 606 else: 607 return 1 608 return 0
    609
    610 - def _setAction(self, value):
    611 """ 612 Property target used to set the action name. 613 The value must be a non-empty string if it is not C{None}. 614 It must also consist only of lower-case letters and digits. 615 @raise ValueError: If the value is an empty string. 616 """ 617 pattern = re.compile(ACTION_NAME_REGEX) 618 if value is not None: 619 if len(value) < 1: 620 raise ValueError("The action name must be a non-empty string.") 621 if not pattern.search(value): 622 raise ValueError("The action name must consist of only lower-case letters and digits.") 623 self._action = value
    624
    625 - def _getAction(self):
    626 """ 627 Property target used to get the action name. 628 """ 629 return self._action
    630
    631 - def _setCommand(self, value):
    632 """ 633 Property target used to set the command. 634 The value must be a non-empty string if it is not C{None}. 635 @raise ValueError: If the value is an empty string. 636 """ 637 if value is not None: 638 if len(value) < 1: 639 raise ValueError("The command must be a non-empty string.") 640 self._command = value
    641
    642 - def _getCommand(self):
    643 """ 644 Property target used to get the command. 645 """ 646 return self._command
    647
    648 - def _getBefore(self):
    649 """ 650 Property target used to get the before flag. 651 """ 652 return self._before
    653
    654 - def _getAfter(self):
    655 """ 656 Property target used to get the after flag. 657 """ 658 return self._after
    659 660 action = property(_getAction, _setAction, None, "Action this hook is associated with.") 661 command = property(_getCommand, _setCommand, None, "Shell command to execute.") 662 before = property(_getBefore, None, None, "Indicates whether command should be executed before action.") 663 after = property(_getAfter, None, None, "Indicates whether command should be executed after action.")
    664
    665 -class PreActionHook(ActionHook):
    666 667 """ 668 Class representing a pre-action hook associated with an action. 669 670 A hook associated with an action is a shell command to be executed either 671 before or after a named action is executed. In this case, a pre-action hook 672 is executed before the named action. 673 674 The following restrictions exist on data in this class: 675 676 - The action name must be a non-empty string consisting of lower-case letters and digits. 677 - The shell command must be a non-empty string. 678 679 The internal C{before} instance variable is always set to True in this 680 class. 681 682 @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after 683 """ 684
    685 - def __init__(self, action=None, command=None):
    686 """ 687 Constructor for the C{PreActionHook} class. 688 689 @param action: Action this hook is associated with 690 @param command: Shell command to execute 691 692 @raise ValueError: If one of the values is invalid. 693 """ 694 ActionHook.__init__(self, action, command) 695 self._before = True
    696
    697 - def __repr__(self):
    698 """ 699 Official string representation for class instance. 700 """ 701 return "PreActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
    702
    703 -class PostActionHook(ActionHook):
704 705 """ 706 Class representing a post-action hook associated with an action. 707 708 A hook associated with an action is a shell command to be executed either 709 before or after a named action is executed. In this case, a post-action hook 710 is executed after the named action. 711 712 The following restrictions exist on data in this class: 713 714 - The action name must be a non-empty string consisting of lower-case letters and digits. 715 - The shell command must be a non-empty string. 716 717 The internal C{after} instance variable is always set to True in this 718 class. 719 720 @sort: __init__, __repr__, __str__, __cmp__, action, command, before, after 721 """ 722
    723 - def __init__(self, action=None, command=None):
    724 """ 725 Constructor for the C{PostActionHook} class. 726 727 @param action: Action this hook is associated with 728 @param command: Shell command to execute 729 730 @raise ValueError: If one of the values is invalid. 731 """ 732 ActionHook.__init__(self, action, command) 733 self._after = True
    734
    735 - def __repr__(self):
    736 """ 737 Official string representation for class instance. 738 """ 739 return "PostActionHook(%s, %s, %s, %s)" % (self.action, self.command, self.before, self.after)
    740
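The hook hierarchy above follows a simple pattern: the parent class pins both flags to False and each subclass flips exactly one. The Python 3 sketch below mirrors that pattern with invented class names; the runner is illustrative only (the real cback code, per the v2.24.4 changelog, also checks each hook command's return status).

```python
class Hook:
    before = after = False          # parent pins both flags off

    def __init__(self, action, command):
        self.action, self.command = action, command

class PreHook(Hook):
    before = True                   # pre-hooks run before the action

class PostHook(Hook):
    after = True                    # post-hooks run after the action

def hooks_for(hooks, action, when):
    """Return the commands to run 'before' or 'after' the named action."""
    return [h.command for h in hooks
            if h.action == action and getattr(h, when)]

hooks = [PreHook("backup", "mount /mnt/backup"),
         PostHook("backup", "umount /mnt/backup")]
print(hooks_for(hooks, "backup", "before"))  # ['mount /mnt/backup']
print(hooks_for(hooks, "backup", "after"))   # ['umount /mnt/backup']
```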
    741 742 ######################################################################## 743 # BlankBehavior class definition 744 ######################################################################## 745 746 -class BlankBehavior(object):
    747 748 """ 749 Class representing optimized store-action media blanking behavior. 750 751 The following restrictions exist on data in this class: 752 753 - The blanking mode must be a one of the values in L{VALID_BLANK_MODES} 754 - The blanking factor must be a positive floating point number 755 756 @sort: __init__, __repr__, __str__, __cmp__, blankMode, blankFactor 757 """ 758
    759 - def __init__(self, blankMode=None, blankFactor=None):
    760 """ 761 Constructor for the C{BlankBehavior} class. 762 763 @param blankMode: Blanking mode 764 @param blankFactor: Blanking factor 765 766 @raise ValueError: If one of the values is invalid. 767 """ 768 self._blankMode = None 769 self._blankFactor = None 770 self.blankMode = blankMode 771 self.blankFactor = blankFactor
    772
    773 - def __repr__(self):
    774 """ 775 Official string representation for class instance. 776 """ 777 return "BlankBehavior(%s, %s)" % (self.blankMode, self.blankFactor)
    778
    779 - def __str__(self):
    780 """ 781 Informal string representation for class instance. 782 """ 783 return self.__repr__()
    784
    785 - def __cmp__(self, other):
    786 """ 787 Definition of equals operator for this class. 788 @param other: Other object to compare to. 789 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 790 """ 791 if other is None: 792 return 1 793 if self.blankMode != other.blankMode: 794 if self.blankMode < other.blankMode: 795 return -1 796 else: 797 return 1 798 if self.blankFactor != other.blankFactor: 799 if self.blankFactor < other.blankFactor: 800 return -1 801 else: 802 return 1 803 return 0
    804
    805 - def _setBlankMode(self, value):
    806 """ 807 Property target used to set the blanking mode. 808 The value must be one of L{VALID_BLANK_MODES}. 809 @raise ValueError: If the value is not valid. 810 """ 811 if value is not None: 812 if value not in VALID_BLANK_MODES: 813 raise ValueError("Blanking mode must be one of %s." % VALID_BLANK_MODES) 814 self._blankMode = value
    815
    816 - def _getBlankMode(self):
    817 """ 818 Property target used to get the blanking mode. 819 """ 820 return self._blankMode
    821
    822 - def _setBlankFactor(self, value):
    823 """ 824 Property target used to set the blanking factor. 825 The value must be a non-empty string if it is not C{None}. 826 @raise ValueError: If the value is an empty string. 827 @raise ValueError: If the value is not a valid floating point number 828 @raise ValueError: If the value is less than zero 829 """ 830 if value is not None: 831 if len(value) < 1: 832 raise ValueError("Blanking factor must be a non-empty string.") 833 floatValue = float(value) 834 if floatValue < 0.0: 835 raise ValueError("Blanking factor cannot be negative.") 836 self._blankFactor = value # keep around string
    837
    838 - def _getBlankFactor(self):
    839 """ 840 Property target used to get the blanking factor. 841 """ 842 return self._blankFactor
    843 844 blankMode = property(_getBlankMode, _setBlankMode, None, "Blanking mode") 845 blankFactor = property(_getBlankFactor, _setBlankFactor, None, "Blanking factor")
    846
    847 848 ######################################################################## 849 # ExtendedAction class definition 850 ######################################################################## 851 852 -class ExtendedAction(object):
853 854 """ 855 Class representing an extended action. 856 857 Essentially, an extended action needs to allow the following to happen:: 858 859 exec("from %s import %s" % (module, function)) 860 exec("%s(action, configPath)" % function) 861 862 The following restrictions exist on data in this class: 863 864 - The action name must be a non-empty string consisting of lower-case letters and digits. 865 - The module must be a non-empty string and a valid Python identifier. 866 - The function must be a non-empty string and a valid Python identifier. 867 - If set, the index must be a positive integer. 868 - If set, the dependencies attribute must be an C{ActionDependencies} object. 869 870 @sort: __init__, __repr__, __str__, __cmp__, name, module, function, index, dependencies 871 """ 872
    873 - def __init__(self, name=None, module=None, function=None, index=None, dependencies=None):
    874 """ 875 Constructor for the C{ExtendedAction} class. 876 877 @param name: Name of the extended action 878 @param module: Name of the module containing the extended action function 879 @param function: Name of the extended action function 880 @param index: Index of action, used for execution ordering 881 @param dependencies: Dependencies for action, used for execution ordering 882 883 @raise ValueError: If one of the values is invalid. 884 """ 885 self._name = None 886 self._module = None 887 self._function = None 888 self._index = None 889 self._dependencies = None 890 self.name = name 891 self.module = module 892 self.function = function 893 self.index = index 894 self.dependencies = dependencies
    895
    896 - def __repr__(self):
    897 """ 898 Official string representation for class instance. 899 """ 900 return "ExtendedAction(%s, %s, %s, %s, %s)" % (self.name, self.module, self.function, self.index, self.dependencies)
    901
    902 - def __str__(self):
    903 """ 904 Informal string representation for class instance. 905 """ 906 return self.__repr__()
    907
    908 - def __cmp__(self, other):
    909 """ 910 Definition of equals operator for this class. 911 @param other: Other object to compare to. 912 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 913 """ 914 if other is None: 915 return 1 916 if self.name != other.name: 917 if self.name < other.name: 918 return -1 919 else: 920 return 1 921 if self.module != other.module: 922 if self.module < other.module: 923 return -1 924 else: 925 return 1 926 if self.function != other.function: 927 if self.function < other.function: 928 return -1 929 else: 930 return 1 931 if self.index != other.index: 932 if self.index < other.index: 933 return -1 934 else: 935 return 1 936 if self.dependencies != other.dependencies: 937 if self.dependencies < other.dependencies: 938 return -1 939 else: 940 return 1 941 return 0
    942
    943 - def _setName(self, value):
    944 """ 945 Property target used to set the action name. 946 The value must be a non-empty string if it is not C{None}. 947 It must also consist only of lower-case letters and digits. 948 @raise ValueError: If the value is an empty string. 949 """ 950 pattern = re.compile(ACTION_NAME_REGEX) 951 if value is not None: 952 if len(value) < 1: 953 raise ValueError("The action name must be a non-empty string.") 954 if not pattern.search(value): 955 raise ValueError("The action name must consist of only lower-case letters and digits.") 956 self._name = value
    957
    958 - def _getName(self):
    959 """ 960 Property target used to get the action name. 961 """ 962 return self._name
    963
    964 - def _setModule(self, value):
    965 """ 966 Property target used to set the module name. 967 The value must be a non-empty string if it is not C{None}. 968 It must also be a valid Python identifier. 969 @raise ValueError: If the value is an empty string. 970 """ 971 pattern = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)(\.[A-Za-z_][A-Za-z0-9_]*)*$") 972 if value is not None: 973 if len(value) < 1: 974 raise ValueError("The module name must be a non-empty string.") 975 if not pattern.search(value): 976 raise ValueError("The module name must be a valid Python identifier.") 977 self._module = value
    978
    979 - def _getModule(self):
    980 """ 981 Property target used to get the module name. 982 """ 983 return self._module
    984
    985 - def _setFunction(self, value):
    986 """ 987 Property target used to set the function name. 988 The value must be a non-empty string if it is not C{None}. 989 It must also be a valid Python identifier. 990 @raise ValueError: If the value is an empty string. 991 """ 992 pattern = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$") 993 if value is not None: 994 if len(value) < 1: 995 raise ValueError("The function name must be a non-empty string.") 996 if not pattern.search(value): 997 raise ValueError("The function name must be a valid Python identifier.") 998 self._function = value
    999
    1000 - def _getFunction(self):
    1001 """ 1002 Property target used to get the function name. 1003 """ 1004 return self._function
    1005
    1006 - def _setIndex(self, value):
1007 """ 1008 Property target used to set the action index. 1009 The value must be an integer >= 0. 1010 @raise ValueError: If the value is not valid. 1011 """ 1012 if value is None: 1013 self._index = None 1014 else: 1015 try: 1016 value = int(value) 1017 except (TypeError, ValueError): 1018 raise ValueError("Action index value must be an integer >= 0.") 1019 if value < 0: 1020 raise ValueError("Action index value must be an integer >= 0.") 1021 self._index = value
    1022
    1023 - def _getIndex(self):
    1024 """ 1025 Property target used to get the action index. 1026 """ 1027 return self._index
    1028
    1029 - def _setDependencies(self, value):
1030 """ 1031 Property target used to set the action dependencies information. 1032 If not C{None}, the value must be an C{ActionDependencies} object. 1033 @raise ValueError: If the value is not an C{ActionDependencies} object. 1034 """ 1035 if value is None: 1036 self._dependencies = None 1037 else: 1038 if not isinstance(value, ActionDependencies): 1039 raise ValueError("Value must be an C{ActionDependencies} object.") 1040 self._dependencies = value
    1041
    1042 - def _getDependencies(self):
    1043 """ 1044 Property target used to get action dependencies information. 1045 """ 1046 return self._dependencies
    1047 1048 name = property(_getName, _setName, None, "Name of the extended action.") 1049 module = property(_getModule, _setModule, None, "Name of the module containing the extended action function.") 1050 function = property(_getFunction, _setFunction, None, "Name of the extended action function.") 1051 index = property(_getIndex, _setIndex, None, "Index of action, used for execution ordering.") 1052 dependencies = property(_getDependencies, _setDependencies, None, "Dependencies for action, used for execution ordering.")
    1053
    1054 1055 ######################################################################## 1056 # CommandOverride class definition 1057 ######################################################################## 1058 1059 -class CommandOverride(object):
1060 1061 """ 1062 Class representing a piece of Cedar Backup command override configuration. 1063 1064 The following restrictions exist on data in this class: 1065 1066 - The command must be a non-empty string 1067 - The absolute path must be an absolute filesystem path 1068 1069 @sort: __init__, __repr__, __str__, __cmp__, command, absolutePath 1070 """ 1071 1072
    1073 - def __init__(self, command=None, absolutePath=None):
1074 """ 1075 Constructor for the C{CommandOverride} class. 1076 1077 @param command: Name of command to be overridden. 1078 @param absolutePath: Absolute path of the overridden command. 1079 1080 @raise ValueError: If one of the values is invalid. 1081 """ 1082 self._command = None 1083 self._absolutePath = None 1084 self.command = command 1085 self.absolutePath = absolutePath
    1086
    1087 - def __repr__(self):
    1088 """ 1089 Official string representation for class instance. 1090 """ 1091 return "CommandOverride(%s, %s)" % (self.command, self.absolutePath)
    1092
    1093 - def __str__(self):
    1094 """ 1095 Informal string representation for class instance. 1096 """ 1097 return self.__repr__()
    1098
    1099 - def __cmp__(self, other):
    1100 """ 1101 Definition of equals operator for this class. 1102 @param other: Other object to compare to. 1103 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 1104 """ 1105 if other is None: 1106 return 1 1107 if self.command != other.command: 1108 if self.command < other.command: 1109 return -1 1110 else: 1111 return 1 1112 if self.absolutePath != other.absolutePath: 1113 if self.absolutePath < other.absolutePath: 1114 return -1 1115 else: 1116 return 1 1117 return 0
    1118
    1119 - def _setCommand(self, value):
    1120 """ 1121 Property target used to set the command. 1122 The value must be a non-empty string if it is not C{None}. 1123 @raise ValueError: If the value is an empty string. 1124 """ 1125 if value is not None: 1126 if len(value) < 1: 1127 raise ValueError("The command must be a non-empty string.") 1128 self._command = value
    1129
    1130 - def _getCommand(self):
    1131 """ 1132 Property target used to get the command. 1133 """ 1134 return self._command
    1135
    1136 - def _setAbsolutePath(self, value):
    1137 """ 1138 Property target used to set the absolute path. 1139 The value must be an absolute path if it is not C{None}. 1140 It does not have to exist on disk at the time of assignment. 1141 @raise ValueError: If the value is not an absolute path. 1142 @raise ValueError: If the value cannot be encoded properly. 1143 """ 1144 if value is not None: 1145 if not os.path.isabs(value): 1146 raise ValueError("Not an absolute path: [%s]" % value) 1147 self._absolutePath = encodePath(value)
    1148
    1149 - def _getAbsolutePath(self):
    1150 """ 1151 Property target used to get the absolute path. 1152 """ 1153 return self._absolutePath
    1154 1155 command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.") 1156 absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overrridden command.")
    1157
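The `_setCommand`/`_getCommand` pair above follows a pattern used throughout this module: a private validating setter and a plain getter, wired together with `property()` so that validation also runs on every later assignment. A rough self-contained sketch of the pattern, using a hypothetical `Override` class with a plain `os.path` check in place of this module's `encodePath` helper:

```python
import os.path

class Override(object):
    """Hypothetical illustration of the validating-property pattern."""

    def __init__(self, command=None, absolutePath=None):
        self._command = None
        self._absolutePath = None
        self.command = command            # routed through _setCommand
        self.absolutePath = absolutePath  # routed through _setAbsolutePath

    def _setCommand(self, value):
        # None is allowed; a non-None value must be a non-empty string
        if value is not None and len(value) < 1:
            raise ValueError("The command must be a non-empty string.")
        self._command = value

    def _getCommand(self):
        return self._command

    def _setAbsolutePath(self, value):
        # None is allowed; a non-None value must be an absolute path
        if value is not None and not os.path.isabs(value):
            raise ValueError("Not an absolute path: [%s]" % value)
        self._absolutePath = value

    def _getAbsolutePath(self):
        return self._absolutePath

    command = property(_getCommand, _setCommand, None, doc="Name of command to be overridden.")
    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the overridden command.")
```

Because assignment goes through the property, `Override().absolutePath = "relative/path"` raises `ValueError` just as the constructor would.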

########################################################################
# CollectFile class definition
########################################################################

class CollectFile(object):

    """
    Class representing a Cedar Backup collect file.

    The following restrictions exist on data in this class:

       - Absolute paths must be absolute
       - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
       - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.

    @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, archiveMode
    """

    def __init__(self, absolutePath=None, collectMode=None, archiveMode=None):
        """
        Constructor for the C{CollectFile} class.

        @param absolutePath: Absolute path of the file to collect.
        @param collectMode: Overridden collect mode for this file.
        @param archiveMode: Overridden archive mode for this file.

        @raise ValueError: If one of the values is invalid.
        """
        self._absolutePath = None
        self._collectMode = None
        self._archiveMode = None
        self.absolutePath = absolutePath
        self.collectMode = collectMode
        self.archiveMode = archiveMode

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "CollectFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.archiveMode)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of the standard comparison operator for this class.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.absolutePath != other.absolutePath:
            if self.absolutePath < other.absolutePath:
                return -1
            else:
                return 1
        if self.collectMode != other.collectMode:
            if self.collectMode < other.collectMode:
                return -1
            else:
                return 1
        if self.archiveMode != other.archiveMode:
            if self.archiveMode < other.archiveMode:
                return -1
            else:
                return 1
        return 0

    def _setAbsolutePath(self, value):
        """
        Property target used to set the absolute path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Not an absolute path: [%s]" % value)
        self._absolutePath = encodePath(value)

    def _getAbsolutePath(self):
        """
        Property target used to get the absolute path.
        """
        return self._absolutePath

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setArchiveMode(self, value):
        """
        Property target used to set the archive mode.
        If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_ARCHIVE_MODES:
                raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
        self._archiveMode = value

    def _getArchiveMode(self):
        """
        Property target used to get the archive mode.
        """
        return self._archiveMode

    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the file to collect.")
    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this file.")
    archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this file.")

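The `__cmp__` implementation above compares fields in declaration order, so instances effectively sort lexicographically by the tuple (absolutePath, collectMode, archiveMode). Under Python 3, where `__cmp__` no longer exists, the same field-order comparison is usually expressed with a key tuple; a hypothetical sketch (the `FileEntry` class here is illustrative, not part of this module):

```python
import functools

@functools.total_ordering
class FileEntry(object):
    """Hypothetical Python 3 analogue of the field-by-field __cmp__ above."""

    def __init__(self, absolutePath=None, collectMode=None, archiveMode=None):
        self.absolutePath = absolutePath
        self.collectMode = collectMode
        self.archiveMode = archiveMode

    def _key(self):
        # Fields listed in the same order that __cmp__ compares them
        return (self.absolutePath, self.collectMode, self.archiveMode)

    def __eq__(self, other):
        return isinstance(other, FileEntry) and self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()
```

`functools.total_ordering` fills in `<=`, `>`, and `>=` from `__eq__` and `__lt__`; note that unlike the `__cmp__` version, comparing `None` field values would raise `TypeError` in Python 3.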

########################################################################
# CollectDir class definition
########################################################################

class CollectDir(object):

    """
    Class representing a Cedar Backup collect directory.

    The following restrictions exist on data in this class:

       - Absolute paths must be absolute
       - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
       - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}.
       - The ignore file must be a non-empty string.

    For the C{absoluteExcludePaths} list, validation is accomplished through the
    L{util.AbsolutePathList} list implementation that overrides common list
    methods and transparently does the absolute path validation for us.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode,
           archiveMode, ignoreFile, linkDepth, dereference, absoluteExcludePaths,
           relativeExcludePaths, excludePatterns
    """

    def __init__(self, absolutePath=None, collectMode=None, archiveMode=None, ignoreFile=None,
                 absoluteExcludePaths=None, relativeExcludePaths=None, excludePatterns=None,
                 linkDepth=None, dereference=False, recursionLevel=None):
        """
        Constructor for the C{CollectDir} class.

        @param absolutePath: Absolute path of the directory to collect.
        @param collectMode: Overridden collect mode for this directory.
        @param archiveMode: Overridden archive mode for this directory.
        @param ignoreFile: Overridden ignore file name for this directory.
        @param linkDepth: Maximum depth at which soft links should be followed.
        @param dereference: Whether to dereference links that are followed.
        @param absoluteExcludePaths: List of absolute paths to exclude.
        @param relativeExcludePaths: List of relative paths to exclude.
        @param excludePatterns: List of regular expression patterns to exclude.

        @raise ValueError: If one of the values is invalid.
        """
        self._absolutePath = None
        self._collectMode = None
        self._archiveMode = None
        self._ignoreFile = None
        self._linkDepth = None
        self._dereference = None
        self._recursionLevel = None
        self._absoluteExcludePaths = None
        self._relativeExcludePaths = None
        self._excludePatterns = None
        self.absolutePath = absolutePath
        self.collectMode = collectMode
        self.archiveMode = archiveMode
        self.ignoreFile = ignoreFile
        self.linkDepth = linkDepth
        self.dereference = dereference
        self.recursionLevel = recursionLevel
        self.absoluteExcludePaths = absoluteExcludePaths
        self.relativeExcludePaths = relativeExcludePaths
        self.excludePatterns = excludePatterns

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "CollectDir(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode,
                                                                       self.archiveMode, self.ignoreFile,
                                                                       self.absoluteExcludePaths,
                                                                       self.relativeExcludePaths,
                                                                       self.excludePatterns,
                                                                       self.linkDepth, self.dereference,
                                                                       self.recursionLevel)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of the standard comparison operator for this class.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.absolutePath != other.absolutePath:
            if self.absolutePath < other.absolutePath:
                return -1
            else:
                return 1
        if self.collectMode != other.collectMode:
            if self.collectMode < other.collectMode:
                return -1
            else:
                return 1
        if self.archiveMode != other.archiveMode:
            if self.archiveMode < other.archiveMode:
                return -1
            else:
                return 1
        if self.ignoreFile != other.ignoreFile:
            if self.ignoreFile < other.ignoreFile:
                return -1
            else:
                return 1
        if self.linkDepth != other.linkDepth:
            if self.linkDepth < other.linkDepth:
                return -1
            else:
                return 1
        if self.dereference != other.dereference:
            if self.dereference < other.dereference:
                return -1
            else:
                return 1
        if self.recursionLevel != other.recursionLevel:
            if self.recursionLevel < other.recursionLevel:
                return -1
            else:
                return 1
        if self.absoluteExcludePaths != other.absoluteExcludePaths:
            if self.absoluteExcludePaths < other.absoluteExcludePaths:
                return -1
            else:
                return 1
        if self.relativeExcludePaths != other.relativeExcludePaths:
            if self.relativeExcludePaths < other.relativeExcludePaths:
                return -1
            else:
                return 1
        if self.excludePatterns != other.excludePatterns:
            if self.excludePatterns < other.excludePatterns:
                return -1
            else:
                return 1
        return 0

    def _setAbsolutePath(self, value):
        """
        Property target used to set the absolute path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Not an absolute path: [%s]" % value)
        self._absolutePath = encodePath(value)

    def _getAbsolutePath(self):
        """
        Property target used to get the absolute path.
        """
        return self._absolutePath

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setArchiveMode(self, value):
        """
        Property target used to set the archive mode.
        If not C{None}, the mode must be one of the values in L{VALID_ARCHIVE_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_ARCHIVE_MODES:
                raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
        self._archiveMode = value

    def _getArchiveMode(self):
        """
        Property target used to get the archive mode.
        """
        return self._archiveMode

    def _setIgnoreFile(self, value):
        """
        Property target used to set the ignore file.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The ignore file must be a non-empty string.")
        self._ignoreFile = value

    def _getIgnoreFile(self):
        """
        Property target used to get the ignore file.
        """
        return self._ignoreFile

    def _setLinkDepth(self, value):
        """
        Property target used to set the link depth.
        The value must be an integer >= 0.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._linkDepth = None
        else:
            try:
                value = int(value)
            except (TypeError, ValueError):
                raise ValueError("Link depth value must be an integer >= 0.")
            if value < 0:
                raise ValueError("Link depth value must be an integer >= 0.")
            self._linkDepth = value

    def _getLinkDepth(self):
        """
        Property target used to get the link depth.
        """
        return self._linkDepth

    def _setDereference(self, value):
        """
        Property target used to set the dereference flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._dereference = True
        else:
            self._dereference = False

    def _getDereference(self):
        """
        Property target used to get the dereference flag.
        """
        return self._dereference

    def _setRecursionLevel(self, value):
        """
        Property target used to set the recursion level.
        The value must be an integer.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._recursionLevel = None
        else:
            try:
                value = int(value)
            except (TypeError, ValueError):
                raise ValueError("Recursion level value must be an integer.")
            self._recursionLevel = value

    def _getRecursionLevel(self):
        """
        Property target used to get the recursion level.
        """
        return self._recursionLevel

    def _setAbsoluteExcludePaths(self, value):
        """
        Property target used to set the absolute exclude paths list.
        Either the value must be C{None} or each element must be an absolute path.
        Elements do not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        """
        if value is None:
            self._absoluteExcludePaths = None
        else:
            try:
                saved = self._absoluteExcludePaths
                self._absoluteExcludePaths = AbsolutePathList()
                self._absoluteExcludePaths.extend(value)
            except Exception as e:
                self._absoluteExcludePaths = saved
                raise e

    def _getAbsoluteExcludePaths(self):
        """
        Property target used to get the absolute exclude paths list.
        """
        return self._absoluteExcludePaths

    def _setRelativeExcludePaths(self, value):
        """
        Property target used to set the relative exclude paths list.
        Elements do not have to exist on disk at the time of assignment.
        """
        if value is None:
            self._relativeExcludePaths = None
        else:
            try:
                saved = self._relativeExcludePaths
                self._relativeExcludePaths = UnorderedList()
                self._relativeExcludePaths.extend(value)
            except Exception as e:
                self._relativeExcludePaths = saved
                raise e

    def _getRelativeExcludePaths(self):
        """
        Property target used to get the relative exclude paths list.
        """
        return self._relativeExcludePaths

    def _setExcludePatterns(self, value):
        """
        Property target used to set the exclude patterns list.
        """
        if value is None:
            self._excludePatterns = None
        else:
            try:
                saved = self._excludePatterns
                self._excludePatterns = RegexList()
                self._excludePatterns.extend(value)
            except Exception as e:
                self._excludePatterns = saved
                raise e

    def _getExcludePatterns(self):
        """
        Property target used to get the exclude patterns list.
        """
        return self._excludePatterns

    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of the directory to collect.")
    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this directory.")
    archiveMode = property(_getArchiveMode, _setArchiveMode, None, doc="Overridden archive mode for this directory.")
    ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, doc="Overridden ignore file name for this directory.")
    linkDepth = property(_getLinkDepth, _setLinkDepth, None, doc="Maximum depth at which soft links should be followed.")
    dereference = property(_getDereference, _setDereference, None, doc="Whether to dereference links that are followed.")
    recursionLevel = property(_getRecursionLevel, _setRecursionLevel, None, doc="Recursion level to use for recursive directory collection.")
    absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, doc="List of absolute paths to exclude.")
    relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, doc="List of relative paths to exclude.")
    excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, doc="List of regular expression patterns to exclude.")

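The three exclude-list setters above share a rollback idiom: the old list is saved, a fresh validating list is installed, and if `extend()` rejects any element the saved list is restored before the exception propagates, so a failed assignment never leaves a half-populated list behind. A self-contained sketch, with a hypothetical stand-in for `util.AbsolutePathList`:

```python
import os.path

class AbsolutePathList(list):
    """Hypothetical stand-in: a list that accepts only absolute paths."""

    def append(self, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.append(self, item)

    def extend(self, items):
        for item in items:
            self.append(item)  # each element is validated on the way in

class Holder(object):
    """Hypothetical owner demonstrating the save/restore rollback."""

    def __init__(self):
        self._paths = None

    def setPaths(self, value):
        if value is None:
            self._paths = None
        else:
            saved = self._paths
            try:
                self._paths = AbsolutePathList()
                self._paths.extend(value)
            except Exception:
                self._paths = saved  # roll back to the previous list
                raise
```

If any element is invalid, the exception propagates but the previously stored list is untouched, which is the same all-or-nothing behavior the setters above provide.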

########################################################################
# PurgeDir class definition
########################################################################

class PurgeDir(object):

    """
    Class representing a Cedar Backup purge directory.

    The following restrictions exist on data in this class:

       - The absolute path must be an absolute path
       - The retain days value must be an integer >= 0.

    @sort: __init__, __repr__, __str__, __cmp__, absolutePath, retainDays
    """

    def __init__(self, absolutePath=None, retainDays=None):
        """
        Constructor for the C{PurgeDir} class.

        @param absolutePath: Absolute path of the directory to be purged.
        @param retainDays: Number of days content within directory should be retained.

        @raise ValueError: If one of the values is invalid.
        """
        self._absolutePath = None
        self._retainDays = None
        self.absolutePath = absolutePath
        self.retainDays = retainDays

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "PurgeDir(%s, %s)" % (self.absolutePath, self.retainDays)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of the standard comparison operator for this class.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.absolutePath != other.absolutePath:
            if self.absolutePath < other.absolutePath:
                return -1
            else:
                return 1
        if self.retainDays != other.retainDays:
            if self.retainDays < other.retainDays:
                return -1
            else:
                return 1
        return 0

    def _setAbsolutePath(self, value):
        """
        Property target used to set the absolute path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Not an absolute path: [%s]" % value)
        self._absolutePath = encodePath(value)

    def _getAbsolutePath(self):
        """
        Property target used to get the absolute path.
        """
        return self._absolutePath

    def _setRetainDays(self, value):
        """
        Property target used to set the retain days value.
        The value must be an integer >= 0.
        @raise ValueError: If the value is not valid.
        """
        if value is None:
            self._retainDays = None
        else:
            try:
                value = int(value)
            except (TypeError, ValueError):
                raise ValueError("Retain days value must be an integer >= 0.")
            if value < 0:
                raise ValueError("Retain days value must be an integer >= 0.")
            self._retainDays = value

    def _getRetainDays(self):
        """
        Property target used to get the retain days value.
        """
        return self._retainDays

    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path of directory to purge.")
    retainDays = property(_getRetainDays, _setRetainDays, None, doc="Number of days content within directory should be retained.")

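One subtlety in the integer setters above: `int()` raises `TypeError` for non-coercible types such as `None` or a list, but `ValueError` for malformed strings like `"abc"`, so the coercion needs to guard against both exception types to report a uniform message. A standalone sketch of the retain-days check (the `setRetainDays` function here is hypothetical, for illustration only):

```python
def setRetainDays(value):
    """Hypothetical standalone version of the retain-days validation."""
    if value is None:
        return None
    try:
        # int() raises TypeError for lists/dicts, ValueError for "abc"
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError("Retain days value must be an integer >= 0.")
    if value < 0:
        raise ValueError("Retain days value must be an integer >= 0.")
    return value
```

String values such as those parsed from XML configuration coerce cleanly: `setRetainDays("7")` returns the integer 7, while `"abc"`, a list, or a negative number all produce the same `ValueError` message.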

########################################################################
# LocalPeer class definition
########################################################################

class LocalPeer(object):

    """
    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

       - The peer name must be a non-empty string.
       - The collect directory must be an absolute path.
       - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}.

    @sort: __init__, __repr__, __str__, __cmp__, name, collectDir, ignoreFailureMode
    """

    def __init__(self, name=None, collectDir=None, ignoreFailureMode=None):
        """
        Constructor for the C{LocalPeer} class.

        @param name: Name of the peer, typically a valid hostname.
        @param collectDir: Collect directory to stage files from on peer.
        @param ignoreFailureMode: Ignore failure mode for peer.

        @raise ValueError: If one of the values is invalid.
        """
        self._name = None
        self._collectDir = None
        self._ignoreFailureMode = None
        self.name = name
        self.collectDir = collectDir
        self.ignoreFailureMode = ignoreFailureMode

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "LocalPeer(%s, %s, %s)" % (self.name, self.collectDir, self.ignoreFailureMode)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of the standard comparison operator for this class.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.name != other.name:
            if self.name < other.name:
                return -1
            else:
                return 1
        if self.collectDir != other.collectDir:
            if self.collectDir < other.collectDir:
                return -1
            else:
                return 1
        if self.ignoreFailureMode != other.ignoreFailureMode:
            if self.ignoreFailureMode < other.ignoreFailureMode:
                return -1
            else:
                return 1
        return 0

    def _setName(self, value):
        """
        Property target used to set the peer name.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The peer name must be a non-empty string.")
        self._name = value

    def _getName(self):
        """
        Property target used to get the peer name.
        """
        return self._name

    def _setCollectDir(self, value):
        """
        Property target used to set the collect directory.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Collect directory must be an absolute path.")
        self._collectDir = encodePath(value)

    def _getCollectDir(self):
        """
        Property target used to get the collect directory.
        """
        return self._collectDir

    def _setIgnoreFailureMode(self, value):
        """
        Property target used to set the ignoreFailure mode.
        If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_FAILURE_MODES:
                raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES)
        self._ignoreFailureMode = value

    def _getIgnoreFailureMode(self):
        """
        Property target used to get the ignoreFailure mode.
        """
        return self._ignoreFailureMode

    name = property(_getName, _setName, None, doc="Name of the peer, typically a valid hostname.")
    collectDir = property(_getCollectDir, _setCollectDir, None, doc="Collect directory to stage files from on peer.")
    ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, doc="Ignore failure mode for peer.")


########################################################################
# RemotePeer class definition
########################################################################

class RemotePeer(object):

    """
    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

       - The peer name must be a non-empty string.
       - The collect directory must be an absolute path.
       - The remote user must be a non-empty string.
       - The rcp command must be a non-empty string.
       - The rsh command must be a non-empty string.
       - The cback command must be a non-empty string.
       - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX}
       - The ignore failure mode must be one of the values in L{VALID_FAILURE_MODES}.

    @sort: __init__, __repr__, __str__, __cmp__, name, collectDir, remoteUser,
           rcpCommand, rshCommand, cbackCommand, managed, managedActions,
           ignoreFailureMode
    """

    def __init__(self, name=None, collectDir=None, remoteUser=None,
                 rcpCommand=None, rshCommand=None, cbackCommand=None,
                 managed=False, managedActions=None, ignoreFailureMode=None):
        """
        Constructor for the C{RemotePeer} class.

        @param name: Name of the peer, must be a valid hostname.
        @param collectDir: Collect directory to stage files from on peer.
        @param remoteUser: Name of backup user on remote peer.
        @param rcpCommand: Overridden rcp-compatible copy command for peer.
        @param rshCommand: Overridden rsh-compatible remote shell command for peer.
        @param cbackCommand: Overridden cback-compatible command to use on remote peer.
        @param managed: Indicates whether this is a managed peer.
        @param managedActions: Overridden set of actions that are managed on the peer.
        @param ignoreFailureMode: Ignore failure mode for peer.

        @raise ValueError: If one of the values is invalid.
        """
        self._name = None
        self._collectDir = None
        self._remoteUser = None
        self._rcpCommand = None
        self._rshCommand = None
        self._cbackCommand = None
        self._managed = None
        self._managedActions = None
        self._ignoreFailureMode = None
        self.name = name
        self.collectDir = collectDir
        self.remoteUser = remoteUser
        self.rcpCommand = rcpCommand
        self.rshCommand = rshCommand
        self.cbackCommand = cbackCommand
        self.managed = managed
        self.managedActions = managedActions
        self.ignoreFailureMode = ignoreFailureMode

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "RemotePeer(%s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.name, self.collectDir, self.remoteUser,
                                                                   self.rcpCommand, self.rshCommand, self.cbackCommand,
                                                                   self.managed, self.managedActions, self.ignoreFailureMode)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of the standard comparison operator for this class.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.name != other.name:
            if self.name < other.name:
                return -1
            else:
                return 1
        if self.collectDir != other.collectDir:
            if self.collectDir < other.collectDir:
                return -1
            else:
                return 1
        if self.remoteUser != other.remoteUser:
            if self.remoteUser < other.remoteUser:
                return -1
            else:
                return 1
        if self.rcpCommand != other.rcpCommand:
            if self.rcpCommand < other.rcpCommand:
                return -1
            else:
                return 1
        if self.rshCommand != other.rshCommand:
            if self.rshCommand < other.rshCommand:
                return -1
            else:
                return 1
        if self.cbackCommand != other.cbackCommand:
            if self.cbackCommand < other.cbackCommand:
                return -1
            else:
                return 1
        if self.managed != other.managed:
            if self.managed < other.managed:
                return -1
            else:
                return 1
        if self.managedActions != other.managedActions:
            if self.managedActions < other.managedActions:
                return -1
            else:
                return 1
        if self.ignoreFailureMode != other.ignoreFailureMode:
            if self.ignoreFailureMode < other.ignoreFailureMode:
                return -1
            else:
                return 1
        return 0

    def _setName(self, value):
        """
        Property target used to set the peer name.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The peer name must be a non-empty string.")
        self._name = value

    def _getName(self):
        """
        Property target used to get the peer name.
        """
        return self._name

    def _setCollectDir(self, value):
        """
        Property target used to set the collect directory.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Collect directory must be an absolute path.")
        self._collectDir = encodePath(value)

    def _getCollectDir(self):
        """
        Property target used to get the collect directory.
        """
        return self._collectDir

    def _setRemoteUser(self, value):
        """
        Property target used to set the remote user.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The remote user must be a non-empty string.")
        self._remoteUser = value

    def _getRemoteUser(self):
        """
        Property target used to get the remote user.
        """
        return self._remoteUser

    def _setRcpCommand(self, value):
        """
        Property target used to set the rcp command.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The rcp command must be a non-empty string.")
        self._rcpCommand = value

    def _getRcpCommand(self):
        """
        Property target used to get the rcp command.
        """
        return self._rcpCommand

    def _setRshCommand(self, value):
        """
        Property target used to set the rsh command.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The rsh command must be a non-empty string.")
        self._rshCommand = value

    def _getRshCommand(self):
        """
        Property target used to get the rsh command.
        """
        return self._rshCommand

    def _setCbackCommand(self, value):
        """
        Property target used to set the cback command.
        The value must be a non-empty string if it is not C{None}.
        @raise ValueError: If the value is an empty string.
        """
        if value is not None:
            if len(value) < 1:
                raise ValueError("The cback command must be a non-empty string.")
        self._cbackCommand = value

    def _getCbackCommand(self):
        """
        Property target used to get the cback command.
        """
        return self._cbackCommand

    def _setManaged(self, value):
        """
        Property target used to set the managed flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._managed = True
        else:
            self._managed = False
    2124
    2125 - def _getManaged(self):
    2126 """ 2127 Property target used to get the managed flag. 2128 """ 2129 return self._managed
    2130
    2131 - def _setManagedActions(self, value):
    2132 """ 2133 Property target used to set the managed actions list. 2134 Elements do not have to exist on disk at the time of assignment. 2135 """ 2136 if value is None: 2137 self._managedActions = None 2138 else: 2139 try: 2140 saved = self._managedActions 2141 self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name") 2142 self._managedActions.extend(value) 2143 except Exception, e: 2144 self._managedActions = saved 2145 raise e
    2146
    2147 - def _getManagedActions(self):
    2148 """ 2149 Property target used to get the managed actions list. 2150 """ 2151 return self._managedActions
    2152
    2153 - def _setIgnoreFailureMode(self, value):
    2154 """ 2155 Property target used to set the ignoreFailure mode. 2156 If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. 2157 @raise ValueError: If the value is not valid. 2158 """ 2159 if value is not None: 2160 if value not in VALID_FAILURE_MODES: 2161 raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) 2162 self._ignoreFailureMode = value
    2163
    2164 - def _getIgnoreFailureMode(self):
    2165 """ 2166 Property target used to get the ignoreFailure mode. 2167 """ 2168 return self._ignoreFailureMode
    2169 2170 name = property(_getName, _setName, None, "Name of the peer, must be a valid hostname.") 2171 collectDir = property(_getCollectDir, _setCollectDir, None, "Collect directory to stage files from on peer.") 2172 remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of backup user on remote peer.") 2173 rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Overridden rcp-compatible copy command for peer.") 2174 rshCommand = property(_getRshCommand, _setRshCommand, None, "Overridden rsh-compatible remote shell command for peer.") 2175 cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Overridden cback-compatible command to use on remote peer.") 2176 managed = property(_getManaged, _setManaged, None, "Indicates whether this is a managed peer.") 2177 managedActions = property(_getManagedActions, _setManagedActions, None, "Overridden set of actions that are managed on the peer.") 2178 ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.")
    2179
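The comparison method above follows one mechanical pattern: walk the fields in a fixed order and return at the first difference. A minimal, self-contained sketch of that cascade (illustrative names only, not CedarBackup2 code, written to run on modern Python as well):

```python
# Sketch of the field-by-field comparison cascade used by __cmp__ above:
# compare attributes in a fixed order, returning -1/0/1 at the first
# difference, with None sorting before any real value as Python 2's cmp() did.

def cmp_fields(a, b, fields):
    """Compare named attributes of a and b in order; return -1, 0, or 1."""
    if b is None:
        return 1
    for field in fields:
        x, y = getattr(a, field), getattr(b, field)
        if x == y:
            continue
        if x is None:      # None sorts first, mimicking Python 2 semantics
            return -1
        if y is None:
            return 1
        return -1 if x < y else 1
    return 0

class Peer(object):
    """Tiny stand-in with two comparable fields."""
    def __init__(self, name, remoteUser):
        self.name = name
        self.remoteUser = remoteUser
```

The first differing field decides the result, so earlier fields (like `name`) dominate later ones, exactly as in the hand-unrolled version.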
########################################################################
# ReferenceConfig class definition
########################################################################

class ReferenceConfig(object):

   """
   Class representing a Cedar Backup reference configuration.

   The reference information is just used for saving off metadata about
   configuration and exists mostly for backwards-compatibility with Cedar
   Backup 1.x.

   @sort: __init__, __repr__, __str__, __cmp__, author, revision, description, generator
   """

   def __init__(self, author=None, revision=None, description=None, generator=None):
      """
      Constructor for the C{ReferenceConfig} class.

      @param author: Author of the configuration file.
      @param revision: Revision of the configuration file.
      @param description: Description of the configuration file.
      @param generator: Tool that generated the configuration file.
      """
      self._author = None
      self._revision = None
      self._description = None
      self._generator = None
      self.author = author
      self.revision = revision
      self.description = description
      self.generator = generator

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "ReferenceConfig(%s, %s, %s, %s)" % (self.author, self.revision, self.description, self.generator)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.author != other.author:
         if self.author < other.author:
            return -1
         else:
            return 1
      if self.revision != other.revision:
         if self.revision < other.revision:
            return -1
         else:
            return 1
      if self.description != other.description:
         if self.description < other.description:
            return -1
         else:
            return 1
      if self.generator != other.generator:
         if self.generator < other.generator:
            return -1
         else:
            return 1
      return 0

   def _setAuthor(self, value):
      """
      Property target used to set the author value.
      No validations.
      """
      self._author = value

   def _getAuthor(self):
      """
      Property target used to get the author value.
      """
      return self._author

   def _setRevision(self, value):
      """
      Property target used to set the revision value.
      No validations.
      """
      self._revision = value

   def _getRevision(self):
      """
      Property target used to get the revision value.
      """
      return self._revision

   def _setDescription(self, value):
      """
      Property target used to set the description value.
      No validations.
      """
      self._description = value

   def _getDescription(self):
      """
      Property target used to get the description value.
      """
      return self._description

   def _setGenerator(self, value):
      """
      Property target used to set the generator value.
      No validations.
      """
      self._generator = value

   def _getGenerator(self):
      """
      Property target used to get the generator value.
      """
      return self._generator

   author = property(_getAuthor, _setAuthor, None, "Author of the configuration file.")
   revision = property(_getRevision, _setRevision, None, "Revision of the configuration file.")
   description = property(_getDescription, _setDescription, None, "Description of the configuration file.")
   generator = property(_getGenerator, _setGenerator, None, "Tool that generated the configuration file.")
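Every class in this module relies on the same "property target" idiom: a private `_setX` method validates and stores, `_getX` reads, and `property()` ties them together so that plain attribute assignment is validated. A self-contained sketch of the idiom (the `DirHolder` class is illustrative, not part of CedarBackup2):

```python
# Sketch of the "property target" pattern: assignment to the public
# attribute is routed through the validating setter.

import os.path

class DirHolder(object):
    def __init__(self, collectDir=None):
        self._collectDir = None
        self.collectDir = collectDir   # routed through _setCollectDir

    def _setCollectDir(self, value):
        """Reject relative paths; None is allowed and means 'unset'."""
        if value is not None and not os.path.isabs(value):
            raise ValueError("Collect directory must be an absolute path.")
        self._collectDir = value

    def _getCollectDir(self):
        return self._collectDir

    collectDir = property(_getCollectDir, _setCollectDir, None,
                          "Collect directory to stage files from.")
```

Because the constructor assigns through the property rather than the private field, invalid values are rejected at construction time too.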
    2314 2315 ######################################################################## 2316 # ExtensionsConfig class definition 2317 ######################################################################## 2318 2319 -class ExtensionsConfig(object):
    2320 2321 """ 2322 Class representing Cedar Backup extensions configuration. 2323 2324 Extensions configuration is used to specify "extended actions" implemented 2325 by code external to Cedar Backup. For instance, a hypothetical third party 2326 might write extension code to collect database repository data. If they 2327 write a properly-formatted extension function, they can use the extension 2328 configuration to map a command-line Cedar Backup action (i.e. "database") 2329 to their function. 2330 2331 The following restrictions exist on data in this class: 2332 2333 - If set, the order mode must be one of the values in C{VALID_ORDER_MODES} 2334 - The actions list must be a list of C{ExtendedAction} objects. 2335 2336 @sort: __init__, __repr__, __str__, __cmp__, orderMode, actions 2337 """ 2338
    2339 - def __init__(self, actions=None, orderMode=None):
    2340 """ 2341 Constructor for the C{ExtensionsConfig} class. 2342 @param actions: List of extended actions 2343 """ 2344 self._orderMode = None 2345 self._actions = None 2346 self.orderMode = orderMode 2347 self.actions = actions
    2348
    2349 - def __repr__(self):
    2350 """ 2351 Official string representation for class instance. 2352 """ 2353 return "ExtensionsConfig(%s, %s)" % (self.orderMode, self.actions)
    2354
    2355 - def __str__(self):
    2356 """ 2357 Informal string representation for class instance. 2358 """ 2359 return self.__repr__()
    2360
    2361 - def __cmp__(self, other):
    2362 """ 2363 Definition of equals operator for this class. 2364 @param other: Other object to compare to. 2365 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 2366 """ 2367 if other is None: 2368 return 1 2369 if self.orderMode != other.orderMode: 2370 if self.orderMode < other.orderMode: 2371 return -1 2372 else: 2373 return 1 2374 if self.actions != other.actions: 2375 if self.actions < other.actions: 2376 return -1 2377 else: 2378 return 1 2379 return 0
    2380
    2381 - def _setOrderMode(self, value):
    2382 """ 2383 Property target used to set the order mode. 2384 The value must be one of L{VALID_ORDER_MODES}. 2385 @raise ValueError: If the value is not valid. 2386 """ 2387 if value is not None: 2388 if value not in VALID_ORDER_MODES: 2389 raise ValueError("Order mode must be one of %s." % VALID_ORDER_MODES) 2390 self._orderMode = value
    2391
    2392 - def _getOrderMode(self):
    2393 """ 2394 Property target used to get the order mode. 2395 """ 2396 return self._orderMode
    2397
    2398 - def _setActions(self, value):
    2399 """ 2400 Property target used to set the actions list. 2401 Either the value must be C{None} or each element must be an C{ExtendedAction}. 2402 @raise ValueError: If the value is not a C{ExtendedAction} 2403 """ 2404 if value is None: 2405 self._actions = None 2406 else: 2407 try: 2408 saved = self._actions 2409 self._actions = ObjectTypeList(ExtendedAction, "ExtendedAction") 2410 self._actions.extend(value) 2411 except Exception, e: 2412 self._actions = saved 2413 raise e
    2414
    2415 - def _getActions(self):
    2416 """ 2417 Property target used to get the actions list. 2418 """ 2419 return self._actions
    2420 2421 orderMode = property(_getOrderMode, _setOrderMode, None, "Order mode for extensions, to control execution ordering.") 2422 actions = property(_getActions, _setActions, None, "List of extended actions.")
    2423
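The class docstring above describes mapping a command-line action name (e.g. "database") to third-party extension code. A hypothetical sketch of that idea follows; the dispatch table and function names are invented for illustration and are not CedarBackup2's actual dispatch mechanism:

```python
# Hypothetical dispatch table mapping extended action names to functions.
# An ExtensionsConfig entry effectively declares one such mapping.

def database_backup(config_path):
    """Pretend extension function that would collect database data."""
    return "database backed up using %s" % config_path

EXTENDED_ACTIONS = {"database": database_backup}   # action name -> function

def run_extended_action(name, config_path):
    """Look up and invoke the extension function registered for an action."""
    if name not in EXTENDED_ACTIONS:
        raise ValueError("Unknown extended action: %s" % name)
    return EXTENDED_ACTIONS[name](config_path)
```

A wrapper like this is what makes `cback database` meaningful even though the core tool knows nothing about databases.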
########################################################################
# OptionsConfig class definition
########################################################################

class OptionsConfig(object):

   """
   Class representing a Cedar Backup global options configuration.

   The options section is used to store global configuration options and
   defaults that can be applied to other sections.

   The following restrictions exist on data in this class:

      - The working directory must be an absolute path.
      - The starting day must be a day of the week in English, i.e. C{"monday"}, C{"tuesday"}, etc.
      - All of the other values must be non-empty strings if they are set to something other than C{None}.
      - The overrides list must be a list of C{CommandOverride} objects.
      - The hooks list must be a list of C{ActionHook} objects.
      - The cback command must be a non-empty string.
      - Any managed action name must be a non-empty string matching C{ACTION_NAME_REGEX}

   @sort: __init__, __repr__, __str__, __cmp__, startingDay, workingDir,
          backupUser, backupGroup, rcpCommand, rshCommand, overrides
   """

   def __init__(self, startingDay=None, workingDir=None, backupUser=None,
                backupGroup=None, rcpCommand=None, overrides=None,
                hooks=None, rshCommand=None, cbackCommand=None,
                managedActions=None):
      """
      Constructor for the C{OptionsConfig} class.

      @param startingDay: Day that starts the week.
      @param workingDir: Working (temporary) directory to use for backups.
      @param backupUser: Effective user that backups should run as.
      @param backupGroup: Effective group that backups should run as.
      @param rcpCommand: Default rcp-compatible copy command for staging.
      @param rshCommand: Default rsh-compatible command to use for remote shells.
      @param cbackCommand: Default cback-compatible command to use on managed remote peers.
      @param overrides: List of configured command path overrides, if any.
      @param hooks: List of configured pre- and post-action hooks.
      @param managedActions: Default set of actions that are managed on remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._startingDay = None
      self._workingDir = None
      self._backupUser = None
      self._backupGroup = None
      self._rcpCommand = None
      self._rshCommand = None
      self._cbackCommand = None
      self._overrides = None
      self._hooks = None
      self._managedActions = None
      self.startingDay = startingDay
      self.workingDir = workingDir
      self.backupUser = backupUser
      self.backupGroup = backupGroup
      self.rcpCommand = rcpCommand
      self.rshCommand = rshCommand
      self.cbackCommand = cbackCommand
      self.overrides = overrides
      self.hooks = hooks
      self.managedActions = managedActions

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "OptionsConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (self.startingDay, self.workingDir,
                                                                        self.backupUser, self.backupGroup,
                                                                        self.rcpCommand, self.overrides,
                                                                        self.hooks, self.rshCommand,
                                                                        self.cbackCommand, self.managedActions)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.startingDay != other.startingDay:
         if self.startingDay < other.startingDay:
            return -1
         else:
            return 1
      if self.workingDir != other.workingDir:
         if self.workingDir < other.workingDir:
            return -1
         else:
            return 1
      if self.backupUser != other.backupUser:
         if self.backupUser < other.backupUser:
            return -1
         else:
            return 1
      if self.backupGroup != other.backupGroup:
         if self.backupGroup < other.backupGroup:
            return -1
         else:
            return 1
      if self.rcpCommand != other.rcpCommand:
         if self.rcpCommand < other.rcpCommand:
            return -1
         else:
            return 1
      if self.rshCommand != other.rshCommand:
         if self.rshCommand < other.rshCommand:
            return -1
         else:
            return 1
      if self.cbackCommand != other.cbackCommand:
         if self.cbackCommand < other.cbackCommand:
            return -1
         else:
            return 1
      if self.overrides != other.overrides:
         if self.overrides < other.overrides:
            return -1
         else:
            return 1
      if self.hooks != other.hooks:
         if self.hooks < other.hooks:
            return -1
         else:
            return 1
      if self.managedActions != other.managedActions:
         if self.managedActions < other.managedActions:
            return -1
         else:
            return 1
      return 0

   def addOverride(self, command, absolutePath):
      """
      If no override currently exists for the command, add one.
      @param command: Name of command to be overridden.
      @param absolutePath: Absolute path of the overridden command.
      """
      override = CommandOverride(command, absolutePath)
      if self.overrides is None:
         self.overrides = [ override, ]
      else:
         exists = False
         for obj in self.overrides:
            if obj.command == override.command:
               exists = True
               break
         if not exists:
            self.overrides.append(override)

   def replaceOverride(self, command, absolutePath):
      """
      If override currently exists for the command, replace it; otherwise add it.
      @param command: Name of command to be overridden.
      @param absolutePath: Absolute path of the overridden command.
      """
      override = CommandOverride(command, absolutePath)
      if self.overrides is None:
         self.overrides = [ override, ]
      else:
         exists = False
         for obj in self.overrides:
            if obj.command == override.command:
               exists = True
               obj.absolutePath = override.absolutePath
               break
         if not exists:
            self.overrides.append(override)
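The difference between the two methods above is subtle: `addOverride` keeps an existing override for a command, while `replaceOverride` updates it in place. A standalone sketch of both semantics (the `Override` class and helper names are illustrative, not the real `CommandOverride` API):

```python
# Contrast "add only if absent" with "replace or add" override semantics.

class Override(object):
    def __init__(self, command, absolutePath):
        self.command = command
        self.absolutePath = absolutePath

def add_override(overrides, command, path):
    """Add an override only if the command has none yet; return the list."""
    if not any(o.command == command for o in overrides):
        overrides.append(Override(command, path))
    return overrides

def replace_override(overrides, command, path):
    """Replace an existing override for the command, or add a new one."""
    for o in overrides:
        if o.command == command:
            o.absolutePath = path   # update in place, keep list length
            return overrides
    overrides.append(Override(command, path))
    return overrides
```

Either way, at most one override per command name ever exists in the list.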
   def _setStartingDay(self, value):
      """
      Property target used to set the starting day.
      If it is not C{None}, the value must be a valid English day of the week,
      one of C{"monday"}, C{"tuesday"}, C{"wednesday"}, etc.
      @raise ValueError: If the value is not a valid day of the week.
      """
      if value is not None:
         if value not in ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday", ]:
            raise ValueError("Starting day must be an English day of the week, i.e. \"monday\".")
      self._startingDay = value

   def _getStartingDay(self):
      """
      Property target used to get the starting day.
      """
      return self._startingDay

   def _setWorkingDir(self, value):
      """
      Property target used to set the working directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Working directory must be an absolute path.")
      self._workingDir = encodePath(value)

   def _getWorkingDir(self):
      """
      Property target used to get the working directory.
      """
      return self._workingDir

   def _setBackupUser(self, value):
      """
      Property target used to set the backup user.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Backup user must be a non-empty string.")
      self._backupUser = value

   def _getBackupUser(self):
      """
      Property target used to get the backup user.
      """
      return self._backupUser

   def _setBackupGroup(self, value):
      """
      Property target used to set the backup group.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Backup group must be a non-empty string.")
      self._backupGroup = value

   def _getBackupGroup(self):
      """
      Property target used to get the backup group.
      """
      return self._backupGroup

   def _setRcpCommand(self, value):
      """
      Property target used to set the rcp command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rcp command must be a non-empty string.")
      self._rcpCommand = value

   def _getRcpCommand(self):
      """
      Property target used to get the rcp command.
      """
      return self._rcpCommand

   def _setRshCommand(self, value):
      """
      Property target used to set the rsh command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The rsh command must be a non-empty string.")
      self._rshCommand = value

   def _getRshCommand(self):
      """
      Property target used to get the rsh command.
      """
      return self._rshCommand

   def _setCbackCommand(self, value):
      """
      Property target used to set the cback command.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The cback command must be a non-empty string.")
      self._cbackCommand = value

   def _getCbackCommand(self):
      """
      Property target used to get the cback command.
      """
      return self._cbackCommand

   def _setOverrides(self, value):
      """
      Property target used to set the command path overrides list.
      Either the value must be C{None} or each element must be a C{CommandOverride}.
      @raise ValueError: If the value is not a C{CommandOverride}
      """
      if value is None:
         self._overrides = None
      else:
         try:
            saved = self._overrides
            self._overrides = ObjectTypeList(CommandOverride, "CommandOverride")
            self._overrides.extend(value)
         except Exception, e:
            self._overrides = saved
            raise e

   def _getOverrides(self):
      """
      Property target used to get the command path overrides list.
      """
      return self._overrides

   def _setHooks(self, value):
      """
      Property target used to set the pre- and post-action hooks list.
      Either the value must be C{None} or each element must be an C{ActionHook}.
      @raise ValueError: If the value is not an C{ActionHook}
      """
      if value is None:
         self._hooks = None
      else:
         try:
            saved = self._hooks
            self._hooks = ObjectTypeList(ActionHook, "ActionHook")
            self._hooks.extend(value)
         except Exception, e:
            self._hooks = saved
            raise e

   def _getHooks(self):
      """
      Property target used to get the pre- and post-action hooks list.
      """
      return self._hooks

   def _setManagedActions(self, value):
      """
      Property target used to set the managed actions list.
      Elements do not have to exist on disk at the time of assignment.
      """
      if value is None:
         self._managedActions = None
      else:
         try:
            saved = self._managedActions
            self._managedActions = RegexMatchList(ACTION_NAME_REGEX, emptyAllowed=False, prefix="Action name")
            self._managedActions.extend(value)
         except Exception, e:
            self._managedActions = saved
            raise e

   def _getManagedActions(self):
      """
      Property target used to get the managed actions list.
      """
      return self._managedActions

   startingDay = property(_getStartingDay, _setStartingDay, None, "Day that starts the week.")
   workingDir = property(_getWorkingDir, _setWorkingDir, None, "Working (temporary) directory to use for backups.")
   backupUser = property(_getBackupUser, _setBackupUser, None, "Effective user that backups should run as.")
   backupGroup = property(_getBackupGroup, _setBackupGroup, None, "Effective group that backups should run as.")
   rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "Default rcp-compatible copy command for staging.")
   rshCommand = property(_getRshCommand, _setRshCommand, None, "Default rsh-compatible command to use for remote shells.")
   cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "Default cback-compatible command to use on managed remote peers.")
   overrides = property(_getOverrides, _setOverrides, None, "List of configured command path overrides, if any.")
   hooks = property(_getHooks, _setHooks, None, "List of configured pre- and post-action hooks.")
   managedActions = property(_getManagedActions, _setManagedActions, None, "Default set of actions that are managed on remote peers.")
    2806 2807 ######################################################################## 2808 # PeersConfig class definition 2809 ######################################################################## 2810 2811 -class PeersConfig(object):
    2812 2813 """ 2814 Class representing Cedar Backup global peer configuration. 2815 2816 This section contains a list of local and remote peers in a master's backup 2817 pool. The section is optional. If a master does not define this section, 2818 then all peers are unmanaged, and the stage configuration section must 2819 explicitly list any peer that is to be staged. If this section is 2820 configured, then peers may be managed or unmanaged, and the stage section 2821 peer configuration (if any) completely overrides this configuration. 2822 2823 The following restrictions exist on data in this class: 2824 2825 - The list of local peers must contain only C{LocalPeer} objects 2826 - The list of remote peers must contain only C{RemotePeer} objects 2827 2828 @note: Lists within this class are "unordered" for equality comparisons. 2829 2830 @sort: __init__, __repr__, __str__, __cmp__, localPeers, remotePeers 2831 """ 2832
    2833 - def __init__(self, localPeers=None, remotePeers=None):
    2834 """ 2835 Constructor for the C{PeersConfig} class. 2836 2837 @param localPeers: List of local peers. 2838 @param remotePeers: List of remote peers. 2839 2840 @raise ValueError: If one of the values is invalid. 2841 """ 2842 self._localPeers = None 2843 self._remotePeers = None 2844 self.localPeers = localPeers 2845 self.remotePeers = remotePeers
    2846
    2847 - def __repr__(self):
    2848 """ 2849 Official string representation for class instance. 2850 """ 2851 return "PeersConfig(%s, %s)" % (self.localPeers, self.remotePeers)
    2852
    2853 - def __str__(self):
    2854 """ 2855 Informal string representation for class instance. 2856 """ 2857 return self.__repr__()
    2858
    2859 - def __cmp__(self, other):
    2860 """ 2861 Definition of equals operator for this class. 2862 Lists within this class are "unordered" for equality comparisons. 2863 @param other: Other object to compare to. 2864 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 2865 """ 2866 if other is None: 2867 return 1 2868 if self.localPeers != other.localPeers: 2869 if self.localPeers < other.localPeers: 2870 return -1 2871 else: 2872 return 1 2873 if self.remotePeers != other.remotePeers: 2874 if self.remotePeers < other.remotePeers: 2875 return -1 2876 else: 2877 return 1 2878 return 0
    2879
    2880 - def hasPeers(self):
    2881 """ 2882 Indicates whether any peers are filled into this object. 2883 @return: Boolean true if any local or remote peers are filled in, false otherwise. 2884 """ 2885 return ((self.localPeers is not None and len(self.localPeers) > 0) or 2886 (self.remotePeers is not None and len(self.remotePeers) > 0))
    2887
    2888 - def _setLocalPeers(self, value):
    2889 """ 2890 Property target used to set the local peers list. 2891 Either the value must be C{None} or each element must be a C{LocalPeer}. 2892 @raise ValueError: If the value is not an absolute path. 2893 """ 2894 if value is None: 2895 self._localPeers = None 2896 else: 2897 try: 2898 saved = self._localPeers 2899 self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer") 2900 self._localPeers.extend(value) 2901 except Exception, e: 2902 self._localPeers = saved 2903 raise e
    2904
    2905 - def _getLocalPeers(self):
    2906 """ 2907 Property target used to get the local peers list. 2908 """ 2909 return self._localPeers
    2910
    2911 - def _setRemotePeers(self, value):
    2912 """ 2913 Property target used to set the remote peers list. 2914 Either the value must be C{None} or each element must be a C{RemotePeer}. 2915 @raise ValueError: If the value is not a C{RemotePeer} 2916 """ 2917 if value is None: 2918 self._remotePeers = None 2919 else: 2920 try: 2921 saved = self._remotePeers 2922 self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer") 2923 self._remotePeers.extend(value) 2924 except Exception, e: 2925 self._remotePeers = saved 2926 raise e
    2927
    2928 - def _getRemotePeers(self):
    2929 """ 2930 Property target used to get the remote peers list. 2931 """ 2932 return self._remotePeers
    2933 2934 localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.") 2935 remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.")
    2936
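The peer setters above lean on `util.ObjectTypeList`: a list subclass whose `append`/`extend` reject elements of the wrong type, so `_setLocalPeers` and `_setRemotePeers` get validation transparently. A sketch of the assumed behavior (`TypedList` here is illustrative, not the real class):

```python
# Sketch of a type-validating list in the style of util.ObjectTypeList.

class TypedList(list):
    def __init__(self, objectType, objectName):
        list.__init__(self)
        self._objectType = objectType
        self._objectName = objectName

    def append(self, item):
        if not isinstance(item, self._objectType):
            raise ValueError("Element must be a %s." % self._objectName)
        list.append(self, item)

    def extend(self, items):
        for item in items:
            self.append(item)   # reuse the type check in append()
```

Because validation lives in the list itself, every setter that copies caller-supplied values through `extend()` gets the same type guarantee with no extra code.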
    2937 2938 ######################################################################## 2939 # CollectConfig class definition 2940 ######################################################################## 2941 2942 -class CollectConfig(object):
    2943 2944 """ 2945 Class representing a Cedar Backup collect configuration. 2946 2947 The following restrictions exist on data in this class: 2948 2949 - The target directory must be an absolute path. 2950 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 2951 - The archive mode must be one of the values in L{VALID_ARCHIVE_MODES}. 2952 - The ignore file must be a non-empty string. 2953 - Each of the paths in C{absoluteExcludePaths} must be an absolute path 2954 - The collect file list must be a list of C{CollectFile} objects. 2955 - The collect directory list must be a list of C{CollectDir} objects. 2956 2957 For the C{absoluteExcludePaths} list, validation is accomplished through the 2958 L{util.AbsolutePathList} list implementation that overrides common list 2959 methods and transparently does the absolute path validation for us. 2960 2961 For the C{collectFiles} and C{collectDirs} list, validation is accomplished 2962 through the L{util.ObjectTypeList} list implementation that overrides common 2963 list methods and transparently ensures that each element has an appropriate 2964 type. 2965 2966 @note: Lists within this class are "unordered" for equality comparisons. 2967 2968 @sort: __init__, __repr__, __str__, __cmp__, targetDir, 2969 collectMode, archiveMode, ignoreFile, absoluteExcludePaths, 2970 excludePatterns, collectFiles, collectDirs 2971 """ 2972

   def __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None,
                absoluteExcludePaths=None, excludePatterns=None, collectFiles=None,
                collectDirs=None):
      """
      Constructor for the C{CollectConfig} class.

      @param targetDir: Directory to collect files into.
      @param collectMode: Default collect mode.
      @param archiveMode: Default archive mode for collect files.
      @param ignoreFile: Default ignore file name.
      @param absoluteExcludePaths: List of absolute paths to exclude.
      @param excludePatterns: List of regular expression patterns to exclude.
      @param collectFiles: List of collect files.
      @param collectDirs: List of collect directories.

      @raise ValueError: If one of the values is invalid.
      """
      self._targetDir = None
      self._collectMode = None
      self._archiveMode = None
      self._ignoreFile = None
      self._absoluteExcludePaths = None
      self._excludePatterns = None
      self._collectFiles = None
      self._collectDirs = None
      self.targetDir = targetDir
      self.collectMode = collectMode
      self.archiveMode = archiveMode
      self.ignoreFile = ignoreFile
      self.absoluteExcludePaths = absoluteExcludePaths
      self.excludePatterns = excludePatterns
      self.collectFiles = collectFiles
      self.collectDirs = collectDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CollectConfig(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.targetDir, self.collectMode, self.archiveMode,
                                                                self.ignoreFile, self.absoluteExcludePaths,
                                                                self.excludePatterns, self.collectFiles, self.collectDirs)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.targetDir != other.targetDir:
         if self.targetDir < other.targetDir:
            return -1
         else:
            return 1
      if self.collectMode != other.collectMode:
         if self.collectMode < other.collectMode:
            return -1
         else:
            return 1
      if self.archiveMode != other.archiveMode:
         if self.archiveMode < other.archiveMode:
            return -1
         else:
            return 1
      if self.ignoreFile != other.ignoreFile:
         if self.ignoreFile < other.ignoreFile:
            return -1
         else:
            return 1
      if self.absoluteExcludePaths != other.absoluteExcludePaths:
         if self.absoluteExcludePaths < other.absoluteExcludePaths:
            return -1
         else:
            return 1
      if self.excludePatterns != other.excludePatterns:
         if self.excludePatterns < other.excludePatterns:
            return -1
         else:
            return 1
      if self.collectFiles != other.collectFiles:
         if self.collectFiles < other.collectFiles:
            return -1
         else:
            return 1
      if self.collectDirs != other.collectDirs:
         if self.collectDirs < other.collectDirs:
            return -1
         else:
            return 1
      return 0

   def _setTargetDir(self, value):
      """
      Property target used to set the target directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
      self._targetDir = encodePath(value)

   def _getTargetDir(self):
      """
      Property target used to get the target directory.
      """
      return self._targetDir

   def _setCollectMode(self, value):
      """
      Property target used to set the collect mode.
      If not C{None}, the mode must be one of L{VALID_COLLECT_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_COLLECT_MODES:
            raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
      self._collectMode = value

   def _getCollectMode(self):
      """
      Property target used to get the collect mode.
      """
      return self._collectMode

   def _setArchiveMode(self, value):
      """
      Property target used to set the archive mode.
      If not C{None}, the mode must be one of L{VALID_ARCHIVE_MODES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_ARCHIVE_MODES:
            raise ValueError("Archive mode must be one of %s." % VALID_ARCHIVE_MODES)
      self._archiveMode = value

   def _getArchiveMode(self):
      """
      Property target used to get the archive mode.
      """
      return self._archiveMode

   def _setIgnoreFile(self, value):
      """
      Property target used to set the ignore file.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The ignore file must be a non-empty string.")
      self._ignoreFile = encodePath(value)

   def _getIgnoreFile(self):
      """
      Property target used to get the ignore file.
      """
      return self._ignoreFile

   def _setAbsoluteExcludePaths(self, value):
      """
      Property target used to set the absolute exclude paths list.
      Either the value must be C{None} or each element must be an absolute path.
      Elements do not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      """
      if value is None:
         self._absoluteExcludePaths = None
      else:
         try:
            saved = self._absoluteExcludePaths
            self._absoluteExcludePaths = AbsolutePathList()
            self._absoluteExcludePaths.extend(value)
         except Exception, e:
            self._absoluteExcludePaths = saved
            raise e

   def _getAbsoluteExcludePaths(self):
      """
      Property target used to get the absolute exclude paths list.
      """
      return self._absoluteExcludePaths

   def _setExcludePatterns(self, value):
      """
      Property target used to set the exclude patterns list.
      """
      if value is None:
         self._excludePatterns = None
      else:
         try:
            saved = self._excludePatterns
            self._excludePatterns = RegexList()
            self._excludePatterns.extend(value)
         except Exception, e:
            self._excludePatterns = saved
            raise e

   def _getExcludePatterns(self):
      """
      Property target used to get the exclude patterns list.
      """
      return self._excludePatterns

   def _setCollectFiles(self, value):
      """
      Property target used to set the collect files list.
      Either the value must be C{None} or each element must be a C{CollectFile}.
      @raise ValueError: If the value is not a C{CollectFile}
      """
      if value is None:
         self._collectFiles = None
      else:
         try:
            saved = self._collectFiles
            self._collectFiles = ObjectTypeList(CollectFile, "CollectFile")
            self._collectFiles.extend(value)
         except Exception, e:
            self._collectFiles = saved
            raise e

   def _getCollectFiles(self):
      """
      Property target used to get the collect files list.
      """
      return self._collectFiles

   def _setCollectDirs(self, value):
      """
      Property target used to set the collect dirs list.
      Either the value must be C{None} or each element must be a C{CollectDir}.
      @raise ValueError: If the value is not a C{CollectDir}
      """
      if value is None:
         self._collectDirs = None
      else:
         try:
            saved = self._collectDirs
            self._collectDirs = ObjectTypeList(CollectDir, "CollectDir")
            self._collectDirs.extend(value)
         except Exception, e:
            self._collectDirs = saved
            raise e

   def _getCollectDirs(self):
      """
      Property target used to get the collect dirs list.
      """
      return self._collectDirs

   targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to collect files into.")
   collectMode = property(_getCollectMode, _setCollectMode, None, "Default collect mode.")
   archiveMode = property(_getArchiveMode, _setArchiveMode, None, "Default archive mode for collect files.")
   ignoreFile = property(_getIgnoreFile, _setIgnoreFile, None, "Default ignore file name.")
   absoluteExcludePaths = property(_getAbsoluteExcludePaths, _setAbsoluteExcludePaths, None, "List of absolute paths to exclude.")
   excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")
   collectFiles = property(_getCollectFiles, _setCollectFiles, None, "List of collect files.")
   collectDirs = property(_getCollectDirs, _setCollectDirs, None, "List of collect directories.")
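# The list setters above all follow the same save-and-restore idiom: build a
# self-validating list, extend it, and roll back to the previous value if any
# element fails validation. A minimal standalone sketch of that pattern follows;
# TypedList and Holder are hypothetical stand-ins for util.ObjectTypeList and a
# config class, shown only to illustrate the idiom.

```python
class TypedList(list):
   """Sketch of a list that accepts only instances of a given type."""
   def __init__(self, objectType):
      list.__init__(self)
      self.objectType = objectType
   def append(self, item):
      if not isinstance(item, self.objectType):
         raise ValueError("Element must be a %s." % self.objectType.__name__)
      list.append(self, item)
   def extend(self, items):
      for item in items:
         self.append(item)

class Holder(object):
   """Sketch of the rollback-on-failure property setter used above."""
   def __init__(self):
      self._values = None
   def _setValues(self, value):
      if value is None:
         self._values = None
      else:
         try:
            saved = self._values
            self._values = TypedList(int)
            self._values.extend(value)
         except Exception:
            self._values = saved  # restore the previous list on any failure
            raise
   def _getValues(self):
      return self._values
   values = property(_getValues, _setValues, None, "List of integers.")
```

# The point of the rollback is that a failed assignment leaves the object in
# its previous, still-valid state rather than holding a partially-built list.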

########################################################################
# StageConfig class definition
########################################################################

class StageConfig(object):

   """
   Class representing a Cedar Backup stage configuration.

   The following restrictions exist on data in this class:

      - The target directory must be an absolute path.
      - The list of local peers must contain only C{LocalPeer} objects.
      - The list of remote peers must contain only C{RemotePeer} objects.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, targetDir, localPeers, remotePeers
   """

   def __init__(self, targetDir=None, localPeers=None, remotePeers=None):
      """
      Constructor for the C{StageConfig} class.

      @param targetDir: Directory to stage files into, by peer name.
      @param localPeers: List of local peers.
      @param remotePeers: List of remote peers.

      @raise ValueError: If one of the values is invalid.
      """
      self._targetDir = None
      self._localPeers = None
      self._remotePeers = None
      self.targetDir = targetDir
      self.localPeers = localPeers
      self.remotePeers = remotePeers

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "StageConfig(%s, %s, %s)" % (self.targetDir, self.localPeers, self.remotePeers)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.targetDir != other.targetDir:
         if self.targetDir < other.targetDir:
            return -1
         else:
            return 1
      if self.localPeers != other.localPeers:
         if self.localPeers < other.localPeers:
            return -1
         else:
            return 1
      if self.remotePeers != other.remotePeers:
         if self.remotePeers < other.remotePeers:
            return -1
         else:
            return 1
      return 0

   def hasPeers(self):
      """
      Indicates whether any peers are filled into this object.
      @return: Boolean true if any local or remote peers are filled in, false otherwise.
      """
      return ((self.localPeers is not None and len(self.localPeers) > 0) or
              (self.remotePeers is not None and len(self.remotePeers) > 0))

   def _setTargetDir(self, value):
      """
      Property target used to set the target directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Target directory must be an absolute path.")
      self._targetDir = encodePath(value)

   def _getTargetDir(self):
      """
      Property target used to get the target directory.
      """
      return self._targetDir

   def _setLocalPeers(self, value):
      """
      Property target used to set the local peers list.
      Either the value must be C{None} or each element must be a C{LocalPeer}.
      @raise ValueError: If the value is not a C{LocalPeer}
      """
      if value is None:
         self._localPeers = None
      else:
         try:
            saved = self._localPeers
            self._localPeers = ObjectTypeList(LocalPeer, "LocalPeer")
            self._localPeers.extend(value)
         except Exception, e:
            self._localPeers = saved
            raise e

   def _getLocalPeers(self):
      """
      Property target used to get the local peers list.
      """
      return self._localPeers

   def _setRemotePeers(self, value):
      """
      Property target used to set the remote peers list.
      Either the value must be C{None} or each element must be a C{RemotePeer}.
      @raise ValueError: If the value is not a C{RemotePeer}
      """
      if value is None:
         self._remotePeers = None
      else:
         try:
            saved = self._remotePeers
            self._remotePeers = ObjectTypeList(RemotePeer, "RemotePeer")
            self._remotePeers.extend(value)
         except Exception, e:
            self._remotePeers = saved
            raise e

   def _getRemotePeers(self):
      """
      Property target used to get the remote peers list.
      """
      return self._remotePeers

   targetDir = property(_getTargetDir, _setTargetDir, None, "Directory to stage files into, by peer name.")
   localPeers = property(_getLocalPeers, _setLocalPeers, None, "List of local peers.")
   remotePeers = property(_getRemotePeers, _setRemotePeers, None, "List of remote peers.")
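# Each hand-written __cmp__ above compares attributes in a fixed order and
# returns -1/0/1 for the first pair that differs. The chains are equivalent to
# the following generic sketch; cmpFields is a hypothetical helper written only
# for illustration, not part of Cedar Backup.

```python
def cmpFields(obj1, obj2, fields):
   """Compare obj1 and obj2 attribute-by-attribute, returning -1/0/1."""
   if obj2 is None:
      return 1  # any object sorts after None, as in the methods above
   for field in fields:
      value1 = getattr(obj1, field)
      value2 = getattr(obj2, field)
      if value1 != value2:
         if value1 < value2:
            return -1
         else:
            return 1
   return 0
```

# For example, cmpFields(self, other, ["targetDir", "localPeers", "remotePeers"])
# would compute the same result as StageConfig.__cmp__.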

########################################################################
# StoreConfig class definition
########################################################################

class StoreConfig(object):

   """
   Class representing a Cedar Backup store configuration.

   The following restrictions exist on data in this class:

      - The source directory must be an absolute path.
      - The media type must be one of the values in L{VALID_MEDIA_TYPES}.
      - The device type must be one of the values in L{VALID_DEVICE_TYPES}.
      - The device path must be an absolute path.
      - The SCSI id, if provided, must be in the form specified by L{validateScsiId}.
      - The drive speed must be an integer >= 1.
      - The blanking behavior must be a C{BlankBehavior} object.
      - The refresh media delay must be an integer >= 0.
      - The eject delay must be an integer >= 0.

   Note that although the blanking factor must be a positive floating point
   number, it is stored as a string. This is done so that we can losslessly go
   back and forth between XML and object representations of configuration.

   @sort: __init__, __repr__, __str__, __cmp__, sourceDir,
          mediaType, deviceType, devicePath, deviceScsiId,
          driveSpeed, checkData, checkMedia, warnMidnite, noEject,
          blankBehavior, refreshMediaDelay, ejectDelay
   """

   def __init__(self, sourceDir=None, mediaType=None, deviceType=None,
                devicePath=None, deviceScsiId=None, driveSpeed=None,
                checkData=False, warnMidnite=False, noEject=False,
                checkMedia=False, blankBehavior=None, refreshMediaDelay=None,
                ejectDelay=None):
      """
      Constructor for the C{StoreConfig} class.

      @param sourceDir: Directory whose contents should be written to media.
      @param mediaType: Type of the media (see notes above).
      @param deviceType: Type of the device (optional, see notes above).
      @param devicePath: Filesystem device name for writer device, i.e. C{/dev/cdrw}.
      @param deviceScsiId: SCSI id for writer device, i.e. C{[<method>:]scsibus,target,lun}.
      @param driveSpeed: Speed of the drive, i.e. C{2} for 2x drive, etc.
      @param checkData: Whether resulting image should be validated.
      @param checkMedia: Whether media should be checked before being written to.
      @param warnMidnite: Whether to generate warnings for crossing midnite.
      @param noEject: Indicates that the writer device should not be ejected.
      @param blankBehavior: Controls optimized blanking behavior.
      @param refreshMediaDelay: Delay, in seconds, to add after refreshing media.
      @param ejectDelay: Delay, in seconds, to add after ejecting media before closing the tray.

      @raise ValueError: If one of the values is invalid.
      """
      self._sourceDir = None
      self._mediaType = None
      self._deviceType = None
      self._devicePath = None
      self._deviceScsiId = None
      self._driveSpeed = None
      self._checkData = None
      self._checkMedia = None
      self._warnMidnite = None
      self._noEject = None
      self._blankBehavior = None
      self._refreshMediaDelay = None
      self._ejectDelay = None
      self.sourceDir = sourceDir
      self.mediaType = mediaType
      self.deviceType = deviceType
      self.devicePath = devicePath
      self.deviceScsiId = deviceScsiId
      self.driveSpeed = driveSpeed
      self.checkData = checkData
      self.checkMedia = checkMedia
      self.warnMidnite = warnMidnite
      self.noEject = noEject
      self.blankBehavior = blankBehavior
      self.refreshMediaDelay = refreshMediaDelay
      self.ejectDelay = ejectDelay

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "StoreConfig(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)" % (
             self.sourceDir, self.mediaType, self.deviceType,
             self.devicePath, self.deviceScsiId, self.driveSpeed,
             self.checkData, self.warnMidnite, self.noEject,
             self.checkMedia, self.blankBehavior, self.refreshMediaDelay,
             self.ejectDelay)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.sourceDir != other.sourceDir:
         if self.sourceDir < other.sourceDir:
            return -1
         else:
            return 1
      if self.mediaType != other.mediaType:
         if self.mediaType < other.mediaType:
            return -1
         else:
            return 1
      if self.deviceType != other.deviceType:
         if self.deviceType < other.deviceType:
            return -1
         else:
            return 1
      if self.devicePath != other.devicePath:
         if self.devicePath < other.devicePath:
            return -1
         else:
            return 1
      if self.deviceScsiId != other.deviceScsiId:
         if self.deviceScsiId < other.deviceScsiId:
            return -1
         else:
            return 1
      if self.driveSpeed != other.driveSpeed:
         if self.driveSpeed < other.driveSpeed:
            return -1
         else:
            return 1
      if self.checkData != other.checkData:
         if self.checkData < other.checkData:
            return -1
         else:
            return 1
      if self.checkMedia != other.checkMedia:
         if self.checkMedia < other.checkMedia:
            return -1
         else:
            return 1
      if self.warnMidnite != other.warnMidnite:
         if self.warnMidnite < other.warnMidnite:
            return -1
         else:
            return 1
      if self.noEject != other.noEject:
         if self.noEject < other.noEject:
            return -1
         else:
            return 1
      if self.blankBehavior != other.blankBehavior:
         if self.blankBehavior < other.blankBehavior:
            return -1
         else:
            return 1
      if self.refreshMediaDelay != other.refreshMediaDelay:
         if self.refreshMediaDelay < other.refreshMediaDelay:
            return -1
         else:
            return 1
      if self.ejectDelay != other.ejectDelay:
         if self.ejectDelay < other.ejectDelay:
            return -1
         else:
            return 1
      return 0

   def _setSourceDir(self, value):
      """
      Property target used to set the source directory.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Source directory must be an absolute path.")
      self._sourceDir = encodePath(value)

   def _getSourceDir(self):
      """
      Property target used to get the source directory.
      """
      return self._sourceDir

   def _setMediaType(self, value):
      """
      Property target used to set the media type.
      The value must be one of L{VALID_MEDIA_TYPES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_MEDIA_TYPES:
            raise ValueError("Media type must be one of %s." % VALID_MEDIA_TYPES)
      self._mediaType = value

   def _getMediaType(self):
      """
      Property target used to get the media type.
      """
      return self._mediaType

   def _setDeviceType(self, value):
      """
      Property target used to set the device type.
      The value must be one of L{VALID_DEVICE_TYPES}.
      @raise ValueError: If the value is not valid.
      """
      if value is not None:
         if value not in VALID_DEVICE_TYPES:
            raise ValueError("Device type must be one of %s." % VALID_DEVICE_TYPES)
      self._deviceType = value

   def _getDeviceType(self):
      """
      Property target used to get the device type.
      """
      return self._deviceType

   def _setDevicePath(self, value):
      """
      Property target used to set the device path.
      The value must be an absolute path if it is not C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is not an absolute path.
      @raise ValueError: If the value cannot be encoded properly.
      """
      if value is not None:
         if not os.path.isabs(value):
            raise ValueError("Device path must be an absolute path.")
      self._devicePath = encodePath(value)

   def _getDevicePath(self):
      """
      Property target used to get the device path.
      """
      return self._devicePath

   def _setDeviceScsiId(self, value):
      """
      Property target used to set the SCSI id.
      The SCSI id must be valid per L{validateScsiId}.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._deviceScsiId = None
      else:
         self._deviceScsiId = validateScsiId(value)

   def _getDeviceScsiId(self):
      """
      Property target used to get the SCSI id.
      """
      return self._deviceScsiId

   def _setDriveSpeed(self, value):
      """
      Property target used to set the drive speed.
      The drive speed must be valid per L{validateDriveSpeed}.
      @raise ValueError: If the value is not valid.
      """
      self._driveSpeed = validateDriveSpeed(value)

   def _getDriveSpeed(self):
      """
      Property target used to get the drive speed.
      """
      return self._driveSpeed

   def _setCheckData(self, value):
      """
      Property target used to set the check data flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._checkData = True
      else:
         self._checkData = False

   def _getCheckData(self):
      """
      Property target used to get the check data flag.
      """
      return self._checkData

   def _setCheckMedia(self, value):
      """
      Property target used to set the check media flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._checkMedia = True
      else:
         self._checkMedia = False

   def _getCheckMedia(self):
      """
      Property target used to get the check media flag.
      """
      return self._checkMedia

   def _setWarnMidnite(self, value):
      """
      Property target used to set the midnite warning flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._warnMidnite = True
      else:
         self._warnMidnite = False

   def _getWarnMidnite(self):
      """
      Property target used to get the midnite warning flag.
      """
      return self._warnMidnite

   def _setNoEject(self, value):
      """
      Property target used to set the no-eject flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._noEject = True
      else:
         self._noEject = False

   def _getNoEject(self):
      """
      Property target used to get the no-eject flag.
      """
      return self._noEject

   def _setBlankBehavior(self, value):
      """
      Property target used to set blanking behavior configuration.
      If not C{None}, the value must be a C{BlankBehavior} object.
      @raise ValueError: If the value is not a C{BlankBehavior}
      """
      if value is None:
         self._blankBehavior = None
      else:
         if not isinstance(value, BlankBehavior):
            raise ValueError("Value must be a C{BlankBehavior} object.")
         self._blankBehavior = value

   def _getBlankBehavior(self):
      """
      Property target used to get the blanking behavior configuration.
      """
      return self._blankBehavior

   def _setRefreshMediaDelay(self, value):
      """
      Property target used to set the refreshMediaDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._refreshMediaDelay = None
      else:
         try:
            value = int(value)
         except TypeError:
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action refreshMediaDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._refreshMediaDelay = value

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the action refreshMediaDelay.
      """
      return self._refreshMediaDelay

   def _setEjectDelay(self, value):
      """
      Property target used to set the ejectDelay.
      The value must be an integer >= 0.
      @raise ValueError: If the value is not valid.
      """
      if value is None:
         self._ejectDelay = None
      else:
         try:
            value = int(value)
         except TypeError:
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value < 0:
            raise ValueError("Action ejectDelay value must be an integer >= 0.")
         if value == 0:
            value = None  # normalize this out, since it's the default
         self._ejectDelay = value

   def _getEjectDelay(self):
      """
      Property target used to get the action ejectDelay.
      """
      return self._ejectDelay

   sourceDir = property(_getSourceDir, _setSourceDir, None, "Directory whose contents should be written to media.")
   mediaType = property(_getMediaType, _setMediaType, None, "Type of the media (see notes above).")
   deviceType = property(_getDeviceType, _setDeviceType, None, "Type of the device (optional, see notes above).")
   devicePath = property(_getDevicePath, _setDevicePath, None, "Filesystem device name for writer device.")
   deviceScsiId = property(_getDeviceScsiId, _setDeviceScsiId, None, "SCSI id for writer device (optional, see notes above).")
   driveSpeed = property(_getDriveSpeed, _setDriveSpeed, None, "Speed of the drive.")
   checkData = property(_getCheckData, _setCheckData, None, "Whether resulting image should be validated.")
   checkMedia = property(_getCheckMedia, _setCheckMedia, None, "Whether media should be checked before being written to.")
   warnMidnite = property(_getWarnMidnite, _setWarnMidnite, None, "Whether to generate warnings for crossing midnite.")
   noEject = property(_getNoEject, _setNoEject, None, "Indicates that the writer device should not be ejected.")
   blankBehavior = property(_getBlankBehavior, _setBlankBehavior, None, "Controls optimized blanking behavior.")
   refreshMediaDelay = property(_getRefreshMediaDelay, _setRefreshMediaDelay, None, "Delay, in seconds, to add after refreshing media.")
   ejectDelay = property(_getEjectDelay, _setEjectDelay, None, "Delay, in seconds, to add after ejecting media before closing the tray.")
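# The refreshMediaDelay and ejectDelay setters share one validation rule:
# coerce to an integer, reject negatives, and normalize zero (the default)
# back to None. A standalone sketch follows; normalizeDelay is a hypothetical
# helper for illustration, and unlike the setters above it also catches
# ValueError so that non-numeric strings produce the same friendly message.

```python
def normalizeDelay(value):
   """Validate a delay in seconds, returning None for unset or zero."""
   if value is None:
      return None
   try:
      value = int(value)
   except (TypeError, ValueError):
      raise ValueError("Delay value must be an integer >= 0.")
   if value < 0:
      raise ValueError("Delay value must be an integer >= 0.")
   if value == 0:
      return None  # zero is the default behavior, so normalize it out
   return value
```

# Normalizing zero to None keeps the object representation identical whether
# the XML omitted the element or explicitly specified the default value.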

########################################################################
# PurgeConfig class definition
########################################################################

class PurgeConfig(object):

   """
   Class representing a Cedar Backup purge configuration.

   The following restrictions exist on data in this class:

      - The purge directory list must be a list of C{PurgeDir} objects.

   For the C{purgeDirs} list, validation is accomplished through the
   L{util.ObjectTypeList} list implementation that overrides common list
   methods and transparently ensures that each element is a C{PurgeDir}.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, purgeDirs
   """

   def __init__(self, purgeDirs=None):
      """
      Constructor for the C{PurgeConfig} class.
      @param purgeDirs: List of purge directories.
      @raise ValueError: If one of the values is invalid.
      """
      self._purgeDirs = None
      self.purgeDirs = purgeDirs

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PurgeConfig(%s)" % self.purgeDirs

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.purgeDirs != other.purgeDirs:
         if self.purgeDirs < other.purgeDirs:
            return -1
         else:
            return 1
      return 0

   def _setPurgeDirs(self, value):
      """
      Property target used to set the purge dirs list.
      Either the value must be C{None} or each element must be a C{PurgeDir}.
      @raise ValueError: If the value is not a C{PurgeDir}
      """
      if value is None:
         self._purgeDirs = None
      else:
         try:
            saved = self._purgeDirs
            self._purgeDirs = ObjectTypeList(PurgeDir, "PurgeDir")
            self._purgeDirs.extend(value)
         except Exception, e:
            self._purgeDirs = saved
            raise e

   def _getPurgeDirs(self):
      """
      Property target used to get the purge dirs list.
      """
      return self._purgeDirs

   purgeDirs = property(_getPurgeDirs, _setPurgeDirs, None, "List of directories to purge.")
    3905
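The C{purgeDirs} setter above delegates element-type checking to L{util.ObjectTypeList}, which is not shown in this listing. A minimal sketch of that pattern (a simplified stand-in, not the real C{ObjectTypeList}, which overrides more of the list interface) might look like:

```python
class TypedList(list):
    """List that only accepts elements of one fixed type.

    Sketch of the ObjectTypeList idea: common mutators are overridden
    so that every element is transparently checked on the way in.
    """

    def __init__(self, objectType, objectName):
        list.__init__(self)
        self.objectType = objectType    # required element type
        self.objectName = objectName    # name used in error messages

    def append(self, item):
        if not isinstance(item, self.objectType):
            raise ValueError("Elements must be %s objects." % self.objectName)
        list.append(self, item)

    def extend(self, items):
        for item in items:              # reuse the per-element check
            self.append(item)
```

With this in place, a setter like C{_setPurgeDirs} only needs to build a fresh typed list and C{extend} it with the caller's value; any bad element raises C{ValueError} before the assignment takes effect.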

########################################################################
# Config class definition
########################################################################

class Config(object):

   ######################
   # Class documentation
   ######################

   """
   Class representing a Cedar Backup XML configuration document.

   The C{Config} class is a Python object representation of a Cedar Backup XML
   configuration file.  It is intended to be the only Python-language interface
   to Cedar Backup configuration on disk for both Cedar Backup itself and for
   external applications.

   The object representation is two-way: XML data can be used to create a
   C{Config} object, and then changes to the object can be propagated back to
   disk.  A C{Config} object can even be used to create a configuration file
   from scratch programmatically.

   This class and the classes it is composed from often use Python's
   C{property} construct to validate input and limit access to values.  Some
   validations can only be done once a document is considered "complete"
   (see module notes for more details).

   Assignments to the various instance variables must match the expected
   type, i.e. C{reference} must be a C{ReferenceConfig}.  The internal check
   uses the built-in C{isinstance} function, so it should be OK to use
   subclasses if you want to.

   If an instance variable is not set, its value will be C{None}.  When an
   object is initialized without using an XML document, all of the values
   will be C{None}.  Even when an object is initialized using XML, some of
   the values might be C{None} because not every section is required.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, extractXml, validate,
          reference, extensions, options, collect, stage, store, purge,
          _getReference, _setReference, _getExtensions, _setExtensions,
          _getOptions, _setOptions, _getPeers, _setPeers, _getCollect,
          _setCollect, _getStage, _setStage, _getStore, _setStore,
          _getPurge, _setPurge
   """

   ##############
   # Constructor
   ##############

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath}, then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not) this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{Config.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._reference = None
      self._extensions = None
      self._options = None
      self._peers = None
      self._collect = None
      self._stage = None
      self._store = None
      self._purge = None
      self.reference = None
      self.extensions = None
      self.options = None
      self.peers = None
      self.collect = None
      self.stage = None
      self.store = None
      self.purge = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   #########################
   # String representations
   #########################

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "Config(%s, %s, %s, %s, %s, %s, %s, %s)" % (self.reference, self.extensions, self.options,
                                                         self.peers, self.collect, self.stage, self.store,
                                                         self.purge)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()


   #############################
   # Standard comparison method
   #############################

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.reference != other.reference:
         if self.reference < other.reference:
            return -1
         else:
            return 1
      if self.extensions != other.extensions:
         if self.extensions < other.extensions:
            return -1
         else:
            return 1
      if self.options != other.options:
         if self.options < other.options:
            return -1
         else:
            return 1
      if self.peers != other.peers:
         if self.peers < other.peers:
            return -1
         else:
            return 1
      if self.collect != other.collect:
         if self.collect < other.collect:
            return -1
         else:
            return 1
      if self.stage != other.stage:
         if self.stage < other.stage:
            return -1
         else:
            return 1
      if self.store != other.store:
         if self.store < other.store:
            return -1
         else:
            return 1
      if self.purge != other.purge:
         if self.purge < other.purge:
            return -1
         else:
            return 1
      return 0
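Every config class in this module repeats the same field-by-field three-way comparison. The boilerplate can be condensed into a helper like the following (a sketch for illustration, not part of Cedar Backup's API):

```python
def compareFields(self, other, fieldNames):
    """Return -1/0/1 by comparing the named attributes in order.

    Mirrors the __cmp__ methods above: a None peer object always
    compares greater, and the first unequal field decides the result.
    """
    if other is None:
        return 1
    for name in fieldNames:
        mine = getattr(self, name)
        theirs = getattr(other, name)
        if mine != theirs:
            return -1 if mine < theirs else 1
    return 0
```

A C{__cmp__} body then collapses to a single call, e.g. `compareFields(self, other, ["reference", "extensions", "options", "peers", "collect", "stage", "store", "purge"])`.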

   #############
   # Properties
   #############

   def _setReference(self, value):
      """
      Property target used to set the reference configuration value.
      If not C{None}, the value must be a C{ReferenceConfig} object.
      @raise ValueError: If the value is not a C{ReferenceConfig}
      """
      if value is None:
         self._reference = None
      else:
         if not isinstance(value, ReferenceConfig):
            raise ValueError("Value must be a ReferenceConfig object.")
         self._reference = value

   def _getReference(self):
      """
      Property target used to get the reference configuration value.
      """
      return self._reference

   def _setExtensions(self, value):
      """
      Property target used to set the extensions configuration value.
      If not C{None}, the value must be an C{ExtensionsConfig} object.
      @raise ValueError: If the value is not an C{ExtensionsConfig}
      """
      if value is None:
         self._extensions = None
      else:
         if not isinstance(value, ExtensionsConfig):
            raise ValueError("Value must be an ExtensionsConfig object.")
         self._extensions = value

   def _getExtensions(self):
      """
      Property target used to get the extensions configuration value.
      """
      return self._extensions

   def _setOptions(self, value):
      """
      Property target used to set the options configuration value.
      If not C{None}, the value must be an C{OptionsConfig} object.
      @raise ValueError: If the value is not an C{OptionsConfig}
      """
      if value is None:
         self._options = None
      else:
         if not isinstance(value, OptionsConfig):
            raise ValueError("Value must be an OptionsConfig object.")
         self._options = value

   def _getOptions(self):
      """
      Property target used to get the options configuration value.
      """
      return self._options

   def _setPeers(self, value):
      """
      Property target used to set the peers configuration value.
      If not C{None}, the value must be a C{PeersConfig} object.
      @raise ValueError: If the value is not a C{PeersConfig}
      """
      if value is None:
         self._peers = None
      else:
         if not isinstance(value, PeersConfig):
            raise ValueError("Value must be a PeersConfig object.")
         self._peers = value

   def _getPeers(self):
      """
      Property target used to get the peers configuration value.
      """
      return self._peers

   def _setCollect(self, value):
      """
      Property target used to set the collect configuration value.
      If not C{None}, the value must be a C{CollectConfig} object.
      @raise ValueError: If the value is not a C{CollectConfig}
      """
      if value is None:
         self._collect = None
      else:
         if not isinstance(value, CollectConfig):
            raise ValueError("Value must be a CollectConfig object.")
         self._collect = value

   def _getCollect(self):
      """
      Property target used to get the collect configuration value.
      """
      return self._collect

   def _setStage(self, value):
      """
      Property target used to set the stage configuration value.
      If not C{None}, the value must be a C{StageConfig} object.
      @raise ValueError: If the value is not a C{StageConfig}
      """
      if value is None:
         self._stage = None
      else:
         if not isinstance(value, StageConfig):
            raise ValueError("Value must be a StageConfig object.")
         self._stage = value

   def _getStage(self):
      """
      Property target used to get the stage configuration value.
      """
      return self._stage

   def _setStore(self, value):
      """
      Property target used to set the store configuration value.
      If not C{None}, the value must be a C{StoreConfig} object.
      @raise ValueError: If the value is not a C{StoreConfig}
      """
      if value is None:
         self._store = None
      else:
         if not isinstance(value, StoreConfig):
            raise ValueError("Value must be a StoreConfig object.")
         self._store = value

   def _getStore(self):
      """
      Property target used to get the store configuration value.
      """
      return self._store

   def _setPurge(self, value):
      """
      Property target used to set the purge configuration value.
      If not C{None}, the value must be a C{PurgeConfig} object.
      @raise ValueError: If the value is not a C{PurgeConfig}
      """
      if value is None:
         self._purge = None
      else:
         if not isinstance(value, PurgeConfig):
            raise ValueError("Value must be a PurgeConfig object.")
         self._purge = value

   def _getPurge(self):
      """
      Property target used to get the purge configuration value.
      """
      return self._purge

   reference = property(_getReference, _setReference, None, "Reference configuration in terms of a C{ReferenceConfig} object.")
   extensions = property(_getExtensions, _setExtensions, None, "Extensions configuration in terms of an C{ExtensionsConfig} object.")
   options = property(_getOptions, _setOptions, None, "Options configuration in terms of an C{OptionsConfig} object.")
   peers = property(_getPeers, _setPeers, None, "Peers configuration in terms of a C{PeersConfig} object.")
   collect = property(_getCollect, _setCollect, None, "Collect configuration in terms of a C{CollectConfig} object.")
   stage = property(_getStage, _setStage, None, "Stage configuration in terms of a C{StageConfig} object.")
   store = property(_getStore, _setStore, None, "Store configuration in terms of a C{StoreConfig} object.")
   purge = property(_getPurge, _setPurge, None, "Purge configuration in terms of a C{PurgeConfig} object.")


   #################
   # Public methods
   #################
   def extractXml(self, xmlPath=None, validate=True):
      """
      Extracts configuration into an XML document.

      If C{xmlPath} is not provided, then the XML document will be returned as
      a string.  If C{xmlPath} is provided, then the XML document will be written
      to the file and C{None} will be returned.

      Unless the C{validate} parameter is C{False}, the L{Config.validate}
      method will be called (with its default arguments) against the
      configuration before extracting the XML.  If configuration is not valid,
      then an XML document will not be extracted.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to write an
      invalid configuration file to disk.

      @param xmlPath: Path to an XML file to create on disk.
      @type xmlPath: Absolute path to a file.

      @param validate: Validate the document before extracting it.
      @type validate: Boolean true/false.

      @return: XML string data or C{None} as described above.

      @raise ValueError: If configuration within the object is not valid.
      @raise IOError: If there is an error writing to the file.
      @raise OSError: If there is an error writing to the file.
      """
      if validate:
         self.validate()
      xmlData = self._extractXml()
      if xmlPath is not None:
         open(xmlPath, "w").write(xmlData)
         return None
      else:
         return xmlData

   def validate(self, requireOneAction=True, requireReference=False, requireExtensions=False, requireOptions=True,
                requireCollect=False, requireStage=False, requireStore=False, requirePurge=False, requirePeers=False):
      """
      Validates configuration represented by the object.

      This method encapsulates all of the validations that should apply to a
      fully "complete" document but are not already taken care of by earlier
      validations.  It also provides some extra convenience functionality which
      might be useful to some people.  The process of validation is laid out in
      the I{Validation} section in the class notes (above).

      @param requireOneAction: Require at least one of the collect, stage, store or purge sections.
      @param requireReference: Require the reference section.
      @param requireExtensions: Require the extensions section.
      @param requireOptions: Require the options section.
      @param requirePeers: Require the peers section.
      @param requireCollect: Require the collect section.
      @param requireStage: Require the stage section.
      @param requireStore: Require the store section.
      @param requirePurge: Require the purge section.

      @raise ValueError: If one of the validations fails.
      """
      if requireOneAction and (self.collect, self.stage, self.store, self.purge) == (None, None, None, None):
         raise ValueError("At least one of the collect, stage, store and purge sections is required.")
      if requireReference and self.reference is None:
         raise ValueError("The reference section is required.")
      if requireExtensions and self.extensions is None:
         raise ValueError("The extensions section is required.")
      if requireOptions and self.options is None:
         raise ValueError("The options section is required.")
      if requirePeers and self.peers is None:
         raise ValueError("The peers section is required.")
      if requireCollect and self.collect is None:
         raise ValueError("The collect section is required.")
      if requireStage and self.stage is None:
         raise ValueError("The stage section is required.")
      if requireStore and self.store is None:
         raise ValueError("The store section is required.")
      if requirePurge and self.purge is None:
         raise ValueError("The purge section is required.")
      self._validateContents()
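Each of the C{requireXXX} checks in C{validate} follows one shape: if the section is required but its attribute is C{None}, raise. That logic can be expressed table-driven, as in this sketch (C{checkRequiredSections} is a hypothetical helper, not Cedar Backup code):

```python
def checkRequiredSections(config, required):
    """Raise ValueError for any required section that is None on config.

    'required' maps section attribute names to booleans, mirroring the
    requireXXX keyword arguments of Config.validate().  Sections are
    checked in sorted name order so failures are deterministic.
    """
    for name, isRequired in sorted(required.items()):
        if isRequired and getattr(config, name) is None:
            raise ValueError("The %s section is required." % name)
```

The explicit if-chain in the real method trades this compactness for an obvious one-to-one mapping between keyword arguments and error messages.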

   #####################################
   # High-level methods for parsing XML
   #####################################

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls individual static methods to parse each of the individual
      configuration sections.

      Most of the validation we do here has to do with whether the document can
      be parsed and whether any values which exist are valid.  We don't do much
      validation as to whether required elements actually exist unless we have
      to in order to make sense of the document (instead, that's the job of the
      L{validate} method).

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._reference = Config._parseReference(parentNode)
      self._extensions = Config._parseExtensions(parentNode)
      self._options = Config._parseOptions(parentNode)
      self._peers = Config._parsePeers(parentNode)
      self._collect = Config._parseCollect(parentNode)
      self._stage = Config._parseStage(parentNode)
      self._store = Config._parseStore(parentNode)
      self._purge = Config._parsePurge(parentNode)

   @staticmethod
   def _parseReference(parentNode):
      """
      Parses a reference configuration section.

      We read the following fields::

         author         //cb_config/reference/author
         revision       //cb_config/reference/revision
         description    //cb_config/reference/description
         generator      //cb_config/reference/generator

      @param parentNode: Parent node to search beneath.

      @return: C{ReferenceConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      reference = None
      sectionNode = readFirstChild(parentNode, "reference")
      if sectionNode is not None:
         reference = ReferenceConfig()
         reference.author = readString(sectionNode, "author")
         reference.revision = readString(sectionNode, "revision")
         reference.description = readString(sectionNode, "description")
         reference.generator = readString(sectionNode, "generator")
      return reference

   @staticmethod
   def _parseExtensions(parentNode):
      """
      Parses an extensions configuration section.

      We read the following fields::

         orderMode      //cb_config/extensions/order_mode

      We also read groups of the following items, one list element per item::

         name           //cb_config/extensions/action/name
         module         //cb_config/extensions/action/module
         function       //cb_config/extensions/action/function
         index          //cb_config/extensions/action/index
         dependencies   //cb_config/extensions/action/depends

      The extended actions are parsed by L{_parseExtendedActions}.

      @param parentNode: Parent node to search beneath.

      @return: C{ExtensionsConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      extensions = None
      sectionNode = readFirstChild(parentNode, "extensions")
      if sectionNode is not None:
         extensions = ExtensionsConfig()
         extensions.orderMode = readString(sectionNode, "order_mode")
         extensions.actions = Config._parseExtendedActions(sectionNode)
      return extensions

   @staticmethod
   def _parseOptions(parentNode):
      """
      Parses an options configuration section.

      We read the following fields::

         startingDay    //cb_config/options/starting_day
         workingDir     //cb_config/options/working_dir
         backupUser     //cb_config/options/backup_user
         backupGroup    //cb_config/options/backup_group
         rcpCommand     //cb_config/options/rcp_command
         rshCommand     //cb_config/options/rsh_command
         cbackCommand   //cb_config/options/cback_command
         managedActions //cb_config/options/managed_actions

      The list of managed actions is a comma-separated list of action names.

      We also read groups of the following items, one list element per
      item::

         overrides      //cb_config/options/override
         hooks          //cb_config/options/hook

      The overrides are parsed by L{_parseOverrides} and the hooks are parsed
      by L{_parseHooks}.

      @param parentNode: Parent node to search beneath.

      @return: C{OptionsConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      options = None
      sectionNode = readFirstChild(parentNode, "options")
      if sectionNode is not None:
         options = OptionsConfig()
         options.startingDay = readString(sectionNode, "starting_day")
         options.workingDir = readString(sectionNode, "working_dir")
         options.backupUser = readString(sectionNode, "backup_user")
         options.backupGroup = readString(sectionNode, "backup_group")
         options.rcpCommand = readString(sectionNode, "rcp_command")
         options.rshCommand = readString(sectionNode, "rsh_command")
         options.cbackCommand = readString(sectionNode, "cback_command")
         options.overrides = Config._parseOverrides(sectionNode)
         options.hooks = Config._parseHooks(sectionNode)
         managedActions = readString(sectionNode, "managed_actions")
         options.managedActions = parseCommaSeparatedString(managedActions)
      return options

   @staticmethod
   def _parsePeers(parentNode):
      """
      Parses a peers configuration section.

      We read groups of the following items, one list element per
      item::

         localPeers     //cb_config/peers/peer
         remotePeers    //cb_config/peers/peer

      The individual peer entries are parsed by L{_parsePeerList}.

      @param parentNode: Parent node to search beneath.

      @return: C{PeersConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      peers = None
      sectionNode = readFirstChild(parentNode, "peers")
      if sectionNode is not None:
         peers = PeersConfig()
         (peers.localPeers, peers.remotePeers) = Config._parsePeerList(sectionNode)
      return peers

   @staticmethod
   def _parseCollect(parentNode):
      """
      Parses a collect configuration section.

      We read the following individual fields::

         targetDir      //cb_config/collect/collect_dir
         collectMode    //cb_config/collect/collect_mode
         archiveMode    //cb_config/collect/archive_mode
         ignoreFile     //cb_config/collect/ignore_file

      We also read groups of the following items, one list element per
      item::

         absoluteExcludePaths    //cb_config/collect/exclude/abs_path
         excludePatterns         //cb_config/collect/exclude/pattern
         collectFiles            //cb_config/collect/file
         collectDirs             //cb_config/collect/dir

      The exclusions are parsed by L{_parseExclusions}, the collect files are
      parsed by L{_parseCollectFiles}, and the directories are parsed by
      L{_parseCollectDirs}.

      @param parentNode: Parent node to search beneath.

      @return: C{CollectConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      collect = None
      sectionNode = readFirstChild(parentNode, "collect")
      if sectionNode is not None:
         collect = CollectConfig()
         collect.targetDir = readString(sectionNode, "collect_dir")
         collect.collectMode = readString(sectionNode, "collect_mode")
         collect.archiveMode = readString(sectionNode, "archive_mode")
         collect.ignoreFile = readString(sectionNode, "ignore_file")
         (collect.absoluteExcludePaths, unused, collect.excludePatterns) = Config._parseExclusions(sectionNode)
         collect.collectFiles = Config._parseCollectFiles(sectionNode)
         collect.collectDirs = Config._parseCollectDirs(sectionNode)
      return collect

   @staticmethod
   def _parseStage(parentNode):
      """
      Parses a stage configuration section.

      We read the following individual fields::

         targetDir      //cb_config/stage/staging_dir

      We also read groups of the following items, one list element per
      item::

         localPeers     //cb_config/stage/peer
         remotePeers    //cb_config/stage/peer

      The individual peer entries are parsed by L{_parsePeerList}.

      @param parentNode: Parent node to search beneath.

      @return: C{StageConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      stage = None
      sectionNode = readFirstChild(parentNode, "stage")
      if sectionNode is not None:
         stage = StageConfig()
         stage.targetDir = readString(sectionNode, "staging_dir")
         (stage.localPeers, stage.remotePeers) = Config._parsePeerList(sectionNode)
      return stage

   @staticmethod
   def _parseStore(parentNode):
      """
      Parses a store configuration section.

      We read the following fields::

         sourceDir            //cb_config/store/source_dir
         mediaType            //cb_config/store/media_type
         deviceType           //cb_config/store/device_type
         devicePath           //cb_config/store/target_device
         deviceScsiId         //cb_config/store/target_scsi_id
         driveSpeed           //cb_config/store/drive_speed
         checkData            //cb_config/store/check_data
         checkMedia           //cb_config/store/check_media
         warnMidnite          //cb_config/store/warn_midnite
         noEject              //cb_config/store/no_eject
         refreshMediaDelay    //cb_config/store/refresh_media_delay
         ejectDelay           //cb_config/store/eject_delay

      Blanking behavior configuration is parsed by the C{_parseBlankBehavior}
      method.

      @param parentNode: Parent node to search beneath.

      @return: C{StoreConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      store = None
      sectionNode = readFirstChild(parentNode, "store")
      if sectionNode is not None:
         store = StoreConfig()
         store.sourceDir = readString(sectionNode, "source_dir")
         store.mediaType = readString(sectionNode, "media_type")
         store.deviceType = readString(sectionNode, "device_type")
         store.devicePath = readString(sectionNode, "target_device")
         store.deviceScsiId = readString(sectionNode, "target_scsi_id")
         store.driveSpeed = readInteger(sectionNode, "drive_speed")
         store.checkData = readBoolean(sectionNode, "check_data")
         store.checkMedia = readBoolean(sectionNode, "check_media")
         store.warnMidnite = readBoolean(sectionNode, "warn_midnite")
         store.noEject = readBoolean(sectionNode, "no_eject")
         store.blankBehavior = Config._parseBlankBehavior(sectionNode)
         store.refreshMediaDelay = readInteger(sectionNode, "refresh_media_delay")
         store.ejectDelay = readInteger(sectionNode, "eject_delay")
      return store

   @staticmethod
   def _parsePurge(parentNode):
      """
      Parses a purge configuration section.

      We read groups of the following items, one list element per
      item::

         purgeDirs      //cb_config/purge/dir

      The individual directory entries are parsed by L{_parsePurgeDirs}.

      @param parentNode: Parent node to search beneath.

      @return: C{PurgeConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      purge = None
      sectionNode = readFirstChild(parentNode, "purge")
      if sectionNode is not None:
         purge = PurgeConfig()
         purge.purgeDirs = Config._parsePurgeDirs(sectionNode)
      return purge

   @staticmethod
   def _parseExtendedActions(parentNode):
      """
      Reads extended actions data from immediately beneath the parent.

      We read the following individual fields from each extended action::

         name           name
         module         module
         function       function
         index          index
         dependencies   depends

      Dependency information is parsed by the C{_parseDependencies} method.

      @param parentNode: Parent node to search beneath.

      @return: List of extended actions.
      @raise ValueError: If the data at the location can't be read.
      """
      lst = []
      for entry in readChildren(parentNode, "action"):
         if isElement(entry):
            action = ExtendedAction()
            action.name = readString(entry, "name")
            action.module = readString(entry, "module")
            action.function = readString(entry, "function")
            action.index = readInteger(entry, "index")
            action.dependencies = Config._parseDependencies(entry)
            lst.append(action)
      if lst == []:
         lst = None
      return lst

   @staticmethod
   def _parseExclusions(parentNode):
      """
      Reads exclusions data from immediately beneath the parent.

      We read groups of the following items, one list element per item::

         absolute       exclude/abs_path
         relative       exclude/rel_path
         patterns       exclude/pattern

      If there are none of some pattern (i.e. no relative path items) then
      C{None} will be returned for that item in the tuple.

      This method can be used to parse exclusions on both the collect
      configuration level and on the collect directory level within collect
      configuration.

      @param parentNode: Parent node to search beneath.

      @return: Tuple of (absolute, relative, patterns) exclusions.
      """
      sectionNode = readFirstChild(parentNode, "exclude")
      if sectionNode is None:
         return (None, None, None)
      else:
         absolute = readStringList(sectionNode, "abs_path")
         relative = readStringList(sectionNode, "rel_path")
         patterns = readStringList(sectionNode, "pattern")
         return (absolute, relative, patterns)
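The parser methods above share two conventions: values come from named child elements, and an empty result list collapses to C{None}. A standalone sketch of that behavior using the standard library's ElementTree (the real code uses Cedar Backup's own DOM helpers such as C{readFirstChild} and C{readStringList}, which are not shown in this listing):

```python
import xml.etree.ElementTree as ET

def readStringList(parent, name):
    """Return the text of each <name> child, or None if there are none.

    Mirrors the "empty list becomes None" convention used by the
    _parseXXX methods above.
    """
    values = [child.text for child in parent.findall(name)]
    return values or None

def parseExclusions(sectionXml):
    """Parse an <exclude> block into (absolute, relative, patterns)."""
    section = ET.fromstring(sectionXml)
    exclude = section.find("exclude")
    if exclude is None:
        return (None, None, None)
    return (readStringList(exclude, "abs_path"),
            readStringList(exclude, "rel_path"),
            readStringList(exclude, "pattern"))
```

For example, parsing `<collect><exclude><abs_path>/tmp</abs_path></exclude></collect>` yields `(["/tmp"], None, None)`, while a section with no `<exclude>` child yields `(None, None, None)`.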

   @staticmethod
   def _parseOverrides(parentNode):
      """
      Reads a list of C{CommandOverride} objects from immediately beneath the parent.

      We read the following individual fields::

         command        command
         absolutePath   abs_path

      @param parentNode: Parent node to search beneath.

      @return: List of C{CommandOverride} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parentNode, "override"):
         if isElement(entry):
            override = CommandOverride()
            override.command = readString(entry, "command")
            override.absolutePath = readString(entry, "abs_path")
            lst.append(override)
      if lst == []:
         lst = None
      return lst

   @staticmethod
   # pylint: disable=R0204
   def _parseHooks(parentNode):
      """
      Reads a list of C{ActionHook} objects from immediately beneath the parent.

      We read the following individual fields::

         action         action
         command        command

      @param parentNode: Parent node to search beneath.

      @return: List of C{ActionHook} objects or C{None} if none are found.
      @raise ValueError: If some filled-in value is invalid.
      """
      lst = []
      for entry in readChildren(parentNode, "pre_action_hook"):
         if isElement(entry):
            hook = PreActionHook()
            hook.action = readString(entry, "action")
            hook.command = readString(entry, "command")
            lst.append(hook)
      for entry in readChildren(parentNode, "post_action_hook"):
         if isElement(entry):
            hook = PostActionHook()
            hook.action = readString(entry, "action")
            hook.command = readString(entry, "command")
            lst.append(hook)
      if lst == []:
         lst = None
      return lst

    @staticmethod
    def _parseCollectFiles(parentNode):
        """
        Reads a list of C{CollectFile} objects from immediately beneath the parent.

        We read the following individual fields::

           absolutePath   abs_path
           collectMode    mode I{or} collect_mode
           archiveMode    archive_mode

        The collect mode is a special case.  Just a C{mode} tag is accepted, but
        we prefer C{collect_mode} for consistency with the rest of the config
        file and to avoid confusion with the archive mode.  If both are provided,
        only C{mode} will be used.

        @param parentNode: Parent node to search beneath.

        @return: List of C{CollectFile} objects or C{None} if none are found.
        @raise ValueError: If some filled-in value is invalid.
        """
        lst = []
        for entry in readChildren(parentNode, "file"):
            if isElement(entry):
                cfile = CollectFile()
                cfile.absolutePath = readString(entry, "abs_path")
                cfile.collectMode = readString(entry, "mode")
                if cfile.collectMode is None:
                    cfile.collectMode = readString(entry, "collect_mode")
                cfile.archiveMode = readString(entry, "archive_mode")
                lst.append(cfile)
        if lst == []:
            lst = None
        return lst

    @staticmethod
    def _parseCollectDirs(parentNode):
        """
        Reads a list of C{CollectDir} objects from immediately beneath the parent.

        We read the following individual fields::

           absolutePath     abs_path
           collectMode      mode I{or} collect_mode
           archiveMode      archive_mode
           ignoreFile       ignore_file
           linkDepth        link_depth
           dereference      dereference
           recursionLevel   recursion_level

        The collect mode is a special case.  Just a C{mode} tag is accepted for
        backwards compatibility, but we prefer C{collect_mode} for consistency
        with the rest of the config file and to avoid confusion with the archive
        mode.  If both are provided, only C{mode} will be used.

        We also read groups of the following items, one list element per item::

           absoluteExcludePaths   exclude/abs_path
           relativeExcludePaths   exclude/rel_path
           excludePatterns        exclude/pattern

        The exclusions are parsed by L{_parseExclusions}.

        @param parentNode: Parent node to search beneath.

        @return: List of C{CollectDir} objects or C{None} if none are found.
        @raise ValueError: If some filled-in value is invalid.
        """
        lst = []
        for entry in readChildren(parentNode, "dir"):
            if isElement(entry):
                cdir = CollectDir()
                cdir.absolutePath = readString(entry, "abs_path")
                cdir.collectMode = readString(entry, "mode")
                if cdir.collectMode is None:
                    cdir.collectMode = readString(entry, "collect_mode")
                cdir.archiveMode = readString(entry, "archive_mode")
                cdir.ignoreFile = readString(entry, "ignore_file")
                cdir.linkDepth = readInteger(entry, "link_depth")
                cdir.dereference = readBoolean(entry, "dereference")
                cdir.recursionLevel = readInteger(entry, "recursion_level")
                (cdir.absoluteExcludePaths, cdir.relativeExcludePaths, cdir.excludePatterns) = Config._parseExclusions(entry)
                lst.append(cdir)
        if lst == []:
            lst = None
        return lst

    @staticmethod
    def _parsePurgeDirs(parentNode):
        """
        Reads a list of C{PurgeDir} objects from immediately beneath the parent.

        We read the following individual fields::

           absolutePath   <baseExpr>/abs_path
           retainDays     <baseExpr>/retain_days

        @param parentNode: Parent node to search beneath.

        @return: List of C{PurgeDir} objects or C{None} if none are found.
        @raise ValueError: If the data at the location can't be read
        """
        lst = []
        for entry in readChildren(parentNode, "dir"):
            if isElement(entry):
                cdir = PurgeDir()
                cdir.absolutePath = readString(entry, "abs_path")
                cdir.retainDays = readInteger(entry, "retain_days")
                lst.append(cdir)
        if lst == []:
            lst = None
        return lst

    @staticmethod
    def _parsePeerList(parentNode):
        """
        Reads remote and local peer data from immediately beneath the parent.

        We read the following individual fields for both remote and local
        peers::

           name                name
           collectDir          collect_dir
           ignoreFailureMode   ignore_failures

        We also read the following individual fields for remote peers only::

           remoteUser       backup_user
           rcpCommand       rcp_command
           rshCommand       rsh_command
           cbackCommand     cback_command
           managed          managed
           managedActions   managed_actions

        Additionally, the value in the C{type} field is used to determine
        whether an entry is a remote peer.  If the type is C{"remote"}, it's a
        remote peer; if the type is C{"local"}, it's a local peer.

        If there are no peers of one type (i.e. no local peers), then C{None}
        will be returned for that item in the tuple.

        @param parentNode: Parent node to search beneath.

        @return: Tuple of (local, remote) peer lists.
        @raise ValueError: If the data at the location can't be read
        """
        localPeers = []
        remotePeers = []
        for entry in readChildren(parentNode, "peer"):
            if isElement(entry):
                peerType = readString(entry, "type")
                if peerType == "local":
                    localPeer = LocalPeer()
                    localPeer.name = readString(entry, "name")
                    localPeer.collectDir = readString(entry, "collect_dir")
                    localPeer.ignoreFailureMode = readString(entry, "ignore_failures")
                    localPeers.append(localPeer)
                elif peerType == "remote":
                    remotePeer = RemotePeer()
                    remotePeer.name = readString(entry, "name")
                    remotePeer.collectDir = readString(entry, "collect_dir")
                    remotePeer.remoteUser = readString(entry, "backup_user")
                    remotePeer.rcpCommand = readString(entry, "rcp_command")
                    remotePeer.rshCommand = readString(entry, "rsh_command")
                    remotePeer.cbackCommand = readString(entry, "cback_command")
                    remotePeer.ignoreFailureMode = readString(entry, "ignore_failures")
                    remotePeer.managed = readBoolean(entry, "managed")
                    managedActions = readString(entry, "managed_actions")
                    remotePeer.managedActions = parseCommaSeparatedString(managedActions)
                    remotePeers.append(remotePeer)
        if localPeers == []:
            localPeers = None
        if remotePeers == []:
            remotePeers = None
        return (localPeers, remotePeers)

    @staticmethod
    def _parseDependencies(parentNode):
        """
        Reads extended action dependency information from a parent node.

        We read the following individual fields::

           runBefore   depends/run_before
           runAfter    depends/run_after

        Each of these fields is a comma-separated list of action names.

        The result is placed into an C{ActionDependencies} object.

        If the dependencies parent node does not exist, C{None} will be returned.
        Otherwise, an C{ActionDependencies} object will always be created, even
        if it does not contain any actual dependencies in it.

        @param parentNode: Parent node to search beneath.

        @return: C{ActionDependencies} object or C{None}.
        @raise ValueError: If the data at the location can't be read
        """
        sectionNode = readFirstChild(parentNode, "depends")
        if sectionNode is None:
            return None
        else:
            runBefore = readString(sectionNode, "run_before")
            runAfter = readString(sectionNode, "run_after")
            beforeList = parseCommaSeparatedString(runBefore)
            afterList = parseCommaSeparatedString(runAfter)
            return ActionDependencies(beforeList, afterList)
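The dependency fields above are comma-separated lists of action names. A plausible sketch of that tokenizing step is shown below; `parse_comma_separated` is a hypothetical stand-in for the real `parseCommaSeparatedString` helper, assuming it tolerates surrounding whitespace and returns C{None} for empty or missing input.

```python
# Hedged sketch of comma-separated action-list parsing; the real helper's
# exact contract may differ (this version also treats bare whitespace as a
# separator and maps empty input to None).
import re

def parse_comma_separated(value):
    if value is None:
        return None
    tokens = [t for t in re.split(r"[,\s]+", value.strip()) if t]
    return tokens or None

# parse_comma_separated("collect, stage,purge") -> ["collect", "stage", "purge"]
```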

    @staticmethod
    def _parseBlankBehavior(parentNode):
        """
        Reads a single C{BlankBehavior} object from immediately beneath the parent.

        We read the following individual fields::

           blankMode     blank_behavior/mode
           blankFactor   blank_behavior/factor

        @param parentNode: Parent node to search beneath.

        @return: C{BlankBehavior} object, or C{None} if the section is not found.
        @raise ValueError: If some filled-in value is invalid.
        """
        blankBehavior = None
        sectionNode = readFirstChild(parentNode, "blank_behavior")
        if sectionNode is not None:
            blankBehavior = BlankBehavior()
            blankBehavior.blankMode = readString(sectionNode, "mode")
            blankBehavior.blankFactor = readString(sectionNode, "factor")
        return blankBehavior

    ########################################
    # High-level methods for generating XML
    ########################################

    def _extractXml(self):
        """
        Internal method to extract configuration into an XML string.

        This method assumes that the internal L{validate} method has been called
        prior to extracting the XML, if the caller cares.  No validation will be
        done internally.

        As a general rule, fields that are set to C{None} will be extracted into
        the document as empty tags.  The same goes for container tags that are
        filled based on lists - if the list is empty or C{None}, the container
        tag will be empty.
        """
        (xmlDom, parentNode) = createOutputDom()
        Config._addReference(xmlDom, parentNode, self.reference)
        Config._addExtensions(xmlDom, parentNode, self.extensions)
        Config._addOptions(xmlDom, parentNode, self.options)
        Config._addPeers(xmlDom, parentNode, self.peers)
        Config._addCollect(xmlDom, parentNode, self.collect)
        Config._addStage(xmlDom, parentNode, self.stage)
        Config._addStore(xmlDom, parentNode, self.store)
        Config._addPurge(xmlDom, parentNode, self.purge)
        xmlData = serializeDom(xmlDom)
        xmlDom.unlink()
        return xmlData
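The generation pattern used by `_extractXml` and the `_add*` methods (build a DOM, append one container per section, append one text node per field, serialize, then unlink) can be illustrated with a self-contained sketch. The helper names here are assumptions standing in for the real `createOutputDom`/`addContainerNode`/`addStringNode`/`serializeDom` utilities.

```python
# Hedged sketch of the DOM-building pattern, using xml.dom.minidom directly
# rather than Cedar Backup's xmlutil helpers.
from xml.dom.minidom import getDOMImplementation

def build_reference_config(author):
    impl = getDOMImplementation()
    dom = impl.createDocument(None, "cb_config", None)
    root = dom.documentElement
    section = dom.createElement("reference")   # container node for the section
    root.appendChild(section)
    node = dom.createElement("author")         # one string node per field
    node.appendChild(dom.createTextNode(author))
    section.appendChild(node)
    xml_data = dom.toxml()
    dom.unlink()                               # free the DOM tree when done
    return xml_data
```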

    @staticmethod
    def _addReference(xmlDom, parentNode, referenceConfig):
        """
        Adds a <reference> configuration section as the next child of a parent.

        We add the following fields to the document::

           author        //cb_config/reference/author
           revision      //cb_config/reference/revision
           description   //cb_config/reference/description
           generator     //cb_config/reference/generator

        If C{referenceConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param referenceConfig: Reference configuration section to be added to the document.
        """
        if referenceConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "reference")
            addStringNode(xmlDom, sectionNode, "author", referenceConfig.author)
            addStringNode(xmlDom, sectionNode, "revision", referenceConfig.revision)
            addStringNode(xmlDom, sectionNode, "description", referenceConfig.description)
            addStringNode(xmlDom, sectionNode, "generator", referenceConfig.generator)

    @staticmethod
    def _addExtensions(xmlDom, parentNode, extensionsConfig):
        """
        Adds an <extensions> configuration section as the next child of a parent.

        We add the following fields to the document::

           order_mode   //cb_config/extensions/order_mode

        We also add groups of the following items, one list element per item::

           actions   //cb_config/extensions/action

        The extended action entries are added by L{_addExtendedAction}.

        If C{extensionsConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param extensionsConfig: Extensions configuration section to be added to the document.
        """
        if extensionsConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "extensions")
            addStringNode(xmlDom, sectionNode, "order_mode", extensionsConfig.orderMode)
            if extensionsConfig.actions is not None:
                for action in extensionsConfig.actions:
                    Config._addExtendedAction(xmlDom, sectionNode, action)

    @staticmethod
    def _addOptions(xmlDom, parentNode, optionsConfig):
        """
        Adds an <options> configuration section as the next child of a parent.

        We add the following fields to the document::

           startingDay      //cb_config/options/starting_day
           workingDir       //cb_config/options/working_dir
           backupUser       //cb_config/options/backup_user
           backupGroup      //cb_config/options/backup_group
           rcpCommand       //cb_config/options/rcp_command
           rshCommand       //cb_config/options/rsh_command
           cbackCommand     //cb_config/options/cback_command
           managedActions   //cb_config/options/managed_actions

        We also add groups of the following items, one list element per item::

           overrides   //cb_config/options/override
           hooks       //cb_config/options/pre_action_hook
           hooks       //cb_config/options/post_action_hook

        The individual override items are added by L{_addOverride}.  The
        individual hook items are added by L{_addHook}.

        If C{optionsConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param optionsConfig: Options configuration section to be added to the document.
        """
        if optionsConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "options")
            addStringNode(xmlDom, sectionNode, "starting_day", optionsConfig.startingDay)
            addStringNode(xmlDom, sectionNode, "working_dir", optionsConfig.workingDir)
            addStringNode(xmlDom, sectionNode, "backup_user", optionsConfig.backupUser)
            addStringNode(xmlDom, sectionNode, "backup_group", optionsConfig.backupGroup)
            addStringNode(xmlDom, sectionNode, "rcp_command", optionsConfig.rcpCommand)
            addStringNode(xmlDom, sectionNode, "rsh_command", optionsConfig.rshCommand)
            addStringNode(xmlDom, sectionNode, "cback_command", optionsConfig.cbackCommand)
            managedActions = Config._buildCommaSeparatedString(optionsConfig.managedActions)
            addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)
            if optionsConfig.overrides is not None:
                for override in optionsConfig.overrides:
                    Config._addOverride(xmlDom, sectionNode, override)
            if optionsConfig.hooks is not None:
                for hook in optionsConfig.hooks:
                    Config._addHook(xmlDom, sectionNode, hook)

    @staticmethod
    def _addPeers(xmlDom, parentNode, peersConfig):
        """
        Adds a <peers> configuration section as the next child of a parent.

        We add groups of the following items, one list element per item::

           localPeers    //cb_config/peers/peer
           remotePeers   //cb_config/peers/peer

        The individual local and remote peer entries are added by
        L{_addLocalPeer} and L{_addRemotePeer}, respectively.

        If C{peersConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param peersConfig: Peers configuration section to be added to the document.
        """
        if peersConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "peers")
            if peersConfig.localPeers is not None:
                for localPeer in peersConfig.localPeers:
                    Config._addLocalPeer(xmlDom, sectionNode, localPeer)
            if peersConfig.remotePeers is not None:
                for remotePeer in peersConfig.remotePeers:
                    Config._addRemotePeer(xmlDom, sectionNode, remotePeer)

    @staticmethod
    def _addCollect(xmlDom, parentNode, collectConfig):
        """
        Adds a <collect> configuration section as the next child of a parent.

        We add the following fields to the document::

           targetDir     //cb_config/collect/collect_dir
           collectMode   //cb_config/collect/collect_mode
           archiveMode   //cb_config/collect/archive_mode
           ignoreFile    //cb_config/collect/ignore_file

        We also add groups of the following items, one list element per item::

           absoluteExcludePaths   //cb_config/collect/exclude/abs_path
           excludePatterns        //cb_config/collect/exclude/pattern
           collectFiles           //cb_config/collect/file
           collectDirs            //cb_config/collect/dir

        The individual collect files are added by L{_addCollectFile} and
        individual collect directories are added by L{_addCollectDir}.

        If C{collectConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param collectConfig: Collect configuration section to be added to the document.
        """
        if collectConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "collect")
            addStringNode(xmlDom, sectionNode, "collect_dir", collectConfig.targetDir)
            addStringNode(xmlDom, sectionNode, "collect_mode", collectConfig.collectMode)
            addStringNode(xmlDom, sectionNode, "archive_mode", collectConfig.archiveMode)
            addStringNode(xmlDom, sectionNode, "ignore_file", collectConfig.ignoreFile)
            if ((collectConfig.absoluteExcludePaths is not None and collectConfig.absoluteExcludePaths != []) or
                (collectConfig.excludePatterns is not None and collectConfig.excludePatterns != [])):
                excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
                if collectConfig.absoluteExcludePaths is not None:
                    for absolutePath in collectConfig.absoluteExcludePaths:
                        addStringNode(xmlDom, excludeNode, "abs_path", absolutePath)
                if collectConfig.excludePatterns is not None:
                    for pattern in collectConfig.excludePatterns:
                        addStringNode(xmlDom, excludeNode, "pattern", pattern)
            if collectConfig.collectFiles is not None:
                for collectFile in collectConfig.collectFiles:
                    Config._addCollectFile(xmlDom, sectionNode, collectFile)
            if collectConfig.collectDirs is not None:
                for collectDir in collectConfig.collectDirs:
                    Config._addCollectDir(xmlDom, sectionNode, collectDir)

    @staticmethod
    def _addStage(xmlDom, parentNode, stageConfig):
        """
        Adds a <stage> configuration section as the next child of a parent.

        We add the following fields to the document::

           targetDir   //cb_config/stage/staging_dir

        We also add groups of the following items, one list element per item::

           localPeers    //cb_config/stage/peer
           remotePeers   //cb_config/stage/peer

        The individual local and remote peer entries are added by
        L{_addLocalPeer} and L{_addRemotePeer}, respectively.

        If C{stageConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param stageConfig: Stage configuration section to be added to the document.
        """
        if stageConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "stage")
            addStringNode(xmlDom, sectionNode, "staging_dir", stageConfig.targetDir)
            if stageConfig.localPeers is not None:
                for localPeer in stageConfig.localPeers:
                    Config._addLocalPeer(xmlDom, sectionNode, localPeer)
            if stageConfig.remotePeers is not None:
                for remotePeer in stageConfig.remotePeers:
                    Config._addRemotePeer(xmlDom, sectionNode, remotePeer)

    @staticmethod
    def _addStore(xmlDom, parentNode, storeConfig):
        """
        Adds a <store> configuration section as the next child of a parent.

        We add the following fields to the document::

           sourceDir           //cb_config/store/source_dir
           mediaType           //cb_config/store/media_type
           deviceType          //cb_config/store/device_type
           devicePath          //cb_config/store/target_device
           deviceScsiId        //cb_config/store/target_scsi_id
           driveSpeed          //cb_config/store/drive_speed
           checkData           //cb_config/store/check_data
           checkMedia          //cb_config/store/check_media
           warnMidnite         //cb_config/store/warn_midnite
           noEject             //cb_config/store/no_eject
           refreshMediaDelay   //cb_config/store/refresh_media_delay
           ejectDelay          //cb_config/store/eject_delay

        Blanking behavior configuration is added by the L{_addBlankBehavior}
        method.

        If C{storeConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param storeConfig: Store configuration section to be added to the document.
        """
        if storeConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "store")
            addStringNode(xmlDom, sectionNode, "source_dir", storeConfig.sourceDir)
            addStringNode(xmlDom, sectionNode, "media_type", storeConfig.mediaType)
            addStringNode(xmlDom, sectionNode, "device_type", storeConfig.deviceType)
            addStringNode(xmlDom, sectionNode, "target_device", storeConfig.devicePath)
            addStringNode(xmlDom, sectionNode, "target_scsi_id", storeConfig.deviceScsiId)
            addIntegerNode(xmlDom, sectionNode, "drive_speed", storeConfig.driveSpeed)
            addBooleanNode(xmlDom, sectionNode, "check_data", storeConfig.checkData)
            addBooleanNode(xmlDom, sectionNode, "check_media", storeConfig.checkMedia)
            addBooleanNode(xmlDom, sectionNode, "warn_midnite", storeConfig.warnMidnite)
            addBooleanNode(xmlDom, sectionNode, "no_eject", storeConfig.noEject)
            addIntegerNode(xmlDom, sectionNode, "refresh_media_delay", storeConfig.refreshMediaDelay)
            addIntegerNode(xmlDom, sectionNode, "eject_delay", storeConfig.ejectDelay)
            Config._addBlankBehavior(xmlDom, sectionNode, storeConfig.blankBehavior)

    @staticmethod
    def _addPurge(xmlDom, parentNode, purgeConfig):
        """
        Adds a <purge> configuration section as the next child of a parent.

        We add the following fields to the document::

           purgeDirs   //cb_config/purge/dir

        The individual directory entries are added by L{_addPurgeDir}.

        If C{purgeConfig} is C{None}, then no container will be added.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param purgeConfig: Purge configuration section to be added to the document.
        """
        if purgeConfig is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "purge")
            if purgeConfig.purgeDirs is not None:
                for purgeDir in purgeConfig.purgeDirs:
                    Config._addPurgeDir(xmlDom, sectionNode, purgeDir)

    @staticmethod
    def _addExtendedAction(xmlDom, parentNode, action):
        """
        Adds an extended action container as the next child of a parent.

        We add the following fields to the document::

           name           action/name
           module         action/module
           function       action/function
           index          action/index
           dependencies   action/depends

        Dependencies are added by the L{_addDependencies} method.

        The <action> node itself is created as the next child of the parent node.
        This method only adds one action node.  The parent must loop for each
        action in the C{ExtensionsConfig} object.

        If C{action} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param action: Extended action to be added to the document.
        """
        if action is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "action")
            addStringNode(xmlDom, sectionNode, "name", action.name)
            addStringNode(xmlDom, sectionNode, "module", action.module)
            addStringNode(xmlDom, sectionNode, "function", action.function)
            addIntegerNode(xmlDom, sectionNode, "index", action.index)
            Config._addDependencies(xmlDom, sectionNode, action.dependencies)

    @staticmethod
    def _addOverride(xmlDom, parentNode, override):
        """
        Adds a command override container as the next child of a parent.

        We add the following fields to the document::

           command        override/command
           absolutePath   override/abs_path

        The <override> node itself is created as the next child of the parent
        node.  This method only adds one override node.  The parent must loop
        for each override in the C{OptionsConfig} object.

        If C{override} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param override: Command override to be added to the document.
        """
        if override is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "override")
            addStringNode(xmlDom, sectionNode, "command", override.command)
            addStringNode(xmlDom, sectionNode, "abs_path", override.absolutePath)

    @staticmethod
    def _addHook(xmlDom, parentNode, hook):
        """
        Adds an action hook container as the next child of a parent.

        The behavior varies depending on the value of the C{before} and C{after}
        flags on the hook.  If the C{before} flag is set, it's a pre-action hook,
        and we'll add the following fields::

           action    pre_action_hook/action
           command   pre_action_hook/command

        If the C{after} flag is set, it's a post-action hook, and we'll add the
        following fields::

           action    post_action_hook/action
           command   post_action_hook/command

        The <pre_action_hook> or <post_action_hook> node itself is created as the
        next child of the parent node.  This method only adds one hook node.  The
        parent must loop for each hook in the C{OptionsConfig} object.

        If C{hook} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param hook: Command hook to be added to the document.
        """
        if hook is not None:
            if hook.before:
                sectionNode = addContainerNode(xmlDom, parentNode, "pre_action_hook")
            else:
                sectionNode = addContainerNode(xmlDom, parentNode, "post_action_hook")
            addStringNode(xmlDom, sectionNode, "action", hook.action)
            addStringNode(xmlDom, sectionNode, "command", hook.command)
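The pre/post dispatch described above hinges on a single boolean flag selecting the tag name. A minimal stand-alone sketch, with an invented `Hook` class standing in for the real `PreActionHook`/`PostActionHook` subclasses (which set the flags via their constructors):

```python
# Hedged sketch of the before/after tag-name dispatch; Hook and hook_tag are
# hypothetical names, not part of Cedar Backup.
class Hook:
    def __init__(self, action, command, before):
        self.action = action
        self.command = command
        self.before = before

def hook_tag(hook):
    # Mirrors the branch in _addHook: the flag picks the container tag name.
    return "pre_action_hook" if hook.before else "post_action_hook"
```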

    @staticmethod
    def _addCollectFile(xmlDom, parentNode, collectFile):
        """
        Adds a collect file container as the next child of a parent.

        We add the following fields to the document::

           absolutePath   file/abs_path
           collectMode    file/collect_mode
           archiveMode    file/archive_mode

        Note that for consistency with collect directory handling we'll only emit
        the preferred C{collect_mode} tag.

        The <file> node itself is created as the next child of the parent node.
        This method only adds one collect file node.  The parent must loop
        for each collect file in the C{CollectConfig} object.

        If C{collectFile} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param collectFile: Collect file to be added to the document.
        """
        if collectFile is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "file")
            addStringNode(xmlDom, sectionNode, "abs_path", collectFile.absolutePath)
            addStringNode(xmlDom, sectionNode, "collect_mode", collectFile.collectMode)
            addStringNode(xmlDom, sectionNode, "archive_mode", collectFile.archiveMode)

    @staticmethod
    def _addCollectDir(xmlDom, parentNode, collectDir):
        """
        Adds a collect directory container as the next child of a parent.

        We add the following fields to the document::

           absolutePath     dir/abs_path
           collectMode      dir/collect_mode
           archiveMode      dir/archive_mode
           ignoreFile       dir/ignore_file
           linkDepth        dir/link_depth
           dereference      dir/dereference
           recursionLevel   dir/recursion_level

        Note that an original XML document might have listed the collect mode
        using the C{mode} tag, since we accept both C{collect_mode} and C{mode}.
        However, here we'll only emit the preferred C{collect_mode} tag.

        We also add groups of the following items, one list element per item::

           absoluteExcludePaths   dir/exclude/abs_path
           relativeExcludePaths   dir/exclude/rel_path
           excludePatterns        dir/exclude/pattern

        The <dir> node itself is created as the next child of the parent node.
        This method only adds one collect directory node.  The parent must loop
        for each collect directory in the C{CollectConfig} object.

        If C{collectDir} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param collectDir: Collect directory to be added to the document.
        """
        if collectDir is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "dir")
            addStringNode(xmlDom, sectionNode, "abs_path", collectDir.absolutePath)
            addStringNode(xmlDom, sectionNode, "collect_mode", collectDir.collectMode)
            addStringNode(xmlDom, sectionNode, "archive_mode", collectDir.archiveMode)
            addStringNode(xmlDom, sectionNode, "ignore_file", collectDir.ignoreFile)
            addIntegerNode(xmlDom, sectionNode, "link_depth", collectDir.linkDepth)
            addBooleanNode(xmlDom, sectionNode, "dereference", collectDir.dereference)
            addIntegerNode(xmlDom, sectionNode, "recursion_level", collectDir.recursionLevel)
            if ((collectDir.absoluteExcludePaths is not None and collectDir.absoluteExcludePaths != []) or
                (collectDir.relativeExcludePaths is not None and collectDir.relativeExcludePaths != []) or
                (collectDir.excludePatterns is not None and collectDir.excludePatterns != [])):
                excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
                if collectDir.absoluteExcludePaths is not None:
                    for absolutePath in collectDir.absoluteExcludePaths:
                        addStringNode(xmlDom, excludeNode, "abs_path", absolutePath)
                if collectDir.relativeExcludePaths is not None:
                    for relativePath in collectDir.relativeExcludePaths:
                        addStringNode(xmlDom, excludeNode, "rel_path", relativePath)
                if collectDir.excludePatterns is not None:
                    for pattern in collectDir.excludePatterns:
                        addStringNode(xmlDom, excludeNode, "pattern", pattern)

    @staticmethod
    def _addLocalPeer(xmlDom, parentNode, localPeer):
        """
        Adds a local peer container as the next child of a parent.

        We add the following fields to the document::

           name                peer/name
           collectDir          peer/collect_dir
           ignoreFailureMode   peer/ignore_failures

        Additionally, C{peer/type} is filled in with C{"local"}, since this is a
        local peer.

        The <peer> node itself is created as the next child of the parent node.
        This method only adds one peer node.  The parent must loop for each peer
        in the C{StageConfig} object.

        If C{localPeer} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from L{createOutputDom}.
        @param parentNode: Parent that the section should be appended to.
        @param localPeer: Local peer to be added to the document.
        """
        if localPeer is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "peer")
            addStringNode(xmlDom, sectionNode, "name", localPeer.name)
            addStringNode(xmlDom, sectionNode, "type", "local")
            addStringNode(xmlDom, sectionNode, "collect_dir", localPeer.collectDir)
            addStringNode(xmlDom, sectionNode, "ignore_failures", localPeer.ignoreFailureMode)
    5542 5543 @staticmethod
    5544 - def _addRemotePeer(xmlDom, parentNode, remotePeer):
    5545 """ 5546 Adds a remote peer container as the next child of a parent. 5547 5548 We add the following fields to the document:: 5549 5550 name peer/name 5551 collectDir peer/collect_dir 5552 remoteUser peer/backup_user 5553 rcpCommand peer/rcp_command 5554 rcpCommand peer/rcp_command 5555 rshCommand peer/rsh_command 5556 cbackCommand peer/cback_command 5557 ignoreFailureMode peer/ignore_failures 5558 managed peer/managed 5559 managedActions peer/managed_actions 5560 5561 Additionally, C{peer/type} is filled in with C{"remote"}, since this is a 5562 remote peer. 5563 5564 The <peer> node itself is created as the next child of the parent node. 5565 This method only adds one peer node. The parent must loop for each peer 5566 in the C{StageConfig} object. 5567 5568 If C{remotePeer} is C{None}, this method call will be a no-op. 5569 5570 @param xmlDom: DOM tree as from L{createOutputDom}. 5571 @param parentNode: Parent that the section should be appended to. 5572 @param remotePeer: Purge directory to be added to the document. 5573 """ 5574 if remotePeer is not None: 5575 sectionNode = addContainerNode(xmlDom, parentNode, "peer") 5576 addStringNode(xmlDom, sectionNode, "name", remotePeer.name) 5577 addStringNode(xmlDom, sectionNode, "type", "remote") 5578 addStringNode(xmlDom, sectionNode, "collect_dir", remotePeer.collectDir) 5579 addStringNode(xmlDom, sectionNode, "backup_user", remotePeer.remoteUser) 5580 addStringNode(xmlDom, sectionNode, "rcp_command", remotePeer.rcpCommand) 5581 addStringNode(xmlDom, sectionNode, "rsh_command", remotePeer.rshCommand) 5582 addStringNode(xmlDom, sectionNode, "cback_command", remotePeer.cbackCommand) 5583 addStringNode(xmlDom, sectionNode, "ignore_failures", remotePeer.ignoreFailureMode) 5584 addBooleanNode(xmlDom, sectionNode, "managed", remotePeer.managed) 5585 managedActions = Config._buildCommaSeparatedString(remotePeer.managedActions) 5586 addStringNode(xmlDom, sectionNode, "managed_actions", managedActions)
    5587 5588 @staticmethod
    5589 - def _addPurgeDir(xmlDom, parentNode, purgeDir):
    5590 """ 5591 Adds a purge directory container as the next child of a parent. 5592 5593 We add the following fields to the document:: 5594 5595 absolutePath dir/abs_path 5596 retainDays dir/retain_days 5597 5598 The <dir> node itself is created as the next child of the parent node. 5599 This method only adds one purge directory node. The parent must loop for 5600 each purge directory in the C{PurgeConfig} object. 5601 5602 If C{purgeDir} is C{None}, this method call will be a no-op. 5603 5604 @param xmlDom: DOM tree as from L{createOutputDom}. 5605 @param parentNode: Parent that the section should be appended to. 5606 @param purgeDir: Purge directory to be added to the document. 5607 """ 5608 if purgeDir is not None: 5609 sectionNode = addContainerNode(xmlDom, parentNode, "dir") 5610 addStringNode(xmlDom, sectionNode, "abs_path", purgeDir.absolutePath) 5611 addIntegerNode(xmlDom, sectionNode, "retain_days", purgeDir.retainDays)
    5612 5613 @staticmethod
    5614 - def _addDependencies(xmlDom, parentNode, dependencies):
    5615 """ 5616 Adds a extended action dependencies to parent node. 5617 5618 We add the following fields to the document:: 5619 5620 runBefore depends/run_before 5621 runAfter depends/run_after 5622 5623 If C{dependencies} is C{None}, this method call will be a no-op. 5624 5625 @param xmlDom: DOM tree as from L{createOutputDom}. 5626 @param parentNode: Parent that the section should be appended to. 5627 @param dependencies: C{ActionDependencies} object to be added to the document 5628 """ 5629 if dependencies is not None: 5630 sectionNode = addContainerNode(xmlDom, parentNode, "depends") 5631 runBefore = Config._buildCommaSeparatedString(dependencies.beforeList) 5632 runAfter = Config._buildCommaSeparatedString(dependencies.afterList) 5633 addStringNode(xmlDom, sectionNode, "run_before", runBefore) 5634 addStringNode(xmlDom, sectionNode, "run_after", runAfter)
    5635 5636 @staticmethod
    5637 - def _buildCommaSeparatedString(valueList):
    5638 """ 5639 Creates a comma-separated string from a list of values. 5640 5641 As a special case, if C{valueList} is C{None}, then C{None} will be 5642 returned. 5643 5644 @param valueList: List of values to be placed into a string 5645 5646 @return: Values from valueList as a comma-separated string. 5647 """ 5648 if valueList is None: 5649 return None 5650 return ",".join(valueList)
    5651 5652 @staticmethod
    5653 - def _addBlankBehavior(xmlDom, parentNode, blankBehavior):
    5654 """ 5655 Adds a blanking behavior container as the next child of a parent. 5656 5657 We add the following fields to the document:: 5658 5659 blankMode blank_behavior/mode 5660 blankFactor blank_behavior/factor 5661 5662 The <blank_behavior> node itself is created as the next child of the 5663 parent node. 5664 5665 If C{blankBehavior} is C{None}, this method call will be a no-op. 5666 5667 @param xmlDom: DOM tree as from L{createOutputDom}. 5668 @param parentNode: Parent that the section should be appended to. 5669 @param blankBehavior: Blanking behavior to be added to the document. 5670 """ 5671 if blankBehavior is not None: 5672 sectionNode = addContainerNode(xmlDom, parentNode, "blank_behavior") 5673 addStringNode(xmlDom, sectionNode, "mode", blankBehavior.blankMode) 5674 addStringNode(xmlDom, sectionNode, "factor", blankBehavior.blankFactor)
    5675 5676 5677 ################################################# 5678 # High-level methods used for validating content 5679 ################################################# 5680
    5681 - def _validateContents(self):
    5682 """ 5683 Validates configuration contents per rules discussed in module 5684 documentation. 5685 5686 This is the second pass at validation. It ensures that any filled-in 5687 section contains valid data. Any sections which is not set to C{None} is 5688 validated per the rules for that section, laid out in the module 5689 documentation (above). 5690 5691 @raise ValueError: If configuration is invalid. 5692 """ 5693 self._validateReference() 5694 self._validateExtensions() 5695 self._validateOptions() 5696 self._validatePeers() 5697 self._validateCollect() 5698 self._validateStage() 5699 self._validateStore() 5700 self._validatePurge()
    5701
    5702 - def _validateReference(self):
    5703 """ 5704 Validates reference configuration. 5705 There are currently no reference-related validations. 5706 @raise ValueError: If reference configuration is invalid. 5707 """ 5708 pass
    5709
    5710 - def _validateExtensions(self):
    5711 """ 5712 Validates extensions configuration. 5713 5714 The list of actions may be either C{None} or an empty list C{[]} if 5715 desired. Each extended action must include a name, a module, and a 5716 function. 5717 5718 Then, if the order mode is None or "index", an index is required; and if 5719 the order mode is "dependency", dependency information is required. 5720 5721 @raise ValueError: If reference configuration is invalid. 5722 """ 5723 if self.extensions is not None: 5724 if self.extensions.actions is not None: 5725 names = [] 5726 for action in self.extensions.actions: 5727 if action.name is None: 5728 raise ValueError("Each extended action must set a name.") 5729 names.append(action.name) 5730 if action.module is None: 5731 raise ValueError("Each extended action must set a module.") 5732 if action.function is None: 5733 raise ValueError("Each extended action must set a function.") 5734 if self.extensions.orderMode is None or self.extensions.orderMode == "index": 5735 if action.index is None: 5736 raise ValueError("Each extended action must set an index, based on order mode.") 5737 elif self.extensions.orderMode == "dependency": 5738 if action.dependencies is None: 5739 raise ValueError("Each extended action must set dependency information, based on order mode.") 5740 checkUnique("Duplicate extension names exist:", names)
    5741
    5742 - def _validateOptions(self):
    5743 """ 5744 Validates options configuration. 5745 5746 All fields must be filled in except the rsh command. The rcp and rsh 5747 commands are used as default values for all remote peers. Remote peers 5748 can also rely on the backup user as the default remote user name if they 5749 choose. 5750 5751 @raise ValueError: If reference configuration is invalid. 5752 """ 5753 if self.options is not None: 5754 if self.options.startingDay is None: 5755 raise ValueError("Options section starting day must be filled in.") 5756 if self.options.workingDir is None: 5757 raise ValueError("Options section working directory must be filled in.") 5758 if self.options.backupUser is None: 5759 raise ValueError("Options section backup user must be filled in.") 5760 if self.options.backupGroup is None: 5761 raise ValueError("Options section backup group must be filled in.") 5762 if self.options.rcpCommand is None: 5763 raise ValueError("Options section remote copy command must be filled in.")
    5764
    5765 - def _validatePeers(self):
    5766 """ 5767 Validates peers configuration per rules in L{_validatePeerList}. 5768 @raise ValueError: If peers configuration is invalid. 5769 """ 5770 if self.peers is not None: 5771 self._validatePeerList(self.peers.localPeers, self.peers.remotePeers)
    5772
    5773 - def _validateCollect(self):
    5774 """ 5775 Validates collect configuration. 5776 5777 The target directory must be filled in. The collect mode, archive mode, 5778 ignore file, and recursion level are all optional. The list of absolute 5779 paths to exclude and patterns to exclude may be either C{None} or an 5780 empty list C{[]} if desired. 5781 5782 Each collect directory entry must contain an absolute path to collect, 5783 and then must either be able to take collect mode, archive mode and 5784 ignore file configuration from the parent C{CollectConfig} object, or 5785 must set each value on its own. The list of absolute paths to exclude, 5786 relative paths to exclude and patterns to exclude may be either C{None} 5787 or an empty list C{[]} if desired. Any list of absolute paths to exclude 5788 or patterns to exclude will be combined with the same list in the 5789 C{CollectConfig} object to make the complete list for a given directory. 5790 5791 @raise ValueError: If collect configuration is invalid. 5792 """ 5793 if self.collect is not None: 5794 if self.collect.targetDir is None: 5795 raise ValueError("Collect section target directory must be filled in.") 5796 if self.collect.collectFiles is not None: 5797 for collectFile in self.collect.collectFiles: 5798 if collectFile.absolutePath is None: 5799 raise ValueError("Each collect file must set an absolute path.") 5800 if self.collect.collectMode is None and collectFile.collectMode is None: 5801 raise ValueError("Collect mode must either be set in parent collect section or individual collect file.") 5802 if self.collect.archiveMode is None and collectFile.archiveMode is None: 5803 raise ValueError("Archive mode must either be set in parent collect section or individual collect file.") 5804 if self.collect.collectDirs is not None: 5805 for collectDir in self.collect.collectDirs: 5806 if collectDir.absolutePath is None: 5807 raise ValueError("Each collect directory must set an absolute path.") 5808 if self.collect.collectMode is None and 
collectDir.collectMode is None: 5809 raise ValueError("Collect mode must either be set in parent collect section or individual collect directory.") 5810 if self.collect.archiveMode is None and collectDir.archiveMode is None: 5811 raise ValueError("Archive mode must either be set in parent collect section or individual collect directory.") 5812 if self.collect.ignoreFile is None and collectDir.ignoreFile is None: 5813 raise ValueError("Ignore file must either be set in parent collect section or individual collect directory.") 5814 if (collectDir.linkDepth is None or collectDir.linkDepth < 1) and collectDir.dereference: 5815 raise ValueError("Dereference flag is only valid when a non-zero link depth is in use.")
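The parent-or-child fallback rule used above for collect mode, archive mode, and ignore file can be sketched as one small helper; C{effective_mode} is a hypothetical name, not a function in this module:

```python
def effective_mode(parent_mode, dir_mode, label="Collect mode"):
    """Resolve a per-directory setting, falling back to the parent section value."""
    mode = dir_mode if dir_mode is not None else parent_mode
    if mode is None:
        # Mirrors the ValueError raised when neither level sets the value
        raise ValueError("%s must either be set in parent collect section "
                         "or individual collect directory." % label)
    return mode
```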
    5816
    5817 - def _validateStage(self):
    5818 """ 5819 Validates stage configuration. 5820 5821 The target directory must be filled in, and the peers are 5822 also validated. 5823 5824 Peers are only required in this section if the peers configuration 5825 section is not filled in. However, if any peers are filled in 5826 here, they override the peers configuration and must meet the 5827 validation criteria in L{_validatePeerList}. 5828 5829 @raise ValueError: If stage configuration is invalid. 5830 """ 5831 if self.stage is not None: 5832 if self.stage.targetDir is None: 5833 raise ValueError("Stage section target directory must be filled in.") 5834 if self.peers is None: 5835 # In this case, stage configuration is our only configuration and must be valid. 5836 self._validatePeerList(self.stage.localPeers, self.stage.remotePeers) 5837 else: 5838 # In this case, peers configuration is the default and stage configuration overrides. 5839 # Validation is only needed if it's stage configuration is actually filled in. 5840 if self.stage.hasPeers(): 5841 self._validatePeerList(self.stage.localPeers, self.stage.remotePeers)
    5842
    5843 - def _validateStore(self):
    5844 """ 5845 Validates store configuration. 5846 5847 The device type, drive speed, and blanking behavior are optional. All 5848 other values are required. Missing booleans will be set to defaults. 5849 5850 If blanking behavior is provided, then both a blanking mode and a 5851 blanking factor are required. 5852 5853 The image writer functionality in the C{writer} module is supposed to be 5854 able to handle a device speed of C{None}. 5855 5856 Any caller which needs a "real" (non-C{None}) value for the device type 5857 can use C{DEFAULT_DEVICE_TYPE}, which is guaranteed to be sensible. 5858 5859 This is also where we make sure that the media type -- which is already a 5860 valid type -- matches up properly with the device type. 5861 5862 @raise ValueError: If store configuration is invalid. 5863 """ 5864 if self.store is not None: 5865 if self.store.sourceDir is None: 5866 raise ValueError("Store section source directory must be filled in.") 5867 if self.store.mediaType is None: 5868 raise ValueError("Store section media type must be filled in.") 5869 if self.store.devicePath is None: 5870 raise ValueError("Store section device path must be filled in.") 5871 if self.store.deviceType is None or self.store.deviceType == "cdwriter": 5872 if self.store.mediaType not in VALID_CD_MEDIA_TYPES: 5873 raise ValueError("Media type must match device type.") 5874 elif self.store.deviceType == "dvdwriter": 5875 if self.store.mediaType not in VALID_DVD_MEDIA_TYPES: 5876 raise ValueError("Media type must match device type.") 5877 if self.store.blankBehavior is not None: 5878 if self.store.blankBehavior.blankMode is None and self.store.blankBehavior.blankFactor is None: 5879 raise ValueError("If blanking behavior is provided, all values must be filled in.")
    5880
    5881 - def _validatePurge(self):
    5882 """ 5883 Validates purge configuration. 5884 5885 The list of purge directories may be either C{None} or an empty list 5886 C{[]} if desired. All purge directories must contain a path and a retain 5887 days value. 5888 5889 @raise ValueError: If purge configuration is invalid. 5890 """ 5891 if self.purge is not None: 5892 if self.purge.purgeDirs is not None: 5893 for purgeDir in self.purge.purgeDirs: 5894 if purgeDir.absolutePath is None: 5895 raise ValueError("Each purge directory must set an absolute path.") 5896 if purgeDir.retainDays is None: 5897 raise ValueError("Each purge directory must set a retain days value.")
    5898
    5899 - def _validatePeerList(self, localPeers, remotePeers):
    5900 """ 5901 Validates the set of local and remote peers. 5902 5903 Local peers must be completely filled in, including both name and collect 5904 directory. Remote peers must also fill in the name and collect 5905 directory, but can leave the remote user and rcp command unset. In this 5906 case, the remote user is assumed to match the backup user from the 5907 options section and rcp command is taken directly from the options 5908 section. 5909 5910 @param localPeers: List of local peers 5911 @param remotePeers: List of remote peers 5912 5913 @raise ValueError: If stage configuration is invalid. 5914 """ 5915 if localPeers is None and remotePeers is None: 5916 raise ValueError("Peer list must contain at least one backup peer.") 5917 if localPeers is None and remotePeers is not None: 5918 if len(remotePeers) < 1: 5919 raise ValueError("Peer list must contain at least one backup peer.") 5920 elif localPeers is not None and remotePeers is None: 5921 if len(localPeers) < 1: 5922 raise ValueError("Peer list must contain at least one backup peer.") 5923 elif localPeers is not None and remotePeers is not None: 5924 if len(localPeers) + len(remotePeers) < 1: 5925 raise ValueError("Peer list must contain at least one backup peer.") 5926 names = [] 5927 if localPeers is not None: 5928 for localPeer in localPeers: 5929 if localPeer.name is None: 5930 raise ValueError("Local peers must set a name.") 5931 names.append(localPeer.name) 5932 if localPeer.collectDir is None: 5933 raise ValueError("Local peers must set a collect directory.") 5934 if remotePeers is not None: 5935 for remotePeer in remotePeers: 5936 if remotePeer.name is None: 5937 raise ValueError("Remote peers must set a name.") 5938 names.append(remotePeer.name) 5939 if remotePeer.collectDir is None: 5940 raise ValueError("Remote peers must set a collect directory.") 5941 if (self.options is None or self.options.backupUser is None) and remotePeer.remoteUser is None: 5942 raise ValueError("Remote user must 
either be set in options section or individual remote peer.") 5943 if (self.options is None or self.options.rcpCommand is None) and remotePeer.rcpCommand is None: 5944 raise ValueError("Remote copy command must either be set in options section or individual remote peer.") 5945 if remotePeer.managed: 5946 if (self.options is None or self.options.rshCommand is None) and remotePeer.rshCommand is None: 5947 raise ValueError("Remote shell command must either be set in options section or individual remote peer.") 5948 if (self.options is None or self.options.cbackCommand is None) and remotePeer.cbackCommand is None: 5949 raise ValueError("Remote cback command must either be set in options section or individual remote peer.") 5950 if ((self.options is None or self.options.managedActions is None or len(self.options.managedActions) < 1) 5951 and (remotePeer.managedActions is None or len(remotePeer.managedActions) < 1)): 5952 raise ValueError("Managed actions list must be set in options section or individual remote peer.") 5953 checkUnique("Duplicate peer names exist:", names)
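The four-way branch that opens C{_validatePeerList} collapses to a single count of both lists; a sketch with a hypothetical helper name:

```python
def has_at_least_one_peer(local_peers, remote_peers):
    """True when the combined local and remote peer lists are non-empty.

    Either list may be None, which is treated the same as an empty list.
    """
    return len(local_peers or []) + len(remote_peers or []) >= 1
```

The original branch-per-combination structure is behaviorally equivalent; this form just makes the "at least one peer overall" rule explicit.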
    5954
    5955 5956 ######################################################################## 5957 # General utility functions 5958 ######################################################################## 5959 5960 -def readByteQuantity(parent, name):
    5961 """ 5962 Read a byte size value from an XML document. 5963 5964 A byte size value is an interpreted string value. If the string value 5965 ends with "MB" or "GB", then the string before that is interpreted as 5966 megabytes or gigabytes. Otherwise, it is intepreted as bytes. 5967 5968 @param parent: Parent node to search beneath. 5969 @param name: Name of node to search for. 5970 5971 @return: ByteQuantity parsed from XML document 5972 """ 5973 data = readString(parent, name) 5974 if data is None: 5975 return None 5976 data = data.strip() 5977 if data.endswith("KB"): 5978 quantity = data[0:data.rfind("KB")].strip() 5979 units = UNIT_KBYTES 5980 elif data.endswith("MB"): 5981 quantity = data[0:data.rfind("MB")].strip() 5982 units = UNIT_MBYTES 5983 elif data.endswith("GB"): 5984 quantity = data[0:data.rfind("GB")].strip() 5985 units = UNIT_GBYTES 5986 else: 5987 quantity = data.strip() 5988 units = UNIT_BYTES 5989 return ByteQuantity(quantity, units)
    5990
    5991 -def addByteQuantityNode(xmlDom, parentNode, nodeName, byteQuantity):
    5992 """ 5993 Adds a text node as the next child of a parent, to contain a byte size. 5994 5995 If the C{byteQuantity} is None, then the node will be created, but will 5996 be empty (i.e. will contain no text node child). 5997 5998 The size in bytes will be normalized. If it is larger than 1.0 GB, it will 5999 be shown in GB ("1.0 GB"). If it is larger than 1.0 MB ("1.0 MB"), it will 6000 be shown in MB. Otherwise, it will be shown in bytes ("423413"). 6001 6002 @param xmlDom: DOM tree as from C{impl.createDocument()}. 6003 @param parentNode: Parent node to create child for. 6004 @param nodeName: Name of the new container node. 6005 @param byteQuantity: ByteQuantity object to put into the XML document 6006 6007 @return: Reference to the newly-created node. 6008 """ 6009 if byteQuantity is None: 6010 byteString = None 6011 elif byteQuantity.units == UNIT_KBYTES: 6012 byteString = "%s KB" % byteQuantity.quantity 6013 elif byteQuantity.units == UNIT_MBYTES: 6014 byteString = "%s MB" % byteQuantity.quantity 6015 elif byteQuantity.units == UNIT_GBYTES: 6016 byteString = "%s GB" % byteQuantity.quantity 6017 else: 6018 byteString = byteQuantity.quantity 6019 return addStringNode(xmlDom, parentNode, nodeName, byteString)
    6020


    Source Code for Module CedarBackup2.extend.mbox

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2006-2007,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Official Cedar Backup Extensions 
      30  # Purpose  : Provides an extension to back up mbox email files. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides an extension to back up mbox email files. 
      40   
      41  Backing up email 
      42  ================ 
      43   
      44     Email folders (often stored as mbox flatfiles) are not well-suited to being backed 
      45     up with an incremental backup like the one offered by Cedar Backup.  This is 
      46     because mbox files often change on a daily basis, forcing the incremental 
      47     backup process to back them up every day in order to avoid losing data.  This 
      48     can result in quite a bit of wasted space when backing up large folders.  (Note 
      49     that the alternative maildir format does not share this problem, since it 
      50     typically uses one file per message.) 
      51   
      52     One solution to this problem is to design a smarter incremental backup process, 
      53     which backs up baseline content on the first day of the week, and then backs up 
      54     only new messages added to that folder on every other day of the week.  This way, 
      55     the backup for any single day is only as large as the messages placed into the 
      56     folder on that day.  The backup isn't as "perfect" as the incremental backup 
      57     process, because it doesn't preserve information about messages deleted from 
      58     the backed-up folder.  However, it should be much more space-efficient, and 
      59     in a recovery situation, it seems better to restore too much data rather 
      60     than too little. 
      61   
      62  What is this extension? 
      63  ======================= 
      64   
      65     This is a Cedar Backup extension used to back up mbox email files via the Cedar 
      66     Backup command line.  Individual mbox files or directories containing mbox 
      67     files can be backed up using the same collect modes allowed for filesystems in 
      68     the standard Cedar Backup collect action: weekly, daily, incremental.  It 
      69     implements the "smart" incremental backup process discussed above, using 
      70     functionality provided by the C{grepmail} utility. 
      71   
      72     This extension requires a new configuration section <mbox> and is intended to 
      73     be run either immediately before or immediately after the standard collect 
      74     action.  Aside from its own configuration, it requires the options and collect 
      75     configuration sections in the standard Cedar Backup configuration file. 
      76   
      77     The mbox action is conceptually similar to the standard collect action, 
      78     except that mbox directories are not collected recursively.  This implies 
      79     some configuration changes (i.e. there's no need for global exclusions or an 
      80     ignore file).  If you back up a directory, all of the mbox files in that 
      81     directory are backed up into a single tar file using the indicated 
      82     compression method. 
      83   
      84  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      85  """ 
      86   
      87  ######################################################################## 
      88  # Imported modules 
      89  ######################################################################## 
      90   
      91  # System modules 
      92  import os 
      93  import logging 
      94  import datetime 
      95  import pickle 
      96  import tempfile 
      97  from bz2 import BZ2File 
      98  from gzip import GzipFile 
      99   
     100  # Cedar Backup modules 
     101  from CedarBackup2.filesystem import FilesystemList, BackupFileList 
     102  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode 
     103  from CedarBackup2.xmlutil import isElement, readChildren, readFirstChild, readString, readStringList 
     104  from CedarBackup2.config import VALID_COLLECT_MODES, VALID_COMPRESS_MODES 
     105  from CedarBackup2.util import isStartOfWeek, buildNormalizedPath 
     106  from CedarBackup2.util import resolveCommand, executeCommand 
     107  from CedarBackup2.util import ObjectTypeList, UnorderedList, RegexList, encodePath, changeOwnership 
     108   
     109   
     110  ######################################################################## 
     111  # Module-wide constants and variables 
     112  ######################################################################## 
     113   
     114  logger = logging.getLogger("CedarBackup2.log.extend.mbox") 
     115   
     116  GREPMAIL_COMMAND = [ "grepmail", ] 
     117  REVISION_PATH_EXTENSION = "mboxlast" 
    
    118 119 120 ######################################################################## 121 # MboxFile class definition 122 ######################################################################## 123 124 -class MboxFile(object):
    125 126 """ 127 Class representing mbox file configuration.. 128 129 The following restrictions exist on data in this class: 130 131 - The absolute path must be absolute. 132 - The collect mode must be one of the values in L{VALID_COLLECT_MODES}. 133 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 134 135 @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode, compressMode 136 """ 137
    138 - def __init__(self, absolutePath=None, collectMode=None, compressMode=None):
    139 """ 140 Constructor for the C{MboxFile} class. 141 142 You should never directly instantiate this class. 143 144 @param absolutePath: Absolute path to an mbox file on disk. 145 @param collectMode: Overridden collect mode for this directory. 146 @param compressMode: Overridden compression mode for this directory. 147 """ 148 self._absolutePath = None 149 self._collectMode = None 150 self._compressMode = None 151 self.absolutePath = absolutePath 152 self.collectMode = collectMode 153 self.compressMode = compressMode
    154
    155 - def __repr__(self):
    156 """ 157 Official string representation for class instance. 158 """ 159 return "MboxFile(%s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode)
    160
    161 - def __str__(self):
    162 """ 163 Informal string representation for class instance. 164 """ 165 return self.__repr__()
    166
    167 - def __cmp__(self, other):
    168 """ 169 Definition of equals operator for this class. 170 @param other: Other object to compare to. 171 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 172 """ 173 if other is None: 174 return 1 175 if self.absolutePath != other.absolutePath: 176 if self.absolutePath < other.absolutePath: 177 return -1 178 else: 179 return 1 180 if self.collectMode != other.collectMode: 181 if self.collectMode < other.collectMode: 182 return -1 183 else: 184 return 1 185 if self.compressMode != other.compressMode: 186 if self.compressMode < other.compressMode: 187 return -1 188 else: 189 return 1 190 return 0

    def _setAbsolutePath(self, value):
        """
        Property target used to set the absolute path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Absolute path must be, er, an absolute path.")
        self._absolutePath = encodePath(value)

    def _getAbsolutePath(self):
        """
        Property target used to get the absolute path.
        """
        return self._absolutePath

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setCompressMode(self, value):
        """
        Property target used to set the compress mode.
        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COMPRESS_MODES:
                raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
        self._compressMode = value

    def _getCompressMode(self):
        """
        Property target used to get the compress mode.
        """
        return self._compressMode

    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox file.")
    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox file.")
    compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox file.")
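The cascading field-by-field comparison in C{__cmp__} above behaves like a comparison of attribute tuples. A minimal standalone sketch (the C{Item} class and C{compare} method here are hypothetical stand-ins, not part of this module, and assume the attributes are mutually comparable):

```python
class Item(object):
    """Hypothetical stand-in illustrating the attribute-tuple comparison idiom."""
    def __init__(self, absolutePath=None, collectMode=None, compressMode=None):
        self.absolutePath = absolutePath
        self.collectMode = collectMode
        self.compressMode = compressMode

    def _key(self):
        # Same attribute order as the cascading checks in __cmp__
        return (self.absolutePath, self.collectMode, self.compressMode)

    def compare(self, other):
        if other is None:
            return 1
        a, b = self._key(), other._key()
        return (a > b) - (a < b)   # -1/0/1, works in Python 2 and 3

print(Item("/a").compare(Item("/b")))                          # -1
print(Item("/a", "daily", "gzip").compare(Item("/a", "daily", "gzip")))  # 0
```

Tuple comparison stops at the first unequal field, which is exactly what the explicit if/else cascade does.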

########################################################################
# MboxDir class definition
########################################################################

class MboxDir(object):

    """
    Class representing mbox directory configuration.

    The following restrictions exist on data in this class:

       - The absolute path must be absolute.
       - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
       - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.

    Unlike collect directory configuration, this is the only place exclusions
    are allowed (no global exclusions at the <mbox> configuration level).
    Also, we only allow relative exclusions and there is no configured ignore
    file.  This is because mbox directory backups are not recursive.

    @sort: __init__, __repr__, __str__, __cmp__, absolutePath, collectMode,
           compressMode, relativeExcludePaths, excludePatterns
    """

    def __init__(self, absolutePath=None, collectMode=None, compressMode=None,
                 relativeExcludePaths=None, excludePatterns=None):
        """
        Constructor for the C{MboxDir} class.

        You should never directly instantiate this class.

        @param absolutePath: Absolute path to an mbox directory on disk.
        @param collectMode: Overridden collect mode for this directory.
        @param compressMode: Overridden compression mode for this directory.
        @param relativeExcludePaths: List of relative paths to exclude.
        @param excludePatterns: List of regular expression patterns to exclude.
        """
        self._absolutePath = None
        self._collectMode = None
        self._compressMode = None
        self._relativeExcludePaths = None
        self._excludePatterns = None
        self.absolutePath = absolutePath
        self.collectMode = collectMode
        self.compressMode = compressMode
        self.relativeExcludePaths = relativeExcludePaths
        self.excludePatterns = excludePatterns

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "MboxDir(%s, %s, %s, %s, %s)" % (self.absolutePath, self.collectMode, self.compressMode,
                                                self.relativeExcludePaths, self.excludePatterns)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of equals operator for this class.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.absolutePath != other.absolutePath:
            if self.absolutePath < other.absolutePath:
                return -1
            else:
                return 1
        if self.collectMode != other.collectMode:
            if self.collectMode < other.collectMode:
                return -1
            else:
                return 1
        if self.compressMode != other.compressMode:
            if self.compressMode < other.compressMode:
                return -1
            else:
                return 1
        if self.relativeExcludePaths != other.relativeExcludePaths:
            if self.relativeExcludePaths < other.relativeExcludePaths:
                return -1
            else:
                return 1
        if self.excludePatterns != other.excludePatterns:
            if self.excludePatterns < other.excludePatterns:
                return -1
            else:
                return 1
        return 0

    def _setAbsolutePath(self, value):
        """
        Property target used to set the absolute path.
        The value must be an absolute path if it is not C{None}.
        It does not have to exist on disk at the time of assignment.
        @raise ValueError: If the value is not an absolute path.
        @raise ValueError: If the value cannot be encoded properly.
        """
        if value is not None:
            if not os.path.isabs(value):
                raise ValueError("Absolute path must be, er, an absolute path.")
        self._absolutePath = encodePath(value)

    def _getAbsolutePath(self):
        """
        Property target used to get the absolute path.
        """
        return self._absolutePath

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setCompressMode(self, value):
        """
        Property target used to set the compress mode.
        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COMPRESS_MODES:
                raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
        self._compressMode = value

    def _getCompressMode(self):
        """
        Property target used to get the compress mode.
        """
        return self._compressMode

    def _setRelativeExcludePaths(self, value):
        """
        Property target used to set the relative exclude paths list.
        Elements do not have to exist on disk at the time of assignment.
        """
        if value is None:
            self._relativeExcludePaths = None
        else:
            try:
                saved = self._relativeExcludePaths
                self._relativeExcludePaths = UnorderedList()
                self._relativeExcludePaths.extend(value)
            except Exception, e:
                self._relativeExcludePaths = saved
                raise e

    def _getRelativeExcludePaths(self):
        """
        Property target used to get the relative exclude paths list.
        """
        return self._relativeExcludePaths

    def _setExcludePatterns(self, value):
        """
        Property target used to set the exclude patterns list.
        """
        if value is None:
            self._excludePatterns = None
        else:
            try:
                saved = self._excludePatterns
                self._excludePatterns = RegexList()
                self._excludePatterns.extend(value)
            except Exception, e:
                self._excludePatterns = saved
                raise e

    def _getExcludePatterns(self):
        """
        Property target used to get the exclude patterns list.
        """
        return self._excludePatterns

    absolutePath = property(_getAbsolutePath, _setAbsolutePath, None, doc="Absolute path to the mbox directory.")
    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Overridden collect mode for this mbox directory.")
    compressMode = property(_getCompressMode, _setCompressMode, None, doc="Overridden compress mode for this mbox directory.")
    relativeExcludePaths = property(_getRelativeExcludePaths, _setRelativeExcludePaths, None, "List of relative paths to exclude.")
    excludePatterns = property(_getExcludePatterns, _setExcludePatterns, None, "List of regular expression patterns to exclude.")
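The list setters above all follow a save-and-restore pattern: the previous value is stashed, a fresh validating list is built, and on any failure the old value is rolled back before the exception propagates. A standalone sketch of that pattern (C{ValidatedList} and C{Holder} are hypothetical stand-ins for C{UnorderedList}/C{RegexList} and the configuration class):

```python
class ValidatedList(list):
    """Hypothetical validating list: rejects non-string elements on insert."""
    def append(self, item):
        if not isinstance(item, str):
            raise ValueError("only strings allowed")
        list.append(self, item)
    def extend(self, items):
        for item in items:
            self.append(item)

class Holder(object):
    """Hypothetical holder showing the save-and-restore setter pattern."""
    def __init__(self):
        self._paths = None
    def setPaths(self, value):
        if value is None:
            self._paths = None
        else:
            saved = self._paths
            try:
                self._paths = ValidatedList()
                self._paths.extend(value)
            except Exception:
                self._paths = saved   # roll back to the previous value
                raise

h = Holder()
h.setPaths(["a", "b"])
try:
    h.setPaths(["c", 42])   # 42 is rejected; the previous list is restored
except ValueError:
    pass
print(h._paths)   # ['a', 'b']
```

The rollback matters because a half-populated list would otherwise replace a previously valid one when an element fails validation.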

########################################################################
# MboxConfig class definition
########################################################################

class MboxConfig(object):

    """
    Class representing mbox configuration.

    Mbox configuration is used for backing up mbox email files.

    The following restrictions exist on data in this class:

       - The collect mode must be one of the values in L{VALID_COLLECT_MODES}.
       - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}.
       - The C{mboxFiles} list must be a list of C{MboxFile} objects.
       - The C{mboxDirs} list must be a list of C{MboxDir} objects.

    For the C{mboxFiles} and C{mboxDirs} lists, validation is accomplished
    through the L{util.ObjectTypeList} list implementation that overrides
    common list methods and transparently ensures that each element is of the
    proper type.

    Unlike collect configuration, no global exclusions are allowed on this
    level.  We only allow relative exclusions at the mbox directory level.
    Also, there is no configured ignore file.  This is because mbox directory
    backups are not recursive.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, collectMode, compressMode, mboxFiles, mboxDirs
    """

    def __init__(self, collectMode=None, compressMode=None, mboxFiles=None, mboxDirs=None):
        """
        Constructor for the C{MboxConfig} class.

        @param collectMode: Default collect mode.
        @param compressMode: Default compress mode.
        @param mboxFiles: List of mbox files to back up.
        @param mboxDirs: List of mbox directories to back up.

        @raise ValueError: If one of the values is invalid.
        """
        self._collectMode = None
        self._compressMode = None
        self._mboxFiles = None
        self._mboxDirs = None
        self.collectMode = collectMode
        self.compressMode = compressMode
        self.mboxFiles = mboxFiles
        self.mboxDirs = mboxDirs

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "MboxConfig(%s, %s, %s, %s)" % (self.collectMode, self.compressMode, self.mboxFiles, self.mboxDirs)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of equals operator for this class.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.collectMode != other.collectMode:
            if self.collectMode < other.collectMode:
                return -1
            else:
                return 1
        if self.compressMode != other.compressMode:
            if self.compressMode < other.compressMode:
                return -1
            else:
                return 1
        if self.mboxFiles != other.mboxFiles:
            if self.mboxFiles < other.mboxFiles:
                return -1
            else:
                return 1
        if self.mboxDirs != other.mboxDirs:
            if self.mboxDirs < other.mboxDirs:
                return -1
            else:
                return 1
        return 0

    def _setCollectMode(self, value):
        """
        Property target used to set the collect mode.
        If not C{None}, the mode must be one of the values in L{VALID_COLLECT_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COLLECT_MODES:
                raise ValueError("Collect mode must be one of %s." % VALID_COLLECT_MODES)
        self._collectMode = value

    def _getCollectMode(self):
        """
        Property target used to get the collect mode.
        """
        return self._collectMode

    def _setCompressMode(self, value):
        """
        Property target used to set the compress mode.
        If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}.
        @raise ValueError: If the value is not valid.
        """
        if value is not None:
            if value not in VALID_COMPRESS_MODES:
                raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
        self._compressMode = value

    def _getCompressMode(self):
        """
        Property target used to get the compress mode.
        """
        return self._compressMode

    def _setMboxFiles(self, value):
        """
        Property target used to set the mboxFiles list.
        Either the value must be C{None} or each element must be an C{MboxFile}.
        @raise ValueError: If the value is not an C{MboxFile}.
        """
        if value is None:
            self._mboxFiles = None
        else:
            try:
                saved = self._mboxFiles
                self._mboxFiles = ObjectTypeList(MboxFile, "MboxFile")
                self._mboxFiles.extend(value)
            except Exception, e:
                self._mboxFiles = saved
                raise e

    def _getMboxFiles(self):
        """
        Property target used to get the mboxFiles list.
        """
        return self._mboxFiles

    def _setMboxDirs(self, value):
        """
        Property target used to set the mboxDirs list.
        Either the value must be C{None} or each element must be an C{MboxDir}.
        @raise ValueError: If the value is not an C{MboxDir}.
        """
        if value is None:
            self._mboxDirs = None
        else:
            try:
                saved = self._mboxDirs
                self._mboxDirs = ObjectTypeList(MboxDir, "MboxDir")
                self._mboxDirs.extend(value)
            except Exception, e:
                self._mboxDirs = saved
                raise e

    def _getMboxDirs(self):
        """
        Property target used to get the mboxDirs list.
        """
        return self._mboxDirs

    collectMode = property(_getCollectMode, _setCollectMode, None, doc="Default collect mode.")
    compressMode = property(_getCompressMode, _setCompressMode, None, doc="Default compress mode.")
    mboxFiles = property(_getMboxFiles, _setMboxFiles, None, doc="List of mbox files to back up.")
    mboxDirs = property(_getMboxDirs, _setMboxDirs, None, doc="List of mbox directories to back up.")

########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):

    """
    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar
    Backup configuration object.  Instead, it just knows how to parse and emit
    mbox-specific configuration values.  Third parties who need to read and
    write configuration related to this extension should access it through the
    constructor, C{validate} and C{addConfig} methods.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__, mbox, validate, addConfig
    """

    def __init__(self, xmlData=None, xmlPath=None, validate=True):
        """
        Initializes a configuration object.

        If you initialize the object without passing either C{xmlData} or
        C{xmlPath} then configuration will be empty and will be invalid until
        it is filled in properly.

        No reference to the original XML data or original path is saved off by
        this class.  Once the data has been parsed (successfully or not) this
        original information is discarded.

        Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
        method will be called (with its default arguments) against configuration
        after successfully parsing any passed-in XML.  Keep in mind that even
        if C{validate} is C{False}, it might not be possible to parse the
        passed-in XML document if lower-level validations fail.

        @note: It is strongly suggested that the C{validate} option always be
        set to C{True} (the default) unless there is a specific need to read
        in invalid configuration from disk.

        @param xmlData: XML data representing configuration.
        @type xmlData: String data.

        @param xmlPath: Path to an XML file on disk.
        @type xmlPath: Absolute path to a file on disk.

        @param validate: Validate the document after parsing it.
        @type validate: Boolean true/false.

        @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
        @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
        @raise ValueError: If the parsed configuration document is not valid.
        """
        self._mbox = None
        self.mbox = None
        if xmlData is not None and xmlPath is not None:
            raise ValueError("Use either xmlData or xmlPath, but not both.")
        if xmlData is not None:
            self._parseXmlData(xmlData)
            if validate:
                self.validate()
        elif xmlPath is not None:
            xmlData = open(xmlPath).read()
            self._parseXmlData(xmlData)
            if validate:
                self.validate()

    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return "LocalConfig(%s)" % (self.mbox)

    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    def __cmp__(self, other):
        """
        Definition of equals operator for this class.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.mbox != other.mbox:
            if self.mbox < other.mbox:
                return -1
            else:
                return 1
        return 0

    def _setMbox(self, value):
        """
        Property target used to set the mbox configuration value.
        If not C{None}, the value must be a C{MboxConfig} object.
        @raise ValueError: If the value is not a C{MboxConfig}.
        """
        if value is None:
            self._mbox = None
        else:
            if not isinstance(value, MboxConfig):
                raise ValueError("Value must be a C{MboxConfig} object.")
            self._mbox = value

    def _getMbox(self):
        """
        Property target used to get the mbox configuration value.
        """
        return self._mbox

    mbox = property(_getMbox, _setMbox, None, "Mbox configuration in terms of a C{MboxConfig} object.")

    def validate(self):
        """
        Validates configuration represented by the object.

        Mbox configuration must be filled in.  Within that, the collect mode
        and compress mode are both optional, but the combined list of mbox
        files and directories must contain at least one entry.

        Each configured file or directory must contain an absolute path, and
        must either be able to take collect mode and compress mode
        configuration from the parent C{MboxConfig} object, or must set each
        value on its own.

        @raise ValueError: If one of the validations fails.
        """
        if self.mbox is None:
            raise ValueError("Mbox section is required.")
        if (self.mbox.mboxFiles is None or len(self.mbox.mboxFiles) < 1) and \
           (self.mbox.mboxDirs is None or len(self.mbox.mboxDirs) < 1):
            raise ValueError("At least one mbox file or directory must be configured.")
        if self.mbox.mboxFiles is not None:
            for mboxFile in self.mbox.mboxFiles:
                if mboxFile.absolutePath is None:
                    raise ValueError("Each mbox file must set an absolute path.")
                if self.mbox.collectMode is None and mboxFile.collectMode is None:
                    raise ValueError("Collect mode must either be set in parent mbox section or individual mbox file.")
                if self.mbox.compressMode is None and mboxFile.compressMode is None:
                    raise ValueError("Compress mode must either be set in parent mbox section or individual mbox file.")
        if self.mbox.mboxDirs is not None:
            for mboxDir in self.mbox.mboxDirs:
                if mboxDir.absolutePath is None:
                    raise ValueError("Each mbox directory must set an absolute path.")
                if self.mbox.collectMode is None and mboxDir.collectMode is None:
                    raise ValueError("Collect mode must either be set in parent mbox section or individual mbox directory.")
                if self.mbox.compressMode is None and mboxDir.compressMode is None:
                    raise ValueError("Compress mode must either be set in parent mbox section or individual mbox directory.")

    def addConfig(self, xmlDom, parentNode):
        """
        Adds an <mbox> configuration section as the next child of a parent.

        Third parties should use this function to write configuration related
        to this extension.

        We add the following fields to the document::

           collectMode    //cb_config/mbox/collect_mode
           compressMode   //cb_config/mbox/compress_mode

        We also add groups of the following items, one list element per
        item::

           mboxFiles      //cb_config/mbox/file
           mboxDirs       //cb_config/mbox/dir

        The mbox files and mbox directories are added by L{_addMboxFile} and
        L{_addMboxDir}.

        @param xmlDom: DOM tree as from C{impl.createDocument()}.
        @param parentNode: Parent that the section should be appended to.
        """
        if self.mbox is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "mbox")
            addStringNode(xmlDom, sectionNode, "collect_mode", self.mbox.collectMode)
            addStringNode(xmlDom, sectionNode, "compress_mode", self.mbox.compressMode)
            if self.mbox.mboxFiles is not None:
                for mboxFile in self.mbox.mboxFiles:
                    LocalConfig._addMboxFile(xmlDom, sectionNode, mboxFile)
            if self.mbox.mboxDirs is not None:
                for mboxDir in self.mbox.mboxDirs:
                    LocalConfig._addMboxDir(xmlDom, sectionNode, mboxDir)

    def _parseXmlData(self, xmlData):
        """
        Internal method to parse an XML string into the object.

        This method parses the XML document into a DOM tree (C{xmlDom}) and
        then calls a static method to parse the mbox configuration section.

        @param xmlData: XML data to be parsed.
        @type xmlData: String data.

        @raise ValueError: If the XML cannot be successfully parsed.
        """
        (xmlDom, parentNode) = createInputDom(xmlData)
        self._mbox = LocalConfig._parseMbox(parentNode)

    @staticmethod
    def _parseMbox(parent):
        """
        Parses an mbox configuration section.

        We read the following individual fields::

           collectMode    //cb_config/mbox/collect_mode
           compressMode   //cb_config/mbox/compress_mode

        We also read groups of the following items, one list element per
        item::

           mboxFiles      //cb_config/mbox/file
           mboxDirs       //cb_config/mbox/dir

        The mbox files are parsed by L{_parseMboxFiles} and the mbox
        directories are parsed by L{_parseMboxDirs}.

        @param parent: Parent node to search beneath.

        @return: C{MboxConfig} object or C{None} if the section does not exist.
        @raise ValueError: If some filled-in value is invalid.
        """
        mbox = None
        section = readFirstChild(parent, "mbox")
        if section is not None:
            mbox = MboxConfig()
            mbox.collectMode = readString(section, "collect_mode")
            mbox.compressMode = readString(section, "compress_mode")
            mbox.mboxFiles = LocalConfig._parseMboxFiles(section)
            mbox.mboxDirs = LocalConfig._parseMboxDirs(section)
        return mbox

    @staticmethod
    def _parseMboxFiles(parent):
        """
        Reads a list of C{MboxFile} objects from immediately beneath the parent.

        We read the following individual fields::

           absolutePath   abs_path
           collectMode    collect_mode
           compressMode   compress_mode

        @param parent: Parent node to search beneath.

        @return: List of C{MboxFile} objects or C{None} if none are found.
        @raise ValueError: If some filled-in value is invalid.
        """
        lst = []
        for entry in readChildren(parent, "file"):
            if isElement(entry):
                mboxFile = MboxFile()
                mboxFile.absolutePath = readString(entry, "abs_path")
                mboxFile.collectMode = readString(entry, "collect_mode")
                mboxFile.compressMode = readString(entry, "compress_mode")
                lst.append(mboxFile)
        if lst == []:
            lst = None
        return lst

    @staticmethod
    def _parseMboxDirs(parent):
        """
        Reads a list of C{MboxDir} objects from immediately beneath the parent.

        We read the following individual fields::

           absolutePath   abs_path
           collectMode    collect_mode
           compressMode   compress_mode

        We also read groups of the following items, one list element per
        item::

           relativeExcludePaths   exclude/rel_path
           excludePatterns        exclude/pattern

        The exclusions are parsed by L{_parseExclusions}.

        @param parent: Parent node to search beneath.

        @return: List of C{MboxDir} objects or C{None} if none are found.
        @raise ValueError: If some filled-in value is invalid.
        """
        lst = []
        for entry in readChildren(parent, "dir"):
            if isElement(entry):
                mboxDir = MboxDir()
                mboxDir.absolutePath = readString(entry, "abs_path")
                mboxDir.collectMode = readString(entry, "collect_mode")
                mboxDir.compressMode = readString(entry, "compress_mode")
                (mboxDir.relativeExcludePaths, mboxDir.excludePatterns) = LocalConfig._parseExclusions(entry)
                lst.append(mboxDir)
        if lst == []:
            lst = None
        return lst

    @staticmethod
    def _parseExclusions(parentNode):
        """
        Reads exclusions data from immediately beneath the parent.

        We read groups of the following items, one list element per item::

           relative   exclude/rel_path
           patterns   exclude/pattern

        If there are none of some pattern (i.e. no relative path items) then
        C{None} will be returned for that item in the tuple.

        @param parentNode: Parent node to search beneath.

        @return: Tuple of (relative, patterns) exclusions.
        """
        section = readFirstChild(parentNode, "exclude")
        if section is None:
            return (None, None)
        else:
            relative = readStringList(section, "rel_path")
            patterns = readStringList(section, "pattern")
            return (relative, patterns)

    @staticmethod
    def _addMboxFile(xmlDom, parentNode, mboxFile):
        """
        Adds an mbox file container as the next child of a parent.

        We add the following fields to the document::

           absolutePath   file/abs_path
           collectMode    file/collect_mode
           compressMode   file/compress_mode

        The <file> node itself is created as the next child of the parent
        node.  This method only adds one mbox file node.  The parent must loop
        for each mbox file in the C{MboxConfig} object.

        If C{mboxFile} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from C{impl.createDocument()}.
        @param parentNode: Parent that the section should be appended to.
        @param mboxFile: MboxFile to be added to the document.
        """
        if mboxFile is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "file")
            addStringNode(xmlDom, sectionNode, "abs_path", mboxFile.absolutePath)
            addStringNode(xmlDom, sectionNode, "collect_mode", mboxFile.collectMode)
            addStringNode(xmlDom, sectionNode, "compress_mode", mboxFile.compressMode)

    @staticmethod
    def _addMboxDir(xmlDom, parentNode, mboxDir):
        """
        Adds an mbox directory container as the next child of a parent.

        We add the following fields to the document::

           absolutePath   dir/abs_path
           collectMode    dir/collect_mode
           compressMode   dir/compress_mode

        We also add groups of the following items, one list element per item::

           relativeExcludePaths   dir/exclude/rel_path
           excludePatterns        dir/exclude/pattern

        The <dir> node itself is created as the next child of the parent
        node.  This method only adds one mbox directory node.  The parent must
        loop for each mbox directory in the C{MboxConfig} object.

        If C{mboxDir} is C{None}, this method call will be a no-op.

        @param xmlDom: DOM tree as from C{impl.createDocument()}.
        @param parentNode: Parent that the section should be appended to.
        @param mboxDir: MboxDir to be added to the document.
        """
        if mboxDir is not None:
            sectionNode = addContainerNode(xmlDom, parentNode, "dir")
            addStringNode(xmlDom, sectionNode, "abs_path", mboxDir.absolutePath)
            addStringNode(xmlDom, sectionNode, "collect_mode", mboxDir.collectMode)
            addStringNode(xmlDom, sectionNode, "compress_mode", mboxDir.compressMode)
            if ((mboxDir.relativeExcludePaths is not None and mboxDir.relativeExcludePaths != []) or
                    (mboxDir.excludePatterns is not None and mboxDir.excludePatterns != [])):
                excludeNode = addContainerNode(xmlDom, sectionNode, "exclude")
                if mboxDir.relativeExcludePaths is not None:
                    for relativePath in mboxDir.relativeExcludePaths:
                        addStringNode(xmlDom, excludeNode, "rel_path", relativePath)
                if mboxDir.excludePatterns is not None:
                    for pattern in mboxDir.excludePatterns:
                        addStringNode(xmlDom, excludeNode, "pattern", pattern)
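C{addConfig} and C{_parseMbox} round-trip the <mbox> section through the Cedar Backup C{xmlutil} helpers (C{addContainerNode}, C{addStringNode}, C{readFirstChild}, C{readString}), which are not shown here. The same shape can be sketched standalone with C{xml.dom.minidom}; the C{addStringNode} below is a simplified stand-in for the real helper, not its actual implementation:

```python
from xml.dom.minidom import getDOMImplementation, parseString

def addStringNode(doc, parent, name, value):
    # Simplified stand-in for the Cedar Backup xmlutil helper of the same name.
    node = doc.createElement(name)
    if value is not None:
        node.appendChild(doc.createTextNode(value))
    parent.appendChild(node)
    return node

# Emit: build //cb_config/mbox with collect_mode and compress_mode children.
impl = getDOMImplementation()
doc = impl.createDocument(None, "cb_config", None)
mbox = doc.createElement("mbox")
doc.documentElement.appendChild(mbox)
addStringNode(doc, mbox, "collect_mode", "incr")
addStringNode(doc, mbox, "compress_mode", "gzip")
xmlData = doc.documentElement.toxml()

# Parse it back, mirroring what _parseMbox does via readFirstChild/readString.
parsed = parseString(xmlData)
section = parsed.getElementsByTagName("mbox")[0]
collectMode = section.getElementsByTagName("collect_mode")[0].firstChild.data
print(xmlData)
print(collectMode)
```

The emit and parse sides agree on the element names (collect_mode, compress_mode), which is exactly the invariant the real C{addConfig}/C{_parseMbox} pair maintains.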

########################################################################
# Public functions
########################################################################

###########################
# executeAction() function
###########################

def executeAction(configPath, options, config):
    """
    Executes the mbox backup action.

    @param configPath: Path to configuration file on disk.
    @type configPath: String representing a path on disk.

    @param options: Program command-line options.
    @type options: Options object.

    @param config: Program configuration.
    @type config: Config object.

    @raise ValueError: Under many generic error conditions.
    @raise IOError: If a backup could not be written for some reason.
    """
    logger.debug("Executing mbox extended action.")
    newRevision = datetime.datetime.today()  # mark here so all actions are after this date/time
    if config.options is None or config.collect is None:
        raise ValueError("Cedar Backup configuration is not properly filled in.")
    local = LocalConfig(xmlPath=configPath)
    todayIsStart = isStartOfWeek(config.options.startingDay)
    fullBackup = options.full or todayIsStart
    logger.debug("Full backup flag is [%s]", fullBackup)
    if local.mbox.mboxFiles is not None:
        for mboxFile in local.mbox.mboxFiles:
            logger.debug("Working with mbox file [%s]", mboxFile.absolutePath)
            collectMode = _getCollectMode(local, mboxFile)
            compressMode = _getCompressMode(local, mboxFile)
            lastRevision = _loadLastRevision(config, mboxFile, fullBackup, collectMode)
            if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
                logger.debug("Mbox file meets criteria to be backed up today.")
                _backupMboxFile(config, mboxFile.absolutePath, fullBackup,
                                collectMode, compressMode, lastRevision, newRevision)
            else:
                logger.debug("Mbox file will not be backed up, per collect mode.")
            if collectMode == 'incr':
                _writeNewRevision(config, mboxFile, newRevision)
    if local.mbox.mboxDirs is not None:
        for mboxDir in local.mbox.mboxDirs:
            logger.debug("Working with mbox directory [%s]", mboxDir.absolutePath)
            collectMode = _getCollectMode(local, mboxDir)
            compressMode = _getCompressMode(local, mboxDir)
            lastRevision = _loadLastRevision(config, mboxDir, fullBackup, collectMode)
            (excludePaths, excludePatterns) = _getExclusions(mboxDir)
            if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
                logger.debug("Mbox directory meets criteria to be backed up today.")
                _backupMboxDir(config, mboxDir.absolutePath,
                               fullBackup, collectMode, compressMode,
                               lastRevision, newRevision,
                               excludePaths, excludePatterns)
            else:
                logger.debug("Mbox directory will not be backed up, per collect mode.")
            if collectMode == 'incr':
                _writeNewRevision(config, mboxDir, newRevision)
    logger.info("Executed the mbox extended action successfully.")
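The eligibility rule in C{executeAction} is the same for files and directories: back an item up when a full backup was requested, when its collect mode is daily or incr, or when the mode is weekly and today starts the backup week. A standalone sketch of just that predicate (the function name C{shouldBackup} is hypothetical; the real code inlines the expression):

```python
def shouldBackup(fullBackup, collectMode, todayIsStart):
    # Mirrors the condition used for both mbox files and mbox directories.
    return fullBackup or collectMode in ("daily", "incr") or (collectMode == "weekly" and todayIsStart)

print(shouldBackup(False, "weekly", False))  # False: weekly mode, mid-week
print(shouldBackup(False, "weekly", True))   # True: weekly mode, start of week
print(shouldBackup(True, "weekly", False))   # True: full backup forces it
```

Note that C{incr} items are always eligible; the incremental behavior comes from the revision date passed to the backup functions, not from skipping the item.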

def _getCollectMode(local, item):
    """
    Gets the collect mode that should be used for an mbox file or directory.
    Use the file- or directory-specific value if possible, otherwise take it
    from the mbox section.
    @param local: LocalConfig object.
    @param item: Mbox file or directory.
    @return: Collect mode to use.
    """
    if item.collectMode is None:
        collectMode = local.mbox.collectMode
    else:
        collectMode = item.collectMode
    logger.debug("Collect mode is [%s]", collectMode)
    return collectMode

def _getCompressMode(local, item):
   """
   Gets the compress mode that should be used for an mbox file or directory.
   Use file- or directory-specific value if possible, otherwise take from mbox section.
   @param local: LocalConfig object.
   @param item: Mbox file or directory
   @return: Compress mode to use.
   """
   if item.compressMode is None:
      compressMode = local.mbox.compressMode
   else:
      compressMode = item.compressMode
   logger.debug("Compress mode is [%s]", compressMode)
   return compressMode

def _getRevisionPath(config, item):
   """
   Gets the path to the revision file associated with a repository.
   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory
   @return: Absolute path to the revision file associated with the repository.
   """
   normalized = buildNormalizedPath(item.absolutePath)
   filename = "%s.%s" % (normalized, REVISION_PATH_EXTENSION)
   revisionPath = os.path.join(config.options.workingDir, filename)
   logger.debug("Revision file path is [%s]", revisionPath)
   return revisionPath

def _loadLastRevision(config, item, fullBackup, collectMode):
   """
   Loads the last revision date for this item from disk and returns it.

   If this is a full backup, or if the revision file cannot be loaded for some
   reason, then C{None} is returned.  This indicates that there is no previous
   revision, so the entire mail file or directory should be backed up.

   @note: We write the actual revision object to disk via pickle, so we don't
   deal with the datetime precision or format at all.  Whatever's in the object
   is what we write.

   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory
   @param fullBackup: Indicates whether this is a full backup
   @param collectMode: Indicates the collect mode for this item

   @return: Revision date as a datetime.datetime object or C{None}.
   """
   revisionPath = _getRevisionPath(config, item)
   if fullBackup:
      revisionDate = None
      logger.debug("Revision file ignored because this is a full backup.")
   elif collectMode in ['weekly', 'daily']:
      revisionDate = None
      logger.debug("No revision file based on collect mode [%s].", collectMode)
   else:
      logger.debug("Revision file will be used for non-full incremental backup.")
      if not os.path.isfile(revisionPath):
         revisionDate = None
         logger.debug("Revision file [%s] does not exist on disk.", revisionPath)
      else:
         try:
            revisionDate = pickle.load(open(revisionPath, "r"))
            logger.debug("Loaded revision file [%s] from disk: [%s]", revisionPath, revisionDate)
         except:
            revisionDate = None
            logger.error("Failed loading revision file [%s] from disk.", revisionPath)
   return revisionDate

def _writeNewRevision(config, item, newRevision):
   """
   Writes new revision information to disk.

   If we can't write the revision file successfully for any reason, we'll log
   the condition but won't throw an exception.

   @note: We write the actual revision object to disk via pickle, so we don't
   deal with the datetime precision or format at all.  Whatever's in the object
   is what we write.

   @param config: Cedar Backup configuration.
   @param item: Mbox file or directory
   @param newRevision: Revision date as a datetime.datetime object.
   """
   revisionPath = _getRevisionPath(config, item)
   try:
      pickle.dump(newRevision, open(revisionPath, "w"))
      changeOwnership(revisionPath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Wrote new revision file [%s] to disk: [%s]", revisionPath, newRevision)
   except:
      logger.error("Failed to write revision file [%s] to disk.", revisionPath)

def _getExclusions(mboxDir):
   """
   Gets exclusions (file and patterns) associated with an mbox directory.

   The returned files value is a list of absolute paths to be excluded from the
   backup for a given directory.  It is derived from the mbox directory's
   relative exclude paths.

   The returned patterns value is a list of patterns to be excluded from the
   backup for a given directory.  It is derived from the mbox directory's list
   of patterns.

   @param mboxDir: Mbox directory object.

   @return: Tuple (files, patterns) indicating what to exclude.
   """
   paths = []
   if mboxDir.relativeExcludePaths is not None:
      for relativePath in mboxDir.relativeExcludePaths:
         paths.append(os.path.join(mboxDir.absolutePath, relativePath))
   patterns = []
   if mboxDir.excludePatterns is not None:
      patterns.extend(mboxDir.excludePatterns)
   logger.debug("Exclude paths: %s", paths)
   logger.debug("Exclude patterns: %s", patterns)
   return (paths, patterns)
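The path-joining behavior above can be sketched in isolation. This helper and its name are hypothetical illustrations, not part of the CedarBackup2 API: relative exclude paths are anchored onto the directory being backed up, while patterns pass through unchanged.

```python
import os

def resolve_exclusions(mbox_dir, relative_excludes, patterns):
    # Join each relative exclude path onto the mbox directory to get an
    # absolute path; None for either argument means "nothing to exclude".
    paths = [os.path.join(mbox_dir, rel) for rel in (relative_excludes or [])]
    return (paths, list(patterns or []))
```

For example, a directory `/var/mail` with relative excludes `["spam", "trash"]` yields absolute exclude paths `/var/mail/spam` and `/var/mail/trash`.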

def _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None):
   """
   Gets the backup file path (including correct extension) associated with an mbox path.

   We assume that if the target directory is passed in, that we're backing up a
   directory.  Under these circumstances, we'll just use the basename of the
   individual path as the output file.

   @note: The backup path only contains the current date in YYYYMMDD format,
   but that's OK because the index information (stored elsewhere) is the actual
   date object.

   @param config: Cedar Backup configuration.
   @param mboxPath: Path to the indicated mbox file or directory
   @param compressMode: Compress mode to use for this mbox path
   @param newRevision: Revision this backup path represents
   @param targetDir: Target directory in which the path should exist

   @return: Absolute path to the backup file associated with the repository.
   """
   if targetDir is None:
      normalizedPath = buildNormalizedPath(mboxPath)
      revisionDate = newRevision.strftime("%Y%m%d")
      filename = "mbox-%s-%s" % (revisionDate, normalizedPath)
   else:
      filename = os.path.basename(mboxPath)
   if compressMode == 'gzip':
      filename = "%s.gz" % filename
   elif compressMode == 'bzip2':
      filename = "%s.bz2" % filename
   if targetDir is None:
      backupPath = os.path.join(config.collect.targetDir, filename)
   else:
      backupPath = os.path.join(targetDir, filename)
   logger.debug("Backup file path is [%s]", backupPath)
   return backupPath

def _getTarfilePath(config, mboxPath, compressMode, newRevision):
   """
   Gets the tarfile backup file path (including correct extension) associated
   with an mbox path.

   Along with the path, the tar archive mode is returned in a form that can
   be used with L{BackupFileList.generateTarfile}.

   @note: The tarfile path only contains the current date in YYYYMMDD format,
   but that's OK because the index information (stored elsewhere) is the actual
   date object.

   @param config: Cedar Backup configuration.
   @param mboxPath: Path to the indicated mbox file or directory
   @param compressMode: Compress mode to use for this mbox path
   @param newRevision: Revision this backup path represents

   @return: Tuple of (absolute path to tarfile, tar archive mode)
   """
   normalizedPath = buildNormalizedPath(mboxPath)
   revisionDate = newRevision.strftime("%Y%m%d")
   filename = "mbox-%s-%s.tar" % (revisionDate, normalizedPath)
   if compressMode == 'gzip':
      filename = "%s.gz" % filename
      archiveMode = "targz"
   elif compressMode == 'bzip2':
      filename = "%s.bz2" % filename
      archiveMode = "tarbz2"
   else:
      archiveMode = "tar"
   tarfilePath = os.path.join(config.collect.targetDir, filename)
   logger.debug("Tarfile path is [%s]", tarfilePath)
   return (tarfilePath, archiveMode)
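The mapping from compress mode to tar extension and archive mode can be shown as a standalone sketch. The helper name below is made up for illustration; only the extension/mode pairing mirrors the function above.

```python
def tarfile_name_and_mode(base_name, compress_mode):
    # Map a compress mode onto the tar filename extension plus the archive
    # mode string that BackupFileList.generateTarfile() expects.
    filename = "%s.tar" % base_name
    if compress_mode == "gzip":
        return ("%s.gz" % filename, "targz")
    if compress_mode == "bzip2":
        return ("%s.bz2" % filename, "tarbz2")
    return (filename, "tar")
```

So a gzip-compressed directory backup named `mbox-20150801-home_user_mail` becomes `mbox-20150801-home_user_mail.tar.gz` with archive mode `targz`.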

def _getOutputFile(backupPath, compressMode):
   """
   Opens the output file used for saving backup information.

   If the compress mode is "gzip", we'll open a C{GzipFile}, and if the
   compress mode is "bzip2", we'll open a C{BZ2File}.  Otherwise, we'll just
   return an object from the normal C{open()} method.

   @param backupPath: Path to file to open.
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2").

   @return: Output file object.
   """
   if compressMode == "gzip":
      return GzipFile(backupPath, "w")
   elif compressMode == "bzip2":
      return BZ2File(backupPath, "w")
   else:
      return open(backupPath, "w")

def _backupMboxFile(config, absolutePath,
                    fullBackup, collectMode, compressMode,
                    lastRevision, newRevision, targetDir=None):
   """
   Backs up an individual mbox file.

   @param config: Cedar Backup configuration.
   @param absolutePath: Path to mbox file to back up.
   @param fullBackup: Indicates whether this should be a full backup.
   @param collectMode: Indicates the collect mode for this item
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
   @param lastRevision: Date of last backup as datetime.datetime
   @param newRevision: Date of new (current) backup as datetime.datetime
   @param targetDir: Target directory to write the backed-up file into

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox file.
   """
   backupPath = _getBackupPath(config, absolutePath, compressMode, newRevision, targetDir=targetDir)
   outputFile = _getOutputFile(backupPath, compressMode)
   if fullBackup or collectMode != "incr" or lastRevision is None:
      args = [ "-a", "-u", absolutePath, ]  # remove duplicates but fetch entire mailbox
   else:
      revisionDate = lastRevision.strftime("%Y-%m-%dT%H:%M:%S")  # ISO-8601 format; grepmail calls Date::Parse::str2time()
      args = [ "-a", "-u", "-d", "since %s" % revisionDate, absolutePath, ]
   command = resolveCommand(GREPMAIL_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0]
   if result != 0:
      raise IOError("Error [%d] executing grepmail on [%s]." % (result, absolutePath))
   logger.debug("Completed backing up mailbox [%s].", absolutePath)
   return backupPath
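The grepmail argument construction above is the heart of the incremental behavior: `-a` selects all messages, `-u` removes duplicates, and `-d "since <date>"` restricts output to mail newer than the last revision. A small hypothetical helper (not part of the module) isolates that decision:

```python
import datetime

def grepmail_args(mbox_path, last_revision=None):
    # No prior revision: fetch the whole mailbox (deduplicated).
    if last_revision is None:
        return ["-a", "-u", mbox_path]
    # Prior revision: ISO-8601 stamp, parsed on the grepmail side by
    # Date::Parse::str2time(), limits output to newer messages.
    stamp = last_revision.strftime("%Y-%m-%dT%H:%M:%S")
    return ["-a", "-u", "-d", "since %s" % stamp, mbox_path]
```

A last revision of noon on 1 Aug 2015 produces the argument `since 2015-08-01T12:00:00`.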

def _backupMboxDir(config, absolutePath,
                   fullBackup, collectMode, compressMode,
                   lastRevision, newRevision,
                   excludePaths, excludePatterns):
   """
   Backs up a directory containing mbox files.

   @param config: Cedar Backup configuration.
   @param absolutePath: Path to mbox directory to back up.
   @param fullBackup: Indicates whether this should be a full backup.
   @param collectMode: Indicates the collect mode for this item
   @param compressMode: Compress mode of file ("none", "gzip", "bzip2")
   @param lastRevision: Date of last backup as datetime.datetime
   @param newRevision: Date of new (current) backup as datetime.datetime
   @param excludePaths: List of absolute paths to exclude.
   @param excludePatterns: List of patterns to exclude.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem backing up the mbox file.
   """
   try:
      tmpdir = tempfile.mkdtemp(dir=config.options.workingDir)
      mboxList = FilesystemList()
      mboxList.excludeDirs = True
      mboxList.excludePaths = excludePaths
      mboxList.excludePatterns = excludePatterns
      mboxList.addDirContents(absolutePath, recursive=False)
      tarList = BackupFileList()
      for item in mboxList:
         backupPath = _backupMboxFile(config, item, fullBackup,
                                      collectMode, "none",  # no need to compress inside compressed tar
                                      lastRevision, newRevision,
                                      targetDir=tmpdir)
         tarList.addFile(backupPath)
      (tarfilePath, archiveMode) = _getTarfilePath(config, absolutePath, compressMode, newRevision)
      tarList.generateTarfile(tarfilePath, archiveMode, ignore=True, flat=True)
      changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
      logger.debug("Completed backing up directory [%s].", absolutePath)
   finally:
      try:
         for item in tarList:
            if os.path.exists(item):
               try:
                  os.remove(item)
               except: pass
      except: pass
      try:
         os.rmdir(tmpdir)
      except: pass

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.collect-module.html

CedarBackup2.actions.collect
    Package CedarBackup2 :: Package actions :: Module collect

    Module collect


    Implements the standard 'collect' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions

executeCollect(configPath, options, config)
    Executes the collect backup action.

_collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    Collects a configured collect file.

_collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel)
    Collects a configured collect directory.

_executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    Execute the backup process for the indicated backup list.

_loadDigest(digestPath)
    Loads the indicated digest path from disk into a dictionary.

_writeDigest(config, digest, digestPath)
    Writes the digest dictionary to the indicated digest path on disk.

_getCollectMode(config, item)
    Gets the collect mode that should be used for a collect directory or file.

_getArchiveMode(config, item)
    Gets the archive mode that should be used for a collect directory or file.

_getIgnoreFile(config, item)
    Gets the ignore file that should be used for a collect directory or file.

_getLinkDepth(item)
    Gets the link depth that should be used for a collect directory.

_getDereference(item)
    Gets the dereference flag that should be used for a collect directory.

_getRecursionLevel(item)
    Gets the recursion level that should be used for a collect directory.

_getDigestPath(config, absolutePath)
    Gets the digest path associated with a collect directory or file.

_getTarfilePath(config, absolutePath, archiveMode)
    Gets the tarfile path (including correct extension) associated with a collect directory.

_getExclusions(config, collectDir)
    Gets exclusions (file and patterns) associated with a collect directory.
Variables

    logger = logging.getLogger("CedarBackup2.log.actions.collect")
    __package__ = 'CedarBackup2.actions'

Function Details

    executeCollect(configPath, options, config)


    Executes the collect backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • TarError - If there is a problem creating a tar file

    Note: When the collect action is complete, we will write a collect indicator to the collect directory, so it's obvious that the collect action has completed. The stage process uses this indicator to decide whether a peer is ready to be staged.

    _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)


    Collects a configured collect file.

    The indicated collect file is collected into the indicated tarfile. For files that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten).

    The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect file itself.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path of file to collect.
    • tarfilePath - Path to tarfile that should be created.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • resetDigest - Reset digest flag.
    • digestPath - Path to digest file on disk, if needed.

    _collectDirectory(config, absolutePath, collectMode, archiveMode, ignoreFile, linkDepth, dereference, resetDigest, excludePaths, excludePatterns, recursionLevel)


    Collects a configured collect directory.

    The indicated collect directory is collected into the indicated tarfile. For directories that are collected incrementally, we'll use the indicated digest path and pay attention to the reset digest flag (basically, the reset digest flag ignores any existing digest, but a new digest is always rewritten).

    The caller must decide what the collect and archive modes are, since they can be on both the collect configuration and the collect directory itself.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path of directory to collect.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • ignoreFile - Ignore file to use.
    • linkDepth - Link depth value to use.
    • dereference - Dereference flag to use.
    • resetDigest - Reset digest flag.
    • excludePaths - List of absolute paths to exclude.
    • excludePatterns - List of patterns to exclude.
    • recursionLevel - Recursion level (zero for no recursion)

    _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)


    Execute the backup process for the indicated backup list.

    This function exists mainly to consolidate functionality between the _collectFile and _collectDirectory functions. Those functions build the backup list; this function causes the backup to execute properly and also manages usage of the digest file on disk as explained in their comments.

For collect files, the digest file will always just contain the single file that is being backed up. This might be a little wasteful in terms of the number of files that we keep around, but it's consistent and easy to understand.

    Parameters:
    • config - Config object.
    • backupList - List to execute backup for
    • absolutePath - Absolute path of directory or file to collect.
    • tarfilePath - Path to tarfile that should be created.
    • collectMode - Collect mode to use.
    • archiveMode - Archive mode to use.
    • resetDigest - Reset digest flag.
    • digestPath - Path to digest file on disk, if needed.

    _loadDigest(digestPath)


    Loads the indicated digest path from disk into a dictionary.

    If we can't load the digest successfully (either because it doesn't exist or for some other reason), then an empty dictionary will be returned - but the condition will be logged.

    Parameters:
    • digestPath - Path to the digest file on disk.
    Returns:
    Dictionary representing contents of digest path.

    _writeDigest(config, digest, digestPath)


    Writes the digest dictionary to the indicated digest path on disk.

    If we can't write the digest successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Config object.
    • digest - Digest dictionary to write to disk.
    • digestPath - Path to the digest file on disk.

    _getCollectMode(config, item)


    Gets the collect mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Collect mode to use.

    _getArchiveMode(config, item)


    Gets the archive mode that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Archive mode to use.

    _getIgnoreFile(config, item)


    Gets the ignore file that should be used for a collect directory or file. If possible, use the one on the file or directory, otherwise take from collect section.

    Parameters:
    • config - Config object.
    • item - CollectFile or CollectDir object
    Returns:
    Ignore file to use.

    _getLinkDepth(item)


    Gets the link depth that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero).

    Parameters:
    • item - CollectDir object
    Returns:
    Link depth to use.

    _getDereference(item)


    Gets the dereference flag that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of False.

    Parameters:
    • item - CollectDir object
    Returns:
    Dereference flag to use.

    _getRecursionLevel(item)


    Gets the recursion level that should be used for a collect directory. If possible, use the one on the directory, otherwise set a value of 0 (zero).

    Parameters:
    • item - CollectDir object
    Returns:
    Recursion level to use.

    _getDigestPath(config, absolutePath)


    Gets the digest path associated with a collect directory or file.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path to generate digest for
    Returns:
    Absolute path to the digest associated with the collect directory or file.

    _getTarfilePath(config, absolutePath, archiveMode)


    Gets the tarfile path (including correct extension) associated with a collect directory.

    Parameters:
    • config - Config object.
    • absolutePath - Absolute path to generate tarfile for
    • archiveMode - Archive mode to use for this tarfile.
    Returns:
    Absolute path to the tarfile associated with the collect directory.

    _getExclusions(config, collectDir)


    Gets exclusions (file and patterns) associated with a collect directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the collect configuration absolute exclude paths and the collect directory's absolute and relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the list of patterns from the collect configuration and from the collect directory itself.

    Parameters:
    • config - Config object.
    • collectDir - Collect directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mbox.MboxFile-class.html

CedarBackup2.extend.mbox.MboxFile
    Package CedarBackup2 :: Package extend :: Module mbox :: Class MboxFile

    Class MboxFile


    object --+
             |
            MboxFile
    

Class representing mbox file configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
Instance Methods

__init__(self, absolutePath=None, collectMode=None, compressMode=None)
    Constructor for the MboxFile class.

__repr__(self)
    Official string representation for class instance.

__str__(self)
    Informal string representation for class instance.

__cmp__(self, other)
    Definition of equals operator for this class.

_setAbsolutePath(self, value)
    Property target used to set the absolute path.

_getAbsolutePath(self)
    Property target used to get the absolute path.

_setCollectMode(self, value)
    Property target used to set the collect mode.

_getCollectMode(self)
    Property target used to get the collect mode.

_setCompressMode(self, value)
    Property target used to set the compress mode.

_getCompressMode(self)
    Property target used to get the compress mode.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

absolutePath
    Absolute path to the mbox file.
collectMode
    Overridden collect mode for this mbox file.
compressMode
    Overridden compress mode for this mbox file.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, collectMode=None, compressMode=None)
    (Constructor)


    Constructor for the MboxFile class.

    You should never directly instantiate this class.

    Parameters:
    • absolutePath - Absolute path to an mbox file on disk.
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)


    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)


    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)


    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path to the mbox file.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    collectMode

    Overridden collect mode for this mbox file.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this mbox file.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.knapsack-pysrc.html

CedarBackup2.knapsack
    Package CedarBackup2 :: Module knapsack

    Source Code for Module CedarBackup2.knapsack

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2005,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides knapsack algorithms used for "fit" decisions
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########
# Notes
########

"""
Provides the implementation for various knapsack algorithms.

Knapsack algorithms are "fit" algorithms, used to take a set of "things" and
decide on the optimal way to fit them into some container.  The focus of this
code is to fit files onto a disc, although the interface (in terms of item,
item size and capacity size, with no units) is generic enough that it can
be applied to items other than files.

All of the algorithms implemented below assume that "optimal" means "use up as
much of the disc's capacity as possible", but each produces slightly different
results.  For instance, the best fit and first fit algorithms tend to include
fewer files than the worst fit and alternate fit algorithms, even if they use
the disc space more efficiently.

Usually, for a given set of circumstances, it will be obvious to a human which
algorithm is the right one to use, based on trade-offs between number of files
included and ideal space utilization.  It's a little more difficult to do this
programmatically.  For Cedar Backup's purposes (i.e. trying to fit a small
number of collect-directory tarfiles onto a disc), worst-fit is probably the
best choice if the goal is to include as many of the collect directories as
possible.

@sort: firstFit, bestFit, worstFit, alternateFit

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

#######################################################################
# Public functions
#######################################################################

######################
# firstFit() function
######################
    
    74 -def firstFit(items, capacity):
    75 76 """ 77 Implements the first-fit knapsack algorithm. 78 79 The first-fit algorithm proceeds through an unsorted list of items until 80 running out of items or meeting capacity exactly. If capacity is exceeded, 81 the item that caused capacity to be exceeded is thrown away and the next one 82 is tried. This algorithm generally performs more poorly than the other 83 algorithms both in terms of capacity utilization and item utilization, but 84 can be as much as an order of magnitude faster on large lists of items 85 because it doesn't require any sorting. 86 87 The "size" values in the items and capacity arguments must be comparable, 88 but they are unitless from the perspective of this function. Zero-sized 89 items and capacity are considered degenerate cases. If capacity is zero, 90 no items fit, period, even if the items list contains zero-sized items. 91 92 The dictionary is indexed by its key, and then includes its key. This 93 seems kind of strange on first glance. It works this way to facilitate 94 easy sorting of the list on key if needed. 95 96 The function assumes that the list of items may be used destructively, if 97 needed. This avoids the overhead of having the function make a copy of the 98 list, if this is not required. Callers should pass C{items.copy()} if they 99 do not want their version of the list modified. 100 101 The function returns a list of chosen items and the unitless amount of 102 capacity used by the items. 
103 104 @param items: Items to operate on 105 @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 106 107 @param capacity: Capacity of container to fit to 108 @type capacity: integer 109 110 @returns: Tuple C{(items, used)} as described above 111 """ 112 113 # Use dict since insert into dict is faster than list append 114 included = { } 115 116 # Search the list as it stands (arbitrary order) 117 used = 0 118 remaining = capacity 119 for key in items.keys(): 120 if remaining == 0: 121 break 122 if remaining - items[key][1] >= 0: 123 included[key] = None 124 used += items[key][1] 125 remaining -= items[key][1] 126 127 # Return results 128 return (included.keys(), used)
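The behavior described above can be sketched as a standalone reimplementation in modern Python (illustrative only, since the listing itself is Python 2 and returns dictionary key views rather than lists):

```python
def first_fit(items, capacity):
    """Illustrative first-fit: items maps key -> (key, size)."""
    included, used, remaining = [], 0, capacity
    for key in items:               # arbitrary (insertion) order, no sorting
        if remaining == 0:          # met capacity exactly, stop early
            break
        if remaining - items[key][1] >= 0:
            included.append(key)    # item fits, keep it
            used += items[key][1]
            remaining -= items[key][1]
    return included, used

items = {"a": ("a", 100), "b": ("b", 200), "c": ("c", 150)}
chosen, used = first_fit(items, 250)   # "b" would overflow 250 and is skipped
```

Note the degenerate case called out in the docstring: with zero capacity the loop breaks immediately and nothing fits.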
    129  
    130  
    131  ##################### 
    132  # bestFit() function 
    133  ##################### 
    134  
    135 -def bestFit(items, capacity):
    136  
    137     """ 
    138     Implements the best-fit knapsack algorithm. 
    139  
    140     The best-fit algorithm proceeds through a sorted list of items (sorted from 
    141     largest to smallest) until running out of items or meeting capacity exactly. 
    142     If capacity is exceeded, the item that caused capacity to be exceeded is 
    143     thrown away and the next one is tried.  The algorithm effectively includes 
    144     the minimum number of items possible in its search for optimal capacity 
    145     utilization.  For large lists of mixed-size items, it's not unusual to see 
    146     the algorithm achieve 100% capacity utilization by including fewer than 1% 
    147     of the items.  Probably because it often has to look at fewer of the items 
    148     before completing, it tends to be a little faster than the worst-fit or 
    149     alternate-fit algorithms. 
    150  
    151     The "size" values in the items and capacity arguments must be comparable, 
    152     but they are unitless from the perspective of this function.  Zero-sized 
    153     items and capacity are considered degenerate cases.  If capacity is zero, 
    154     no items fit, period, even if the items list contains zero-sized items. 
    155  
    156     The dictionary is indexed by its key, and then includes its key.  This 
    157     seems kind of strange at first glance.  It works this way to facilitate 
    158     easy sorting of the list on key if needed. 
    159  
    160     The function assumes that the list of items may be used destructively, if 
    161     needed.  This avoids the overhead of having the function make a copy of the 
    162     list, if this is not required.  Callers should pass C{items.copy()} if they 
    163     do not want their version of the list modified. 
    164  
    165     The function returns a list of chosen items and the unitless amount of 
    166     capacity used by the items. 
    167  
    168     @param items: Items to operate on 
    169     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 
    170  
    171     @param capacity: Capacity of container to fit to 
    172     @type capacity: integer 
    173  
    174     @returns: Tuple C{(items, used)} as described above 
    175     """ 
    176  
    177     # Use dict since insert into dict is faster than list append 
    178     included = { } 
    179  
    180     # Sort the list from largest to smallest 
    181     itemlist = items.items() 
    182     itemlist.sort(lambda x, y: cmp(y[1][1], x[1][1]))  # sort descending 
    183     keys = [] 
    184     for item in itemlist: 
    185        keys.append(item[0]) 
    186  
    187     # Search the list 
    188     used = 0 
    189     remaining = capacity 
    190     for key in keys: 
    191        if remaining == 0: 
    192           break 
    193        if remaining - items[key][1] >= 0: 
    194           included[key] = None 
    195           used += items[key][1] 
    196           remaining -= items[key][1] 
    197  
    198     # Return the results 
    199     return (included.keys(), used) 
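The sort-then-scan structure above can be sketched in modern Python (illustrative only; the listing relies on the Python 2 C{cmp}-style sort, while current Python uses a key function):

```python
def best_fit(items, capacity):
    """Illustrative best-fit: consider the largest items first."""
    keys = sorted(items, key=lambda k: items[k][1], reverse=True)  # descending
    included, used, remaining = [], 0, capacity
    for key in keys:
        if remaining == 0:
            break
        if remaining - items[key][1] >= 0:
            included.append(key)
            used += items[key][1]
            remaining -= items[key][1]
    return included, used

items = {"tiny": ("tiny", 10), "big": ("big", 240), "mid": ("mid", 100)}
# "big" is taken first, "mid" would overflow 250, "tiny" completes the fit
```

The worst-fit variant differs only in the sort direction (ascending rather than descending), which is why the two listings are otherwise identical.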
    200  
    201  
    202  ###################### 
    203  # worstFit() function 
    204  ###################### 
    205  
    206 -def worstFit(items, capacity):
    207  
    208     """ 
    209     Implements the worst-fit knapsack algorithm. 
    210  
    211     The worst-fit algorithm proceeds through a sorted list of items (sorted 
    212     from smallest to largest) until running out of items or meeting capacity 
    213     exactly.  If capacity is exceeded, the item that caused capacity to be 
    214     exceeded is thrown away and the next one is tried.  The algorithm 
    215     effectively includes the maximum number of items possible in its search for 
    216     optimal capacity utilization.  It tends to be somewhat slower than either 
    217     the best-fit or alternate-fit algorithm, probably because on average it has 
    218     to look at more items before completing. 
    219  
    220     The "size" values in the items and capacity arguments must be comparable, 
    221     but they are unitless from the perspective of this function.  Zero-sized 
    222     items and capacity are considered degenerate cases.  If capacity is zero, 
    223     no items fit, period, even if the items list contains zero-sized items. 
    224  
    225     The dictionary is indexed by its key, and then includes its key.  This 
    226     seems kind of strange at first glance.  It works this way to facilitate 
    227     easy sorting of the list on key if needed. 
    228  
    229     The function assumes that the list of items may be used destructively, if 
    230     needed.  This avoids the overhead of having the function make a copy of the 
    231     list, if this is not required.  Callers should pass C{items.copy()} if they 
    232     do not want their version of the list modified. 
    233  
    234     The function returns a list of chosen items and the unitless amount of 
    235     capacity used by the items. 
    236  
    237     @param items: Items to operate on 
    238     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 
    239  
    240     @param capacity: Capacity of container to fit to 
    241     @type capacity: integer 
    242  
    243     @returns: Tuple C{(items, used)} as described above 
    244     """ 
    245  
    246     # Use dict since insert into dict is faster than list append 
    247     included = { } 
    248  
    249     # Sort the list from smallest to largest 
    250     itemlist = items.items() 
    251     itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1]))  # sort ascending 
    252     keys = [] 
    253     for item in itemlist: 
    254        keys.append(item[0]) 
    255  
    256     # Search the list 
    257     used = 0 
    258     remaining = capacity 
    259     for key in keys: 
    260        if remaining == 0: 
    261           break 
    262        if remaining - items[key][1] >= 0: 
    263           included[key] = None 
    264           used += items[key][1] 
    265           remaining -= items[key][1] 
    266  
    267     # Return results 
    268     return (included.keys(), used) 
    269  
    270  
    271  ########################## 
    272  # alternateFit() function 
    273  ########################## 
    274  
    275 -def alternateFit(items, capacity):
    276  
    277     """ 
    278     Implements the alternate-fit knapsack algorithm. 
    279  
    280     This algorithm (which I'm calling "alternate-fit" as in "alternate from one 
    281     to the other") tries to balance small and large items to achieve better 
    282     end-of-disk performance.  Instead of just working one direction through a 
    283     list, it alternately works from the start and end of a sorted list (sorted 
    284     from smallest to largest), throwing away any item which causes capacity to 
    285     be exceeded.  The algorithm tends to be slower than the best-fit and 
    286     first-fit algorithms, and slightly faster than the worst-fit algorithm, 
    287     probably because of the number of items it considers on average before 
    288     completing.  It often achieves slightly better capacity utilization than the 
    289     worst-fit algorithm, while including slightly fewer items. 
    290  
    291     The "size" values in the items and capacity arguments must be comparable, 
    292     but they are unitless from the perspective of this function.  Zero-sized 
    293     items and capacity are considered degenerate cases.  If capacity is zero, 
    294     no items fit, period, even if the items list contains zero-sized items. 
    295  
    296     The dictionary is indexed by its key, and then includes its key.  This 
    297     seems kind of strange at first glance.  It works this way to facilitate 
    298     easy sorting of the list on key if needed. 
    299  
    300     The function assumes that the list of items may be used destructively, if 
    301     needed.  This avoids the overhead of having the function make a copy of the 
    302     list, if this is not required.  Callers should pass C{items.copy()} if they 
    303     do not want their version of the list modified. 
    304  
    305     The function returns a list of chosen items and the unitless amount of 
    306     capacity used by the items. 
    307  
    308     @param items: Items to operate on 
    309     @type items: dictionary, keyed on item, of C{(item, size)} tuples, item as string and size as integer 
    310  
    311     @param capacity: Capacity of container to fit to 
    312     @type capacity: integer 
    313  
    314     @returns: Tuple C{(items, used)} as described above 
    315     """ 
    316  
    317     # Use dict since insert into dict is faster than list append 
    318     included = { } 
    319  
    320     # Sort the list from smallest to largest 
    321     itemlist = items.items() 
    322     itemlist.sort(lambda x, y: cmp(x[1][1], y[1][1]))  # sort ascending 
    323     keys = [] 
    324     for item in itemlist: 
    325        keys.append(item[0]) 
    326  
    327     # Search the list 
    328     used = 0 
    329     remaining = capacity 
    330  
    331     front = keys[0:len(keys)/2] 
    332     back = keys[len(keys)/2:len(keys)] 
    333     back.reverse() 
    334  
    335     i = 0 
    336     j = 0 
    337  
    338     while remaining > 0 and (i < len(front) or j < len(back)): 
    339        if i < len(front): 
    340           if remaining - items[front[i]][1] >= 0: 
    341              included[front[i]] = None 
    342              used += items[front[i]][1] 
    343              remaining -= items[front[i]][1] 
    344           i += 1 
    345        if j < len(back): 
    346           if remaining - items[back[j]][1] >= 0: 
    347              included[back[j]] = None 
    348              used += items[back[j]][1] 
    349              remaining -= items[back[j]][1] 
    350           j += 1 
    351  
    352     # Return results 
    353     return (included.keys(), used) 
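The front/back alternation above can be sketched in modern Python (illustrative reimplementation; note that Python 2's C{len(keys)/2} is the integer division written C{len(keys) // 2} here):

```python
def alternate_fit(items, capacity):
    """Illustrative alternate-fit: alternate smallest and largest items."""
    keys = sorted(items, key=lambda k: items[k][1])     # ascending by size
    front = keys[:len(keys) // 2]                       # smallest half
    back = keys[len(keys) // 2:]
    back.reverse()                                      # largest to smallest
    included, used, remaining = [], 0, capacity
    i = j = 0
    while remaining > 0 and (i < len(front) or j < len(back)):
        if i < len(front):                              # take a small item...
            size = items[front[i]][1]
            if remaining - size >= 0:
                included.append(front[i])
                used += size
                remaining -= size
            i += 1
        if j < len(back):                               # ...then a large one
            size = items[back[j]][1]
            if remaining - size >= 0:
                included.append(back[j])
                used += size
                remaining -= size
            j += 1
    return included, used
```

For items of sizes 1, 2, 3, and 10 with capacity 13, the search order is 1, 10, 2, 3: the smallest and largest items are interleaved rather than scanned in one direction.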
    354

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.stage-pysrc.html0000664000175000017500000045173712642035645027326 0ustar pronovicpronovic00000000000000 CedarBackup2.actions.stage

    Source Code for Module CedarBackup2.actions.stage

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Implements the standard 'stage' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'stage' action. 
     40  @sort: executeStage 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import time 
     52  import logging 
     53   
     54  # Cedar Backup modules 
     55  from CedarBackup2.peer import RemotePeer, LocalPeer 
     56  from CedarBackup2.util import getUidGid, changeOwnership, isStartOfWeek, isRunningAsRoot 
     57  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
     58  from CedarBackup2.actions.util import writeIndicatorFile 
     59   
     60   
     61  ######################################################################## 
     62  # Module-wide constants and variables 
     63  ######################################################################## 
     64   
     65  logger = logging.getLogger("CedarBackup2.log.actions.stage") 
     66   
     67   
     68  ######################################################################## 
     69  # Public functions 
     70  ######################################################################## 
     71   
     72  ########################## 
     73  # executeStage() function 
     74  ########################## 
     75   
    
    76 -def executeStage(configPath, options, config):
    77 """ 78 Executes the stage backup action. 79 80 @note: The daily directory is derived once and then we stick with it, just 81 in case a backup happens to span midnite. 82 83 @note: As portions of the stage action is complete, we will write various 84 indicator files so that it's obvious what actions have been completed. Each 85 peer gets a stage indicator in its collect directory, and then the master 86 gets a stage indicator in its daily staging directory. The store process 87 uses the master's stage indicator to decide whether a directory is ready to 88 be stored. Currently, nothing uses the indicator at each peer, and it 89 exists for reference only. 90 91 @param configPath: Path to configuration file on disk. 92 @type configPath: String representing a path on disk. 93 94 @param options: Program command-line options. 95 @type options: Options object. 96 97 @param config: Program configuration. 98 @type config: Config object. 99 100 @raise ValueError: Under many generic error conditions 101 @raise IOError: If there are problems reading or writing files. 
102 """ 103 logger.debug("Executing the 'stage' action.") 104 if config.options is None or config.stage is None: 105 raise ValueError("Stage configuration is not properly filled in.") 106 dailyDir = _getDailyDir(config) 107 localPeers = _getLocalPeers(config) 108 remotePeers = _getRemotePeers(config) 109 allPeers = localPeers + remotePeers 110 stagingDirs = _createStagingDirs(config, dailyDir, allPeers) 111 for peer in allPeers: 112 logger.info("Staging peer [%s].", peer.name) 113 ignoreFailures = _getIgnoreFailuresFlag(options, config, peer) 114 if not peer.checkCollectIndicator(): 115 if not ignoreFailures: 116 logger.error("Peer [%s] was not ready to be staged.", peer.name) 117 else: 118 logger.info("Peer [%s] was not ready to be staged.", peer.name) 119 continue 120 logger.debug("Found collect indicator.") 121 targetDir = stagingDirs[peer.name] 122 if isRunningAsRoot(): 123 # Since we're running as root, we can change ownership 124 ownership = getUidGid(config.options.backupUser, config.options.backupGroup) 125 logger.debug("Using target dir [%s], ownership [%d:%d].", targetDir, ownership[0], ownership[1]) 126 else: 127 # Non-root cannot change ownership, so don't set it 128 ownership = None 129 logger.debug("Using target dir [%s], ownership [None].", targetDir) 130 try: 131 count = peer.stagePeer(targetDir=targetDir, ownership=ownership) # note: utilize effective user's default umask 132 logger.info("Staged %d files for peer [%s].", count, peer.name) 133 peer.writeStageIndicator() 134 except (ValueError, IOError, OSError), e: 135 logger.error("Error staging [%s]: %s", peer.name, e) 136 writeIndicatorFile(dailyDir, STAGE_INDICATOR, config.options.backupUser, config.options.backupGroup) 137 logger.info("Executed the 'stage' action successfully.")
    138  
    139  
    140  ######################################################################## 
    141  # Private utility functions 
    142  ######################################################################## 
    143  
    144  ################################ 
    145  # _createStagingDirs() function 
    146  ################################ 
    147  
    148 -def _createStagingDirs(config, dailyDir, peers):
    149 """ 150 Creates staging directories as required. 151 152 The main staging directory is the passed in daily directory, something like 153 C{staging/2002/05/23}. Then, individual peers get their own directories, 154 i.e. C{staging/2002/05/23/host}. 155 156 @param config: Config object. 157 @param dailyDir: Daily staging directory. 158 @param peers: List of all configured peers. 159 160 @return: Dictionary mapping peer name to staging directory. 161 """ 162 mapping = {} 163 if os.path.isdir(dailyDir): 164 logger.warn("Staging directory [%s] already existed.", dailyDir) 165 else: 166 try: 167 logger.debug("Creating staging directory [%s].", dailyDir) 168 os.makedirs(dailyDir) 169 for path in [ dailyDir, os.path.join(dailyDir, ".."), os.path.join(dailyDir, "..", ".."), ]: 170 changeOwnership(path, config.options.backupUser, config.options.backupGroup) 171 except Exception, e: 172 raise Exception("Unable to create staging directory: %s" % e) 173 for peer in peers: 174 peerDir = os.path.join(dailyDir, peer.name) 175 mapping[peer.name] = peerDir 176 if os.path.isdir(peerDir): 177 logger.warn("Peer staging directory [%s] already existed.", peerDir) 178 else: 179 try: 180 logger.debug("Creating peer staging directory [%s].", peerDir) 181 os.makedirs(peerDir) 182 changeOwnership(peerDir, config.options.backupUser, config.options.backupGroup) 183 except Exception, e: 184 raise Exception("Unable to create staging directory: %s" % e) 185 return mapping
    186  
    187  
    188  ######################################################################## 
    189  # Private attribute "getter" functions 
    190  ######################################################################## 
    191  
    192  #################################### 
    193  # _getIgnoreFailuresFlag() function 
    194  #################################### 
    195  
    196 -def _getIgnoreFailuresFlag(options, config, peer):
    197 """ 198 Gets the ignore failures flag based on options, configuration, and peer. 199 @param options: Options object 200 @param config: Configuration object 201 @param peer: Peer to check 202 @return: Whether to ignore stage failures for this peer 203 """ 204 logger.debug("Ignore failure mode for this peer: %s", peer.ignoreFailureMode) 205 if peer.ignoreFailureMode is None or peer.ignoreFailureMode == "none": 206 return False 207 elif peer.ignoreFailureMode == "all": 208 return True 209 else: 210 if options.full or isStartOfWeek(config.options.startingDay): 211 return peer.ignoreFailureMode == "weekly" 212 else: 213 return peer.ignoreFailureMode == "daily"
    214  
    215  
    216  ########################## 
    217  # _getDailyDir() function 
    218  ########################## 
    219  
    220 -def _getDailyDir(config):
    221 """ 222 Gets the daily staging directory. 223 224 This is just a directory in the form C{staging/YYYY/MM/DD}, i.e. 225 C{staging/2000/10/07}, except it will be an absolute path based on 226 C{config.stage.targetDir}. 227 228 @param config: Config object 229 230 @return: Path of daily staging directory. 231 """ 232 dailyDir = os.path.join(config.stage.targetDir, time.strftime(DIR_TIME_FORMAT)) 233 logger.debug("Daily staging directory is [%s].", dailyDir) 234 return dailyDir
    235  
    236  
    237  ############################ 
    238  # _getLocalPeers() function 
    239  ############################ 
    240  
    241 -def _getLocalPeers(config):
    242 """ 243 Return a list of L{LocalPeer} objects based on configuration. 244 @param config: Config object. 245 @return: List of L{LocalPeer} objects. 246 """ 247 localPeers = [] 248 configPeers = None 249 if config.stage.hasPeers(): 250 logger.debug("Using list of local peers from stage configuration.") 251 configPeers = config.stage.localPeers 252 elif config.peers is not None and config.peers.hasPeers(): 253 logger.debug("Using list of local peers from peers configuration.") 254 configPeers = config.peers.localPeers 255 if configPeers is not None: 256 for peer in configPeers: 257 localPeer = LocalPeer(peer.name, peer.collectDir, peer.ignoreFailureMode) 258 localPeers.append(localPeer) 259 logger.debug("Found local peer: [%s]", localPeer.name) 260 return localPeers
    261  
    262  
    263  ############################# 
    264  # _getRemotePeers() function 
    265  ############################# 
    266  
    267 -def _getRemotePeers(config):
    268 """ 269 Return a list of L{RemotePeer} objects based on configuration. 270 @param config: Config object. 271 @return: List of L{RemotePeer} objects. 272 """ 273 remotePeers = [] 274 configPeers = None 275 if config.stage.hasPeers(): 276 logger.debug("Using list of remote peers from stage configuration.") 277 configPeers = config.stage.remotePeers 278 elif config.peers is not None and config.peers.hasPeers(): 279 logger.debug("Using list of remote peers from peers configuration.") 280 configPeers = config.peers.remotePeers 281 if configPeers is not None: 282 for peer in configPeers: 283 remoteUser = _getRemoteUser(config, peer) 284 localUser = _getLocalUser(config) 285 rcpCommand = _getRcpCommand(config, peer) 286 remotePeer = RemotePeer(peer.name, peer.collectDir, config.options.workingDir, 287 remoteUser, rcpCommand, localUser, 288 ignoreFailureMode=peer.ignoreFailureMode) 289 remotePeers.append(remotePeer) 290 logger.debug("Found remote peer: [%s]", remotePeer.name) 291 return remotePeers
    292  
    293  
    294  ############################ 
    295  # _getRemoteUser() function 
    296  ############################ 
    297  
    298 -def _getRemoteUser(config, remotePeer):
    299 """ 300 Gets the remote user associated with a remote peer. 301 Use peer's if possible, otherwise take from options section. 302 @param config: Config object. 303 @param remotePeer: Configuration-style remote peer object. 304 @return: Name of remote user associated with remote peer. 305 """ 306 if remotePeer.remoteUser is None: 307 return config.options.backupUser 308 return remotePeer.remoteUser
    309  
    310  
    311  ########################### 
    312  # _getLocalUser() function 
    313  ########################### 
    314  
    315 -def _getLocalUser(config):
    316 """ 317 Gets the remote user associated with a remote peer. 318 @param config: Config object. 319 @return: Name of local user that should be used 320 """ 321 if not isRunningAsRoot(): 322 return None 323 return config.options.backupUser
    324  
    325  
    326  ############################ 
    327  # _getRcpCommand() function 
    328  ############################ 
    329  
    330 -def _getRcpCommand(config, remotePeer):
    331 """ 332 Gets the RCP command associated with a remote peer. 333 Use peer's if possible, otherwise take from options section. 334 @param config: Config object. 335 @param remotePeer: Configuration-style remote peer object. 336 @return: RCP command associated with remote peer. 337 """ 338 if remotePeer.rcpCommand is None: 339 return config.options.rcpCommand 340 return remotePeer.rcpCommand
    341

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.dvdwriter-pysrc.html0000664000175000017500000110414312642035645030257 0ustar pronovicpronovic00000000000000 CedarBackup2.writers.dvdwriter

    Source Code for Module CedarBackup2.writers.dvdwriter

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007-2008,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Provides functionality related to DVD writer devices. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides functionality related to DVD writer devices. 
     40   
     41  @sort: MediaDefinition, DvdWriter, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW 
     42   
     43  @var MEDIA_DVDPLUSR: Constant representing DVD+R media. 
     44  @var MEDIA_DVDPLUSRW: Constant representing DVD+RW media. 
     45   
     46  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     47  @author: Dmitry Rutsky <rutsky@inbox.ru> 
     48  """ 
     49   
     50  ######################################################################## 
     51  # Imported modules 
     52  ######################################################################## 
     53   
     54  # System modules 
     55  import os 
     56  import re 
     57  import logging 
     58  import tempfile 
     59  import time 
     60   
     61  # Cedar Backup modules 
     62  from CedarBackup2.writers.util import IsoImage 
     63  from CedarBackup2.util import resolveCommand, executeCommand 
     64  from CedarBackup2.util import convertSize, displayBytes, encodePath 
     65  from CedarBackup2.util import UNIT_SECTORS, UNIT_BYTES, UNIT_GBYTES 
     66  from CedarBackup2.writers.util import validateDevice, validateDriveSpeed 
     67   
     68   
     69  ######################################################################## 
     70  # Module-wide constants and variables 
     71  ######################################################################## 
     72   
     73  logger = logging.getLogger("CedarBackup2.log.writers.dvdwriter") 
     74   
     75  MEDIA_DVDPLUSR  = 1 
     76  MEDIA_DVDPLUSRW = 2 
     77   
     78  GROWISOFS_COMMAND = [ "growisofs", ] 
     79  EJECT_COMMAND     = [ "eject", ] 
    
     80  
     81  
     82  ######################################################################## 
     83  # MediaDefinition class definition 
     84  ######################################################################## 
     85  
     86 -class MediaDefinition(object):
     87  
     88     """ 
     89     Class encapsulating information about DVD media definitions. 
     90  
     91     The following media types are accepted: 
     92  
     93        - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity) 
     94        - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity) 
     95  
     96     Note that the capacity attribute returns capacity in terms of ISO sectors 
     97     (C{util.ISO_SECTOR_SIZE}).  This is for compatibility with the CD writer 
     98     functionality. 
     99  
    100     The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes 
    101     of 1024*1024*1024 bytes per gigabyte. 
    102  
    103     @sort: __init__, mediaType, rewritable, capacity 
    104     """ 
    105  
    106 - def __init__(self, mediaType):
    107        """ 
    108        Creates a media definition for the indicated media type. 
    109        @param mediaType: Type of the media, as discussed above. 
    110        @raise ValueError: If the media type is unknown or unsupported. 
    111        """ 
    112        self._mediaType = None 
    113        self._rewritable = False 
    114        self._capacity = 0.0 
    115        self._setValues(mediaType) 
    116  
    117 - def _setValues(self, mediaType):
    118        """ 
    119        Sets values based on media type. 
    120        @param mediaType: Type of the media, as discussed above. 
    121        @raise ValueError: If the media type is unknown or unsupported. 
    122        """ 
    123        if mediaType not in [MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW, ]: 
    124           raise ValueError("Invalid media type %d." % mediaType) 
    125        self._mediaType = mediaType 
    126        if self._mediaType == MEDIA_DVDPLUSR: 
    127           self._rewritable = False 
    128           self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)  # 4.4 "true" GB = 4.7 "marketing" GB 
    129        elif self._mediaType == MEDIA_DVDPLUSRW: 
    130           self._rewritable = True 
    131           self._capacity = convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)  # 4.4 "true" GB = 4.7 "marketing" GB 
    132  
    133 - def _getMediaType(self):
    134        """ 
    135        Property target used to get the media type value. 
    136        """ 
    137        return self._mediaType 
    138  
    139 - def _getRewritable(self):
    140        """ 
    141        Property target used to get the rewritable flag value. 
    142        """ 
    143        return self._rewritable 
    144  
    145 - def _getCapacity(self):
    146        """ 
    147        Property target used to get the capacity value. 
    148        """ 
    149        return self._capacity 
    150  
    151     mediaType = property(_getMediaType, None, None, doc="Configured media type.") 
    152     rewritable = property(_getRewritable, None, None, doc="Boolean indicating whether the media is rewritable.") 
    153     capacity = property(_getCapacity, None, None, doc="Total capacity of media in 2048-byte sectors.") 
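The C{convertSize(4.4, UNIT_GBYTES, UNIT_SECTORS)} calls above amount to a simple unit conversion; a sketch of the arithmetic (assuming 2048-byte ISO-9660 sectors and "true" 2**30-byte gigabytes, as the docstring describes):

```python
ISO_SECTOR_SIZE = 2048.0          # bytes per ISO-9660 sector

def gb_to_sectors(gb):
    """Convert 'true' gigabytes (2**30 bytes each) to 2048-byte sectors."""
    return gb * (1024.0 ** 3) / ISO_SECTOR_SIZE

# 4.4 "true" GB works out to roughly 2.3 million sectors, which is the
# familiar 4.7 "marketing" GB (4.7 * 10**9 bytes) printed on DVD+R/+RW media
capacity = gb_to_sectors(4.4)
```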

########################################################################
# MediaCapacity class definition
########################################################################

class MediaCapacity(object):

   """
   Class encapsulating information about DVD media capacity.

   Space used and space available do not include any information about media
   lead-in or other overhead.

   @sort: __init__, bytesUsed, bytesAvailable, totalCapacity, utilized
   """

   def __init__(self, bytesUsed, bytesAvailable):
      """
      Initializes a capacity object.
      @raise ValueError: If the bytes used and available values are not floats.
      """
      self._bytesUsed = float(bytesUsed)
      self._bytesAvailable = float(bytesAvailable)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return "utilized %s of %s (%.2f%%)" % (displayBytes(self.bytesUsed), displayBytes(self.totalCapacity), self.utilized)

   def _getBytesUsed(self):
      """
      Property target used to get the bytes-used value.
      """
      return self._bytesUsed

   def _getBytesAvailable(self):
      """
      Property target used to get the bytes-available value.
      """
      return self._bytesAvailable

   def _getTotalCapacity(self):
      """
      Property target used to get the total capacity (used + available).
      """
      return self.bytesUsed + self.bytesAvailable

   def _getUtilized(self):
      """
      Property target used to get the percent of capacity which is utilized.
      """
      if self.bytesAvailable <= 0.0:
         return 100.0
      elif self.bytesUsed <= 0.0:
         return 0.0
      return (self.bytesUsed / self.totalCapacity) * 100.0

   bytesUsed = property(_getBytesUsed, None, None, doc="Space used on disc, in bytes.")
   bytesAvailable = property(_getBytesAvailable, None, None, doc="Space available on disc, in bytes.")
   totalCapacity = property(_getTotalCapacity, None, None, doc="Total capacity of the disc, in bytes.")
   utilized = property(_getUtilized, None, None, doc="Percentage of the total capacity which is utilized.")
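The utilization logic, including its two edge cases, can be exercised in isolation. This is a minimal sketch of the `_getUtilized` calculation as a plain function (names here are illustrative, not part of the class API):

```python
def utilized(bytes_used, bytes_available):
    """Percent of total capacity in use, mirroring the edge cases of _getUtilized()."""
    if bytes_available <= 0.0:
        return 100.0   # nothing free: treat the media as fully utilized
    elif bytes_used <= 0.0:
        return 0.0     # nothing used yet
    return (bytes_used / (bytes_used + bytes_available)) * 100.0
```

Note that the edge cases are checked first, so a disc with zero available space reports 100% even if `bytes_used` is also zero, avoiding a division by zero in the general formula.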

########################################################################
# _ImageProperties class definition
########################################################################

class _ImageProperties(object):

   """
   Simple value object to hold image properties for C{DvdWriter}.
   """

   def __init__(self):
      self.newDisc = False
      self.tmpdir = None
      self.mediaLabel = None
      self.entries = None  # dict mapping path to graft point

########################################################################
# DvdWriter class definition
########################################################################

class DvdWriter(object):

   ######################
   # Class documentation
   ######################

   """
   Class representing a device that knows how to write some kinds of DVD media.

   Summary
   =======

   This is a class representing a device that knows how to write some kinds
   of DVD media.  It provides common operations for the device, such as
   ejecting the media and writing data to the media.

   This class is implemented in terms of the C{eject} and C{growisofs}
   utilities, both of which should be available on most UN*X platforms.

   Image Writer Interface
   ======================

   The following methods make up the "image writer" interface shared
   with other kinds of writers::

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()

   Only these methods will be used by other Cedar Backup functionality
   that expects a compatible image writer.

   The media attribute is also assumed to be available.

   Unlike the C{CdWriter}, the C{DvdWriter} can only operate in terms of
   filesystem devices, not SCSI devices.  So, although the constructor
   interface accepts a SCSI device parameter for the sake of compatibility,
   it's not used.

   Media Types
   ===========

   This class knows how to write to DVD+R and DVD+RW media, represented
   by the following constants:

      - C{MEDIA_DVDPLUSR}: DVD+R media (4.4 GB capacity)
      - C{MEDIA_DVDPLUSRW}: DVD+RW media (4.4 GB capacity)

   The difference is that DVD+RW media can be rewritten, while DVD+R media
   cannot be (although at present, C{DvdWriter} does not really
   differentiate between rewritable and non-rewritable media).

   The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes
   of 1024*1024*1024 bytes per gigabyte.

   The underlying C{growisofs} utility does support other kinds of media
   (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently
   than standard DVD+R and DVD+RW media.  I don't support these other kinds
   of media because I haven't had any opportunity to work with them.  The
   same goes for dual-layer media of any type.

   Device Attributes vs. Media Attributes
   ======================================

   As with the cdwriter functionality, a given dvdwriter instance has two
   different kinds of attributes associated with it, which I call device
   attributes and media attributes.

   Device attributes are things which can be determined without looking at
   the media.  Media attributes are attributes which vary depending on the
   state of the media.  In general, device attributes are available via
   instance variables and are constant over the life of an object, while
   media attributes can be retrieved through method calls.

   Compared to cdwriters, dvdwriters have very few attributes.  This is due
   to differences between the way C{growisofs} works relative to
   C{cdrecord}.

   Media Capacity
   ==============

   One major difference between the C{cdrecord}/C{mkisofs} utilities used by
   the cdwriter class and the C{growisofs} utility used here is that the
   process of estimating remaining capacity and image size is more
   straightforward with C{cdrecord}/C{mkisofs} than with C{growisofs}.

   In this class, remaining capacity is calculated by doing a dry run of
   C{growisofs} and grabbing some information from the output of that
   command.  Image size is estimated by asking the C{IsoImage} class for an
   estimate and then adding on a "fudge factor" determined through
   experimentation.

   Testing
   =======

   It's rather difficult to test this code in an automated fashion, even if
   you have access to a physical DVD writer drive.  It's even more difficult
   to test it if you are running on some build daemon (think of a Debian
   autobuilder) which can't be expected to have any hardware or any media
   that you could write to.

   Because of this, some of the implementation below is in terms of static
   methods that are supposed to take defined actions based on their
   arguments.  Public methods are then implemented in terms of a series of
   calls to simplistic static methods.  This way, we can test as much as
   possible of the "difficult" functionality via testing the static methods,
   while hoping that if the static methods are called appropriately, things
   will work properly.  It's not perfect, but it's much better than no
   testing at all.

   @sort: __init__, isRewritable, retrieveCapacity, openTray, closeTray, refreshMedia,
          initializeImage, addImageEntry, writeImage, setImageNewDisc, getEstimatedImageSize,
          _writeImage, _getEstimatedImageSize, _searchForOverburn, _buildWriteArgs,
          device, scsiId, hardwareId, driveSpeed, media, deviceHasTray, deviceCanEject
   """

   ##############
   # Constructor
   ##############
   def __init__(self, device, scsiId=None, driveSpeed=None,
                mediaType=MEDIA_DVDPLUSRW, noEject=False,
                refreshMediaDelay=0, ejectDelay=0, unittest=False):
      """
      Initializes a DVD writer object.

      Since C{growisofs} can only address devices using the device path (i.e.
      C{/dev/dvd}), the hardware id will always be set based on the device.  If
      passed in, it will be saved for reference purposes only.

      We have no way to query the device to ask whether it has a tray or can be
      safely opened and closed.  So, the C{noEject} flag is used to set these
      values.  If C{noEject=False}, then we assume a tray exists and open/close
      is safe.  If C{noEject=True}, then we assume that there is no tray and
      open/close is not safe.

      @note: The C{unittest} parameter should never be set to C{True}
      outside of Cedar Backup code.  It is intended for use in unit testing
      Cedar Backup internals and has no other sensible purpose.

      @param device: Filesystem device associated with this writer.
      @type device: Absolute path to a filesystem device, i.e. C{/dev/dvd}

      @param scsiId: SCSI id for the device (optional, for reference only).
      @type scsiId: If provided, SCSI id in the form C{[<method>:]scsibus,target,lun}

      @param driveSpeed: Speed at which the drive writes.
      @type driveSpeed: Use C{2} for 2x device, etc. or C{None} to use device default.

      @param mediaType: Type of the media that is assumed to be in the drive.
      @type mediaType: One of the valid media types, as discussed above.

      @param noEject: Tells Cedar Backup that the device cannot safely be ejected.
      @type noEject: Boolean true/false

      @param refreshMediaDelay: Refresh media delay to use, if any.
      @type refreshMediaDelay: Number of seconds, an integer >= 0

      @param ejectDelay: Eject delay to use, if any.
      @type ejectDelay: Number of seconds, an integer >= 0

      @param unittest: Turns off certain validations, for use in unit testing.
      @type unittest: Boolean true/false

      @raise ValueError: If the device is not valid for some reason.
      @raise ValueError: If the SCSI id is not in a valid form.
      @raise ValueError: If the drive speed is not an integer >= 1.
      """
      if scsiId is not None:
         logger.warn("SCSI id [%s] will be ignored by DvdWriter.", scsiId)
      self._image = None  # optionally filled in by initializeImage()
      self._device = validateDevice(device, unittest)
      self._scsiId = scsiId  # not validated, because it's just for reference
      self._driveSpeed = validateDriveSpeed(driveSpeed)
      self._media = MediaDefinition(mediaType)
      self._refreshMediaDelay = refreshMediaDelay
      self._ejectDelay = ejectDelay
      if noEject:
         self._deviceHasTray = False
         self._deviceCanEject = False
      else:
         self._deviceHasTray = True  # just assume
         self._deviceCanEject = True  # just assume


   #############
   # Properties
   #############
   def _getDevice(self):
      """
      Property target used to get the device value.
      """
      return self._device

   def _getScsiId(self):
      """
      Property target used to get the SCSI id value.
      """
      return self._scsiId

   def _getHardwareId(self):
      """
      Property target used to get the hardware id value.
      """
      return self._device

   def _getDriveSpeed(self):
      """
      Property target used to get the drive speed.
      """
      return self._driveSpeed

   def _getMedia(self):
      """
      Property target used to get the media description.
      """
      return self._media

   def _getDeviceHasTray(self):
      """
      Property target used to get the device-has-tray flag.
      """
      return self._deviceHasTray

   def _getDeviceCanEject(self):
      """
      Property target used to get the device-can-eject flag.
      """
      return self._deviceCanEject

   def _getRefreshMediaDelay(self):
      """
      Property target used to get the configured refresh media delay, in seconds.
      """
      return self._refreshMediaDelay

   def _getEjectDelay(self):
      """
      Property target used to get the configured eject delay, in seconds.
      """
      return self._ejectDelay

   device = property(_getDevice, None, None, doc="Filesystem device name for this writer.")
   scsiId = property(_getScsiId, None, None, doc="SCSI id for the device (saved for reference only).")
   hardwareId = property(_getHardwareId, None, None, doc="Hardware id for this writer (always the device path).")
   driveSpeed = property(_getDriveSpeed, None, None, doc="Speed at which the drive writes.")
   media = property(_getMedia, None, None, doc="Definition of media that is expected to be in the device.")
   deviceHasTray = property(_getDeviceHasTray, None, None, doc="Indicates whether the device has a media tray.")
   deviceCanEject = property(_getDeviceCanEject, None, None, doc="Indicates whether the device supports ejecting its media.")
   refreshMediaDelay = property(_getRefreshMediaDelay, None, None, doc="Refresh media delay, in seconds.")
   ejectDelay = property(_getEjectDelay, None, None, doc="Eject delay, in seconds.")


   #################################################
   # Methods related to device and media attributes
   #################################################
   def isRewritable(self):
      """Indicates whether the media is rewritable per configuration."""
      return self._media.rewritable

   def retrieveCapacity(self, entireDisc=False):
      """
      Retrieves capacity for the current media in terms of a C{MediaCapacity}
      object.

      If C{entireDisc} is passed in as C{True}, the capacity will be for the
      entire disc, as if it were to be rewritten from scratch.  The same will
      happen if the disc can't be read for some reason.  Otherwise, the capacity
      will be calculated by subtracting the sectors currently used on the disc,
      as reported by C{growisofs} itself.

      @param entireDisc: Indicates whether to return capacity for entire disc.
      @type entireDisc: Boolean true/false

      @return: C{MediaCapacity} object describing the capacity of the media.

      @raise ValueError: If there is a problem parsing the C{growisofs} output.
      @raise IOError: If the media could not be read for some reason.
      """
      sectorsUsed = 0.0
      if not entireDisc:
         sectorsUsed = self._retrieveSectorsUsed()
      sectorsAvailable = self._media.capacity - sectorsUsed  # both are in sectors
      bytesUsed = convertSize(sectorsUsed, UNIT_SECTORS, UNIT_BYTES)
      bytesAvailable = convertSize(sectorsAvailable, UNIT_SECTORS, UNIT_BYTES)
      return MediaCapacity(bytesUsed, bytesAvailable)


   #######################################################
   # Methods used for working with the internal ISO image
   #######################################################
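The used/available split performed by `retrieveCapacity()` is simple sector arithmetic. This sketch mirrors that calculation with a plain function, assuming 2048-byte sectors; the sample numbers below are illustrative (a 4.4 GB disc and the 1,401,056 used sectors implied by the `seek=87566` example elsewhere in this module):

```python
SECTOR_BYTES = 2048.0  # bytes per ISO-9660 sector

def split_capacity(capacity_sectors, sectors_used):
    """Mirror retrieveCapacity(): return (bytesUsed, bytesAvailable) for the media."""
    sectors_available = capacity_sectors - sectors_used
    return (sectors_used * SECTOR_BYTES, sectors_available * SECTOR_BYTES)
```

For example, `split_capacity(2306867.2, 1401056.0)` reports about 2.87 GB used out of the 4.4 GB total.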
   def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
      """
      Initializes the writer's associated ISO image.

      This method initializes the C{image} instance variable so that the caller
      can use the C{addImageEntry} method.  Once entries have been added, the
      C{writeImage} method can be called with no arguments.

      @param newDisc: Indicates whether the disc should be re-initialized
      @type newDisc: Boolean true/false

      @param tmpdir: Temporary directory to use if needed
      @type tmpdir: String representing a directory path on disk

      @param mediaLabel: Media label to be applied to the image, if any
      @type mediaLabel: String, no more than 25 characters long
      """
      self._image = _ImageProperties()
      self._image.newDisc = newDisc
      self._image.tmpdir = encodePath(tmpdir)
      self._image.mediaLabel = mediaLabel
      self._image.entries = {}  # mapping from path to graft point (if any)

   def addImageEntry(self, path, graftPoint):
      """
      Adds a filepath entry to the writer's associated ISO image.

      The contents of the filepath -- but not the path itself -- will be added
      to the image at the indicated graft point.  If you don't want to use a
      graft point, just pass C{None}.

      @note: Before calling this method, you must call L{initializeImage}.

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @raise ValueError: If initializeImage() was not previously called
      @raise ValueError: If the path is not a valid file or directory
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      if not os.path.exists(path):
         raise ValueError("Path [%s] does not exist." % path)
      self._image.entries[path] = graftPoint

   def setImageNewDisc(self, newDisc):
      """
      Resets (overrides) the newDisc flag on the internal image.
      @param newDisc: New disc flag to set
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      self._image.newDisc = newDisc

   def getEstimatedImageSize(self):
      """
      Gets the estimated size of the image associated with the writer.

      This is an estimate and is conservative.  The actual image could be as
      much as 450 blocks (sectors) smaller under some circumstances.

      @return: Estimated size of the image, in bytes.

      @raise IOError: If there is a problem calling C{mkisofs}.
      @raise ValueError: If initializeImage() was not previously called
      """
      if self._image is None:
         raise ValueError("Must call initializeImage() before using this method.")
      return DvdWriter._getEstimatedImageSize(self._image.entries)


   ######################################
   # Methods which expose device actions
   ######################################
   def openTray(self):
      """
      Opens the device's tray and leaves it open.

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.

      Starting with Debian wheezy on my backup hardware, I started seeing
      consistent problems with the eject command.  I couldn't tell whether
      these problems were due to the device management system or to the new
      kernel (3.2.0).  Initially, I saw simple eject failures, possibly because
      I was opening and closing the tray too quickly.  I worked around that
      behavior with the new ejectDelay flag.

      Later, I sometimes ran into issues after writing an image to a disc:
      eject would give errors like "unable to eject, last error: Inappropriate
      ioctl for device".  Various sources online (like Ubuntu bug #875543)
      suggested that the drive was being locked somehow, and that the
      workaround was to run 'eject -i off' to unlock it.  Sure enough, that
      fixed the problem for me, so now it's a normal error-handling strategy.

      @raise IOError: If there is an error talking to the device.
      """
      if self._deviceHasTray and self._deviceCanEject:
         command = resolveCommand(EJECT_COMMAND)
         args = [ self.device, ]
         result = executeCommand(command, args)[0]
         if result != 0:
            logger.debug("Eject failed; attempting kludge of unlocking the tray before retrying.")
            self.unlockTray()
            result = executeCommand(command, args)[0]
            if result != 0:
               raise IOError("Error (%d) executing eject command to open tray (failed even after unlocking tray)." % result)
            logger.debug("Kludge was apparently successful.")
         if self.ejectDelay is not None:
            logger.debug("Per configuration, sleeping %d seconds after opening tray.", self.ejectDelay)
            time.sleep(self.ejectDelay)

   def unlockTray(self):
      """
      Unlocks the device's tray via 'eject -i off'.
      @raise IOError: If there is an error talking to the device.
      """
      command = resolveCommand(EJECT_COMMAND)
      args = [ "-i", "off", self.device, ]
      result = executeCommand(command, args)[0]
      if result != 0:
         raise IOError("Error (%d) executing eject command to unlock tray." % result)

   def closeTray(self):
      """
      Closes the device's tray.

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.

      @raise IOError: If there is an error talking to the device.
      """
      if self._deviceHasTray and self._deviceCanEject:
         command = resolveCommand(EJECT_COMMAND)
         args = [ "-t", self.device, ]
         result = executeCommand(command, args)[0]
         if result != 0:
            raise IOError("Error (%d) executing eject command to close tray." % result)

   def refreshMedia(self):
      """
      Opens and then immediately closes the device's tray, to refresh the
      device's idea of the media.

      Sometimes, a device gets confused about the state of its media.  Often,
      all it takes to solve the problem is to eject the media and then
      immediately reload it.  (There are also configurable eject and refresh
      media delays which can be applied, for situations where this makes a
      difference.)

      This only works if the device has a tray and supports ejecting its media.
      We have no way to know if the tray is currently open or closed, so we
      just send the appropriate command and hope for the best.  If the device
      does not have a tray or does not support ejecting its media, then we do
      nothing.  The configured delays still apply, though.

      @raise IOError: If there is an error talking to the device.
      """
      self.openTray()
      self.closeTray()
      self.unlockTray()  # on some systems, writing a disc leaves the tray locked, yikes!
      if self.refreshMediaDelay is not None:
         logger.debug("Per configuration, sleeping %d seconds to stabilize media state.", self.refreshMediaDelay)
         time.sleep(self.refreshMediaDelay)
      logger.debug("Media refresh complete; hopefully media state is stable now.")
   def writeImage(self, imagePath=None, newDisc=False, writeMulti=True):
      """
      Writes an ISO image to the media in the device.

      If C{newDisc} is passed in as C{True}, we assume that the entire disc
      will be re-created from scratch.  Note that unlike C{CdWriter},
      C{DvdWriter} does not blank rewritable media before reusing it; however,
      C{growisofs} is called such that the media will be re-initialized as
      needed.

      If C{imagePath} is passed in as C{None}, then the existing image
      configured with C{initializeImage()} will be used.  Under these
      circumstances, the passed-in C{newDisc} flag will be ignored and the
      value passed in to C{initializeImage()} will apply instead.

      The C{writeMulti} argument is ignored.  It exists for compatibility with
      the Cedar Backup image writer interface.

      @note: The image size indicated in the log ("Image size will be...") is
      an estimate.  The estimate is conservative and is probably larger than
      the actual space that C{dvdwriter} will use.

      @param imagePath: Path to an ISO image on disk, or C{None} to use writer's image
      @type imagePath: String representing a path on disk

      @param newDisc: Indicates whether the disc should be re-initialized
      @type newDisc: Boolean true/false

      @param writeMulti: Unused
      @type writeMulti: Boolean true/false

      @raise ValueError: If the image path is not absolute.
      @raise ValueError: If some path cannot be encoded properly.
      @raise IOError: If the media could not be written to for some reason.
      @raise ValueError: If no image is passed in and initializeImage() was not previously called
      """
      if not writeMulti:
         logger.warn("writeMulti value of [%s] ignored.", writeMulti)
      if imagePath is None:
         if self._image is None:
            raise ValueError("Must call initializeImage() before using this method with no image path.")
         size = self.getEstimatedImageSize()
         logger.info("Image size will be %s (estimated).", displayBytes(size))
         available = self.retrieveCapacity(entireDisc=self._image.newDisc).bytesAvailable
         if size > available:
            logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available))
            raise IOError("Media does not contain enough capacity to store image.")
         self._writeImage(self._image.newDisc, None, self._image.entries, self._image.mediaLabel)
      else:
         if not os.path.isabs(imagePath):
            raise ValueError("Image path must be absolute.")
         imagePath = encodePath(imagePath)
         self._writeImage(newDisc, imagePath, None)


   ##################################################################
   # Utility methods for dealing with growisofs and dvd+rw-mediainfo
   ##################################################################
   def _writeImage(self, newDisc, imagePath, entries, mediaLabel=None):
      """
      Writes an image to disc using either an entries list or an ISO image on
      disk.

      Callers are assumed to have done validation on paths, etc. before calling
      this method.

      @param newDisc: Indicates whether the disc should be re-initialized
      @param imagePath: Path to an ISO image on disk, or C{None} to use C{entries}
      @param entries: Mapping from path to graft point, or C{None} to use C{imagePath}

      @raise IOError: If the media could not be written to for some reason.
      """
      command = resolveCommand(GROWISOFS_COMMAND)
      args = DvdWriter._buildWriteArgs(newDisc, self.hardwareId, self._driveSpeed, imagePath, entries, mediaLabel, dryRun=False)
      (result, output) = executeCommand(command, args, returnOutput=True)
      if result != 0:
         DvdWriter._searchForOverburn(output)  # throws own exception if overburn condition is found
         raise IOError("Error (%d) executing command to write disc." % result)
      self.refreshMedia()

   @staticmethod
   def _getEstimatedImageSize(entries):
      """
      Gets the estimated size of a set of image entries.

      This is implemented in terms of the C{IsoImage} class.  The returned
      value is calculated by adding a "fudge factor" to the value from
      C{IsoImage}.  This fudge factor was determined by experimentation and is
      conservative -- the actual image could be as much as 450 blocks smaller
      under some circumstances.

      @param entries: Dictionary mapping path to graft point.

      @return: Total estimated size of image, in bytes.

      @raise ValueError: If there are no entries in the dictionary
      @raise ValueError: If any path in the dictionary does not exist
      @raise IOError: If there is a problem calling C{mkisofs}.
      """
      fudgeFactor = convertSize(2500.0, UNIT_SECTORS, UNIT_BYTES)  # determined through experimentation
      if len(entries.keys()) == 0:
         raise ValueError("Must add at least one entry with addImageEntry().")
      image = IsoImage()
      for path in entries.keys():
         image.addEntry(path, entries[path], override=False, contentsOnly=True)
      estimatedSize = image.getEstimatedSize() + fudgeFactor
      return estimatedSize
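For a sense of scale, the 2500-sector fudge factor converts to a fixed overhead of a few megabytes on top of the `IsoImage` estimate (again assuming 2048-byte sectors):

```python
SECTOR_BYTES = 2048
FUDGE_SECTORS = 2500  # the experimentally-determined fudge factor above

fudge_bytes = FUDGE_SECTORS * SECTOR_BYTES  # fixed overhead added to the IsoImage estimate
```

That works out to 5,120,000 bytes, a bit under 5 MiB, which is small relative to a 4.4 GB disc but enough to keep the estimate conservative.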
   def _retrieveSectorsUsed(self):
      """
      Retrieves the number of sectors used on the current media.

      This is a little ugly.  We need to call growisofs in "dry-run" mode and
      parse some information from its output.  However, to do that, we need to
      create a dummy file that we can pass to the command -- and we have to
      make sure to remove it later.

      Once growisofs has been run, then we call C{_parseSectorsUsed} to parse
      the output and calculate the number of sectors used on the media.

      @return: Number of sectors used on the media
      """
      tempdir = tempfile.mkdtemp()
      try:
         entries = { tempdir: None }
         args = DvdWriter._buildWriteArgs(False, self.hardwareId, self.driveSpeed, None, entries, None, dryRun=True)
         command = resolveCommand(GROWISOFS_COMMAND)
         (result, output) = executeCommand(command, args, returnOutput=True)
         if result != 0:
            logger.debug("Error (%d) calling growisofs to read sectors used.", result)
            logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.")
            return 0.0
         sectorsUsed = DvdWriter._parseSectorsUsed(output)
         logger.debug("Determined sectors used as %s", sectorsUsed)
         return sectorsUsed
      finally:
         if os.path.exists(tempdir):
            try:
               os.rmdir(tempdir)
            except: pass
   @staticmethod
   def _parseSectorsUsed(output):
      """
      Parse sectors-used information out of C{growisofs} output.

      The first line of a growisofs run looks something like this::

         Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'

      Dmitry has determined that the seek value in this line gives us
      information about how much data has previously been written to the media.
      That value multiplied by 16 yields the number of sectors used.

      If the seek line cannot be found in the output, then sectors used of zero
      is assumed.

      @return: Sectors used on the media, as a floating point number.

      @raise ValueError: If the output cannot be parsed properly.
      """
      if output is not None:
         pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)")
         for line in output:
            match = pattern.search(line)
            if match is not None:
               try:
                  return float(match.group(4).strip()) * 16.0
               except ValueError:
                  raise ValueError("Unable to parse sectors used out of growisofs output.")
      logger.warn("Unable to read disc (might not be initialized); returning zero sectors used.")
      return 0.0
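The seek-line parsing can be reproduced stand-alone, using the same regular expression and the sample line from the docstring above (this sketch just drops the logging and the class wrapper):

```python
import re

def parse_sectors_used(lines):
    """Find the trailing seek= value in growisofs dry-run output; seek * 16 = sectors used."""
    pattern = re.compile(r"(^)(.*)(seek=)(.*)('$)")  # same pattern as _parseSectorsUsed
    for line in lines:
        match = pattern.search(line)
        if match is not None:
            return float(match.group(4).strip()) * 16.0
    return 0.0  # no seek line: assume an uninitialized disc

sample = ["Executing 'mkisofs -C 973744,1401056 -M /dev/fd/3 -r -graft-points "
          "music4/=music | builtin_dd of=/dev/cdrom obs=32k seek=87566'"]
```

Note the internal consistency in the sample: `seek=87566` times 16 is 1401056, the same number that appears as the second `-C` value (the next writable sector) in the mkisofs command line.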
   @staticmethod
   def _searchForOverburn(output):
      """
      Search for an "overburn" error message in C{growisofs} output.

      The C{growisofs} command returns a non-zero exit code and puts a message
      into the output -- even on a dry run -- if there is not enough space on
      the media.  This is called an "overburn" condition.

      The error message looks like this::

         :-( /dev/cdrom: 894048 blocks are free, 2033746 to be written!

      This method looks for the overburn error message anywhere in the output.
      If a matching error message is found, an C{IOError} exception is raised
      containing relevant information about the problem.  Otherwise, the method
      call returns normally.

      @param output: List of output lines to search, as from C{executeCommand}

      @raise IOError: If an overburn condition is found.
      """
      if output is None:
         return
      pattern = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)")
      for line in output:
         match = pattern.search(line)
         if match is not None:
            try:
               available = convertSize(float(match.group(4).strip()), UNIT_SECTORS, UNIT_BYTES)
               size = convertSize(float(match.group(6).strip()), UNIT_SECTORS, UNIT_BYTES)
               logger.error("Image [%s] does not fit in available capacity [%s].", displayBytes(size), displayBytes(available))
            except ValueError:
               logger.error("Image does not fit in available capacity (no useful capacity info available).")
            raise IOError("Media does not contain enough capacity to store image.")
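The overburn message matching can also be exercised in isolation. This sketch reuses the same regular expression but returns the parsed block counts instead of raising, so the extraction itself is easy to verify against the sample message from the docstring above:

```python
import re

# Same pattern as _searchForOverburn; groups 4 and 6 hold the block counts.
_OVERBURN = re.compile(r"(^)(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)")

def find_overburn(lines):
    """Return (free_blocks, needed_blocks) if an overburn message is found, else None."""
    for line in lines:
        match = _OVERBURN.search(line)
        if match is not None:
            return (float(match.group(4).strip()), float(match.group(6).strip()))
    return None
```

Given the sample line, it recovers 894,048 free blocks against 2,033,746 needed, which is exactly the information `_searchForOverburn` logs before raising `IOError`.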
   @staticmethod
   def _buildWriteArgs(newDisc, hardwareId, driveSpeed, imagePath, entries, mediaLabel=None, dryRun=False):
      """
      Builds a list of arguments to be passed to a C{growisofs} command.

      The arguments will either cause C{growisofs} to write the indicated image
      file to disc, or will pass C{growisofs} a list of directories or files
      that should be written to disc.

      If a new image is created, it will always be created with Rock Ridge
      extensions (-r).  A volume name will be applied (-V) if C{mediaLabel} is
      not C{None}.

      @param newDisc: Indicates whether the disc should be re-initialized
      @param hardwareId: Hardware id for the device
      @param driveSpeed: Speed at which the drive writes.
      @param imagePath: Path to an ISO image on disk, or C{None} to use C{entries}
      @param entries: Mapping from path to graft point, or C{None} to use C{imagePath}
      @param mediaLabel: Media label to set on the image, if any
      @param dryRun: Says whether to make this a dry run (for checking capacity)

      @note: If we write an existing image to disc, then the mediaLabel is
      ignored.  The media label is an attribute of the image, and should be set
      on the image when it is created.

      @note: We always pass the undocumented option C{-use-the-force-luke=tty}
      to growisofs.  Without this option, growisofs will refuse to execute
      certain actions when running from cron.  A good example is -Z, which
      happily overwrites an existing DVD from the command-line, but fails when
      run from cron.  It took a while to figure that out, since it worked every
      time I tested it by hand.  :(

      @return: List suitable for passing to L{util.executeCommand} as C{args}.

      @raise ValueError: If caller does not pass one or the other of imagePath or entries.
      """
      args = []
      if (imagePath is None and entries is None) or (imagePath is not None and entries is not None):
         raise ValueError("Must use either imagePath or entries.")
      args.append("-use-the-force-luke=tty")  # tell growisofs to let us run from cron
      if dryRun:
         args.append("-dry-run")
      if driveSpeed is not None:
         args.append("-speed=%d" % driveSpeed)
      if newDisc:
         args.append("-Z")
      else:
         args.append("-M")
      if imagePath is not None:
         args.append("%s=%s" % (hardwareId, imagePath))
      else:
         args.append(hardwareId)
         if mediaLabel is not None:
            args.append("-V")
            args.append(mediaLabel)
         args.append("-r")  # Rock Ridge extensions with sane ownership and permissions
         args.append("-graft-points")
         keys = entries.keys()
         keys.sort()  # just so we get consistent results
         for key in keys:
            # Same syntax as when calling mkisofs in IsoImage
            if entries[key] is None:
               args.append(key)
            else:
               args.append("%s/=%s" % (entries[key].strip("/"), key))
      return args
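The overburn-detection logic above can be exercised in isolation.  The sketch below mirrors the regex and group extraction (minus the sector-to-byte conversion); the function name is illustrative and is not part of the module:

```python
import re

# Pattern mirroring the one above; groups 3 and 5 here correspond to the
# free-block and to-be-written block counts.
PATTERN = re.compile(r"(:-[(])(\s*.*:\s*)(.* )(blocks are free, )(.* )(to be written!)")

def parse_overburn(line):
    """Return (free, needed) block counts if line is an overburn message, else None."""
    match = PATTERN.search(line)
    if match is None:
        return None
    return (int(match.group(3).strip()), int(match.group(5).strip()))
```

For the sample message shown in the docstring, this yields 894048 free blocks against 2033746 to be written, which is exactly the condition that triggers the C{IOError}.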
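The argument-building rules can also be sketched as a simplified standalone function (names are illustrative; the real method is the static class method shown above):

```python
def build_write_args(new_disc, hardware_id, drive_speed, image_path, entries,
                     media_label=None, dry_run=False):
    """Simplified re-implementation of the growisofs argument rules, for illustration."""
    if (image_path is None) == (entries is None):  # exactly one must be provided
        raise ValueError("Must use either image_path or entries.")
    args = ["-use-the-force-luke=tty"]       # allow execution from cron
    if dry_run:
        args.append("-dry-run")
    if drive_speed is not None:
        args.append("-speed=%d" % drive_speed)
    args.append("-Z" if new_disc else "-M")  # initialize vs. append to media
    if image_path is not None:
        args.append("%s=%s" % (hardware_id, image_path))
    else:
        args.append(hardware_id)
        if media_label is not None:
            args.extend(["-V", media_label])
        args.extend(["-r", "-graft-points"])
        for key in sorted(entries):          # sorted for consistent results
            if entries[key] is None:
                args.append(key)
            else:
                args.append("%s/=%s" % (entries[key].strip("/"), key))
    return args
```

For example, writing an existing image to a fresh disc produces `["-use-the-force-luke=tty", "-Z", "/dev/cdrom=/tmp/image.iso"]` (plus any speed flag), while the entries form appends the graft-point arguments instead.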

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.purge-module.html
    Package CedarBackup2 :: Package actions :: Module purge

    Module purge


    Implements the standard 'purge' action.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions

executePurge(configPath, options, config)
Executes the purge backup action.

Variables

logger = logging.getLogger("CedarBackup2.log.actions.purge")
__package__ = 'CedarBackup2.actions'

Function Details

    executePurge(configPath, options, config)


    Executes the purge backup action.

    For each configured directory, we create a purge item list, remove from the list anything that's younger than the configured retain days value, and then purge from the filesystem what's left.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
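As a rough illustration of the retain-days rule described above (the actual action works through a purge item list rather than this hypothetical standalone helper):

```python
import os
import time

def purge_dir(directory, retain_days, now=None):
    """Remove regular files older than retain_days from directory (non-recursive sketch)."""
    now = time.time() if now is None else now
    cutoff = now - (retain_days * 24 * 60 * 60)   # anything modified before this is purged
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(path)
    return removed
```

A file last modified ten days ago is removed under a seven-day retain policy, while anything younger than the cutoff is left alone.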

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.RemotePeer-class.html
    Package CedarBackup2 :: Module config :: Class RemotePeer

    Class RemotePeer


    object --+
             |
            RemotePeer
    

    Class representing a Cedar Backup peer.

    The following restrictions exist on data in this class:

    • The peer name must be a non-empty string.
    • The collect directory must be an absolute path.
    • The remote user must be a non-empty string.
    • The rcp command must be a non-empty string.
    • The rsh command must be a non-empty string.
    • The cback command must be a non-empty string.
    • Any managed action name must be a non-empty string matching ACTION_NAME_REGEX
    • The ignore failure mode must be one of the values in VALID_FAILURE_MODES.
Instance Methods

__init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None)
Constructor for the RemotePeer class.

__repr__(self)
Official string representation for class instance.

__str__(self)
Informal string representation for class instance.

__cmp__(self, other)
Definition of equals operator for this class.

_setName(self, value)
Property target used to set the peer name.

_getName(self)
Property target used to get the peer name.

_setCollectDir(self, value)
Property target used to set the collect directory.

_getCollectDir(self)
Property target used to get the collect directory.

_setRemoteUser(self, value)
Property target used to set the remote user.

_getRemoteUser(self)
Property target used to get the remote user.

_setRcpCommand(self, value)
Property target used to set the rcp command.

_getRcpCommand(self)
Property target used to get the rcp command.

_setRshCommand(self, value)
Property target used to set the rsh command.

_getRshCommand(self)
Property target used to get the rsh command.

_setCbackCommand(self, value)
Property target used to set the cback command.

_getCbackCommand(self)
Property target used to get the cback command.

_setManaged(self, value)
Property target used to set the managed flag.

_getManaged(self)
Property target used to get the managed flag.

_setManagedActions(self, value)
Property target used to set the managed actions list.

_getManagedActions(self)
Property target used to get the managed actions list.

_setIgnoreFailureMode(self, value)
Property target used to set the ignoreFailure mode.

_getIgnoreFailureMode(self)
Property target used to get the ignoreFailure mode.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      name
    Name of the peer, must be a valid hostname.
      collectDir
    Collect directory to stage files from on peer.
      remoteUser
    Name of backup user on remote peer.
      rcpCommand
    Overridden rcp-compatible copy command for peer.
      rshCommand
    Overridden rsh-compatible remote shell command for peer.
      cbackCommand
    Overridden cback-compatible command to use on remote peer.
      managed
    Indicates whether this is a managed peer.
      managedActions
    Overridden set of actions that are managed on the peer.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name=None, collectDir=None, remoteUser=None, rcpCommand=None, rshCommand=None, cbackCommand=None, managed=False, managedActions=None, ignoreFailureMode=None)
    (Constructor)


    Constructor for the RemotePeer class.

    Parameters:
    • name - Name of the peer, must be a valid hostname.
    • collectDir - Collect directory to stage files from on peer.
    • remoteUser - Name of backup user on remote peer.
    • rcpCommand - Overridden rcp-compatible copy command for peer.
    • rshCommand - Overridden rsh-compatible remote shell command for peer.
    • cbackCommand - Overridden cback-compatible command to use on remote peer.
    • managed - Indicates whether this is a managed peer.
    • managedActions - Overridden set of actions that are managed on the peer.
    • ignoreFailureMode - Ignore failure mode for peer.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setName(self, value)


    Property target used to set the peer name. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCollectDir(self, value)


    Property target used to set the collect directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRemoteUser(self, value)


    Property target used to set the remote user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)


    Property target used to set the rcp command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)


    Property target used to set the rsh command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)


    Property target used to set the cback command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setManaged(self, value)


    Property target used to set the managed flag. No validations, but we normalize the value to True or False.

    _setManagedActions(self, value)


    Property target used to set the managed actions list. Elements do not have to exist on disk at the time of assignment.

    _setIgnoreFailureMode(self, value)


    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    name

    Name of the peer, must be a valid hostname.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Collect directory to stage files from on peer.

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    remoteUser

    Name of backup user on remote peer.

    Get Method:
    _getRemoteUser(self) - Property target used to get the remote user.
    Set Method:
    _setRemoteUser(self, value) - Property target used to set the remote user.

    rcpCommand

    Overridden rcp-compatible copy command for peer.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target used to set the rcp command.

    rshCommand

    Overridden rsh-compatible remote shell command for peer.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target used to set the rsh command.

    cbackCommand

    Overridden cback-compatible command to use on remote peer.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target used to set the cback command.

    managed

    Indicates whether this is a managed peer.

    Get Method:
    _getManaged(self) - Property target used to get the managed flag.
    Set Method:
    _setManaged(self, value) - Property target used to set the managed flag.

    managedActions

    Overridden set of actions that are managed on the peer.

    Get Method:
    _getManagedActions(self) - Property target used to get the managed actions list.
    Set Method:
    _setManagedActions(self, value) - Property target used to set the managed actions list.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.capacity-pysrc.html
    Package CedarBackup2 :: Package extend :: Module capacity

    Source Code for Module CedarBackup2.extend.capacity

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2008,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides an extension to check remaining media capacity.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides an extension to check remaining media capacity.

Some users have asked for advance warning that their media is beginning to fill
up.  This is an extension that checks the current capacity of the media in the
writer, and prints a warning if the media is more than X% full, or has fewer
than X bytes of capacity remaining.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

########################################################################
# Imported modules
########################################################################

# System modules
import logging

# Cedar Backup modules
from CedarBackup2.util import displayBytes
from CedarBackup2.config import ByteQuantity, readByteQuantity, addByteQuantityNode
from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode
from CedarBackup2.xmlutil import readFirstChild, readString
from CedarBackup2.actions.util import createWriter, checkMediaState


########################################################################
# Module-wide constants and variables
########################################################################

logger = logging.getLogger("CedarBackup2.log.extend.capacity")
    
########################################################################
# Percentage class definition
########################################################################

class PercentageQuantity(object):

   """
   Class representing a percentage quantity.

   The percentage is maintained internally as a string so that issues of
   precision can be avoided.  It really isn't possible to store a floating
   point number here while being able to losslessly translate back and forth
   between XML and object representations.  (Perhaps the Python 2.4 Decimal
   class would have been an option, but I originally wanted to stay compatible
   with Python 2.3.)

   Even though the quantity is maintained as a string, the string must be a
   valid positive floating point number.  Technically, any floating point
   string format supported by Python is allowable.  However, it does not make
   sense to have a negative percentage in this context.

   @sort: __init__, __repr__, __str__, __cmp__, quantity
   """

   def __init__(self, quantity=None):
      """
      Constructor for the C{PercentageQuantity} class.
      @param quantity: Percentage quantity, as a string (i.e. "99.9" or "12")
      @raise ValueError: If the quantity value is invalid.
      """
      self._quantity = None
      self.quantity = quantity

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "PercentageQuantity(%s)" % (self.quantity)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.quantity != other.quantity:
         if self.quantity < other.quantity:
            return -1
         else:
            return 1
      return 0

   def _setQuantity(self, value):
      """
      Property target used to set the quantity.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      @raise ValueError: If the value is not a valid floating point number
      @raise ValueError: If the value is less than zero
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("Percentage must be a non-empty string.")
         floatValue = float(value)
         if floatValue < 0.0 or floatValue > 100.0:
            raise ValueError("Percentage must be a positive value from 0.0 to 100.0")
      self._quantity = value  # keep around string

   def _getQuantity(self):
      """
      Property target used to get the quantity.
      """
      return self._quantity

   def _getPercentage(self):
      """
      Property target used to get the quantity as a floating point number.
      If there is no quantity set, then a value of 0.0 is returned.
      """
      if self.quantity is not None:
         return float(self.quantity)
      return 0.0

   quantity = property(_getQuantity, _setQuantity, None, doc="Percentage value, as a string")
   percentage = property(_getPercentage, None, None, "Percentage value, as a floating point number.")
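The idea behind keeping the quantity as a string -- so the value round-trips through XML losslessly, converting to float only on demand -- can be condensed into this sketch (class name shortened for illustration; it is not part of the module):

```python
class Percentage(object):
    # Hypothetical condensed version: preserve the original string so that
    # emitting it back to XML reproduces exactly what was read in.
    def __init__(self, quantity):
        value = float(quantity)              # validates the string format
        if not 0.0 <= value <= 100.0:
            raise ValueError("Percentage must be between 0.0 and 100.0")
        self.quantity = quantity             # original string, kept verbatim

    @property
    def percentage(self):
        # Numeric view, derived on demand from the stored string.
        return float(self.quantity)
```

The stored `quantity` string is what gets written back to XML, so formatting quirks like `"99.90"` versus `"99.9"` survive a read/write cycle unchanged.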
########################################################################
# CapacityConfig class definition
########################################################################

class CapacityConfig(object):

   """
   Class representing capacity configuration.

   The following restrictions exist on data in this class:

      - The maximum percentage utilized must be a PercentageQuantity
      - The minimum bytes remaining must be a ByteQuantity

   @sort: __init__, __repr__, __str__, __cmp__, maxPercentage, minBytes
   """

   def __init__(self, maxPercentage=None, minBytes=None):
      """
      Constructor for the C{CapacityConfig} class.

      @param maxPercentage: Maximum percentage of the media that may be utilized
      @param minBytes: Minimum number of free bytes that must be available
      """
      self._maxPercentage = None
      self._minBytes = None
      self.maxPercentage = maxPercentage
      self.minBytes = minBytes

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "CapacityConfig(%s, %s)" % (self.maxPercentage, self.minBytes)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.maxPercentage != other.maxPercentage:
         if self.maxPercentage < other.maxPercentage:
            return -1
         else:
            return 1
      if self.minBytes != other.minBytes:
         if self.minBytes < other.minBytes:
            return -1
         else:
            return 1
      return 0

   def _setMaxPercentage(self, value):
      """
      Property target used to set the maxPercentage value.
      If not C{None}, the value must be a C{PercentageQuantity} object.
      @raise ValueError: If the value is not a C{PercentageQuantity}
      """
      if value is None:
         self._maxPercentage = None
      else:
         if not isinstance(value, PercentageQuantity):
            raise ValueError("Value must be a C{PercentageQuantity} object.")
         self._maxPercentage = value

   def _getMaxPercentage(self):
      """
      Property target used to get the maxPercentage value.
      """
      return self._maxPercentage

   def _setMinBytes(self, value):
      """
      Property target used to set the bytes utilized value.
      If not C{None}, the value must be a C{ByteQuantity} object.
      @raise ValueError: If the value is not a C{ByteQuantity}
      """
      if value is None:
         self._minBytes = None
      else:
         if not isinstance(value, ByteQuantity):
            raise ValueError("Value must be a C{ByteQuantity} object.")
         self._minBytes = value

   def _getMinBytes(self):
      """
      Property target used to get the bytes remaining value.
      """
      return self._minBytes

   maxPercentage = property(_getMaxPercentage, _setMaxPercentage, None, "Maximum percentage of the media that may be utilized.")
   minBytes = property(_getMinBytes, _setMinBytes, None, "Minimum number of free bytes that must be available.")
########################################################################
# LocalConfig class definition
########################################################################

class LocalConfig(object):

   """
   Class representing this extension's configuration document.

   This is not a general-purpose configuration object like the main Cedar
   Backup configuration object.  Instead, it just knows how to parse and emit
   specific configuration values to this extension.  Third parties who need to
   read and write configuration related to this extension should access it
   through the constructor, C{validate} and C{addConfig} methods.

   @note: Lists within this class are "unordered" for equality comparisons.

   @sort: __init__, __repr__, __str__, __cmp__, capacity, validate, addConfig
   """

   def __init__(self, xmlData=None, xmlPath=None, validate=True):
      """
      Initializes a configuration object.

      If you initialize the object without passing either C{xmlData} or
      C{xmlPath} then configuration will be empty and will be invalid until it
      is filled in properly.

      No reference to the original XML data or original path is saved off by
      this class.  Once the data has been parsed (successfully or not) this
      original information is discarded.

      Unless the C{validate} argument is C{False}, the L{LocalConfig.validate}
      method will be called (with its default arguments) against configuration
      after successfully parsing any passed-in XML.  Keep in mind that even if
      C{validate} is C{False}, it might not be possible to parse the passed-in
      XML document if lower-level validations fail.

      @note: It is strongly suggested that the C{validate} option always be set
      to C{True} (the default) unless there is a specific need to read in
      invalid configuration from disk.

      @param xmlData: XML data representing configuration.
      @type xmlData: String data.

      @param xmlPath: Path to an XML file on disk.
      @type xmlPath: Absolute path to a file on disk.

      @param validate: Validate the document after parsing it.
      @type validate: Boolean true/false.

      @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in.
      @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed.
      @raise ValueError: If the parsed configuration document is not valid.
      """
      self._capacity = None
      self.capacity = None
      if xmlData is not None and xmlPath is not None:
         raise ValueError("Use either xmlData or xmlPath, but not both.")
      if xmlData is not None:
         self._parseXmlData(xmlData)
         if validate:
            self.validate()
      elif xmlPath is not None:
         xmlData = open(xmlPath).read()
         self._parseXmlData(xmlData)
         if validate:
            self.validate()

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "LocalConfig(%s)" % (self.capacity)

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of equals operator for this class.
      Lists within this class are "unordered" for equality comparisons.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      if other is None:
         return 1
      if self.capacity != other.capacity:
         if self.capacity < other.capacity:
            return -1
         else:
            return 1
      return 0

   def _setCapacity(self, value):
      """
      Property target used to set the capacity configuration value.
      If not C{None}, the value must be a C{CapacityConfig} object.
      @raise ValueError: If the value is not a C{CapacityConfig}
      """
      if value is None:
         self._capacity = None
      else:
         if not isinstance(value, CapacityConfig):
            raise ValueError("Value must be a C{CapacityConfig} object.")
         self._capacity = value

   def _getCapacity(self):
      """
      Property target used to get the capacity configuration value.
      """
      return self._capacity

   capacity = property(_getCapacity, _setCapacity, None, "Capacity configuration in terms of a C{CapacityConfig} object.")

   def validate(self):
      """
      Validates configuration represented by the object.
      There must be either a percentage, or a byte capacity, but not both.
      @raise ValueError: If one of the validations fails.
      """
      if self.capacity is None:
         raise ValueError("Capacity section is required.")
      if self.capacity.maxPercentage is None and self.capacity.minBytes is None:
         raise ValueError("Must provide either max percentage or min bytes.")
      if self.capacity.maxPercentage is not None and self.capacity.minBytes is not None:
         raise ValueError("Must provide either max percentage or min bytes, but not both.")

   def addConfig(self, xmlDom, parentNode):
      """
      Adds a <capacity> configuration section as the next child of a parent.

      Third parties should use this function to write configuration related to
      this extension.

      We add the following fields to the document::

         maxPercentage  //cb_config/capacity/max_percentage
         minBytes       //cb_config/capacity/min_bytes

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent that the section should be appended to.
      """
      if self.capacity is not None:
         sectionNode = addContainerNode(xmlDom, parentNode, "capacity")
         LocalConfig._addPercentageQuantity(xmlDom, sectionNode, "max_percentage", self.capacity.maxPercentage)
         if self.capacity.minBytes is not None:  # because utility function fills in empty section on None
            addByteQuantityNode(xmlDom, sectionNode, "min_bytes", self.capacity.minBytes)

   def _parseXmlData(self, xmlData):
      """
      Internal method to parse an XML string into the object.

      This method parses the XML document into a DOM tree (C{xmlDom}) and then
      calls a static method to parse the capacity configuration section.

      @param xmlData: XML data to be parsed
      @type xmlData: String data

      @raise ValueError: If the XML cannot be successfully parsed.
      """
      (xmlDom, parentNode) = createInputDom(xmlData)
      self._capacity = LocalConfig._parseCapacity(parentNode)

   @staticmethod
   def _parseCapacity(parentNode):
      """
      Parses a capacity configuration section.

      We read the following fields::

         maxPercentage  //cb_config/capacity/max_percentage
         minBytes       //cb_config/capacity/min_bytes

      @param parentNode: Parent node to search beneath.

      @return: C{CapacityConfig} object or C{None} if the section does not exist.
      @raise ValueError: If some filled-in value is invalid.
      """
      capacity = None
      section = readFirstChild(parentNode, "capacity")
      if section is not None:
         capacity = CapacityConfig()
         capacity.maxPercentage = LocalConfig._readPercentageQuantity(section, "max_percentage")
         capacity.minBytes = readByteQuantity(section, "min_bytes")
      return capacity

   @staticmethod
   def _readPercentageQuantity(parent, name):
      """
      Read a percentage quantity value from an XML document.
      @param parent: Parent node to search beneath.
      @param name: Name of node to search for.
      @return: Percentage quantity parsed from XML document
      """
      quantity = readString(parent, name)
      if quantity is None:
         return None
      return PercentageQuantity(quantity)

   @staticmethod
   def _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity):
      """
      Adds a text node as the next child of a parent, to contain a percentage quantity.

      If the C{percentageQuantity} is None, then no node will be created.

      @param xmlDom: DOM tree as from C{impl.createDocument()}.
      @param parentNode: Parent node to create child for.
      @param nodeName: Name of the new container node.
      @param percentageQuantity: PercentageQuantity object to put into the XML document

      @return: Reference to the newly-created node.
      """
      if percentageQuantity is not None:
         addStringNode(xmlDom, parentNode, nodeName, percentageQuantity.quantity)
    490 491 ######################################################################## 492 # Public functions 493 ######################################################################## 494 495 ########################### 496 # executeAction() function 497 ########################### 498 499 -def executeAction(configPath, options, config):
    500 """ 501 Executes the capacity action. 502 503 @param configPath: Path to configuration file on disk. 504 @type configPath: String representing a path on disk. 505 506 @param options: Program command-line options. 507 @type options: Options object. 508 509 @param config: Program configuration. 510 @type config: Config object. 511 512 @raise ValueError: Under many generic error conditions 513 @raise IOError: If there are I/O problems reading or writing files 514 """ 515 logger.debug("Executing capacity extended action.") 516 if config.options is None or config.store is None: 517 raise ValueError("Cedar Backup configuration is not properly filled in.") 518 local = LocalConfig(xmlPath=configPath) 519 if config.store.checkMedia: 520 checkMediaState(config.store) # raises exception if media is not initialized 521 capacity = createWriter(config).retrieveCapacity() 522 logger.debug("Media capacity: %s", capacity) 523 if local.capacity.maxPercentage is not None: 524 if capacity.utilized > local.capacity.maxPercentage.percentage: 525 logger.error("Media has reached capacity limit of %s%%: %.2f%% utilized", 526 local.capacity.maxPercentage.quantity, capacity.utilized) 527 else: 528 if capacity.bytesAvailable < local.capacity.minBytes: 529 logger.error("Media has reached capacity limit of %s: only %s available", 530 local.capacity.minBytes, displayBytes(capacity.bytesAvailable)) 531 logger.info("Executed the capacity extended action successfully.")
    532
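The threshold logic in executeAction() above reduces to a simple comparison; here is a minimal self-contained sketch (simplified — the real code wraps these values in PercentageQuantity/ByteQuantity objects, and the max-percentage limit takes precedence over the minimum-bytes limit exactly as shown in the listing):

```python
def over_capacity(utilized_pct, bytes_available, max_percentage=None, min_bytes=None):
    """Return True when media utilization crosses the configured limit.

    Mirrors the executeAction() logic above: when a max-percentage limit
    is configured it is checked; otherwise the minimum-bytes-available
    limit applies.  (Simplified sketch using plain numbers.)
    """
    if max_percentage is not None:
        return utilized_pct > max_percentage
    if min_bytes is not None:
        return bytes_available < min_bytes
    return False
```

With a 95% limit, 97.5% utilization triggers the warning; with a 100 MB minimum-free limit, 10 MB available triggers it.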

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.encrypt-pysrc.html0000664000175000017500000052163412642035644027527 0ustar pronovicpronovic00000000000000 CedarBackup2.extend.encrypt

    Source Code for Module CedarBackup2.extend.encrypt

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : Provides an extension to encrypt staging directories. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to encrypt staging directories. 
     40   
     41  When this extension is executed, all backed-up files in the configured Cedar 
     42  Backup staging directory will be encrypted using gpg.  Any directory which has 
     43  already been encrypted (as indicated by the C{cback.encrypt} file) will be 
     44  ignored. 
     45   
     46  This extension requires a new configuration section <encrypt> and is intended 
     47  to be run immediately after the standard stage action or immediately before the 
     48  standard store action.  Aside from its own configuration, it requires the 
     49  options and staging configuration sections in the standard Cedar Backup 
     50  configuration file. 
     51   
     52  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     53  """ 
     54   
     55  ######################################################################## 
     56  # Imported modules 
     57  ######################################################################## 
     58   
     59  # System modules 
     60  import os 
     61  import logging 
     62   
     63  # Cedar Backup modules 
     64  from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership 
     65  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode 
     66  from CedarBackup2.xmlutil import readFirstChild, readString 
     67  from CedarBackup2.actions.util import findDailyDirs, writeIndicatorFile, getBackupFiles 
     68   
     69   
     70  ######################################################################## 
     71  # Module-wide constants and variables 
     72  ######################################################################## 
     73   
     74  logger = logging.getLogger("CedarBackup2.log.extend.encrypt") 
     75   
     76  GPG_COMMAND = [ "gpg", ] 
     77  VALID_ENCRYPT_MODES = [ "gpg", ] 
     78  ENCRYPT_INDICATOR = "cback.encrypt" 
    
    79 80 81 ######################################################################## 82 # EncryptConfig class definition 83 ######################################################################## 84 85 -class EncryptConfig(object):
    86 87 """ 88 Class representing encrypt configuration. 89 90 Encrypt configuration is used for encrypting staging directories. 91 92 The following restrictions exist on data in this class: 93 94 - The encrypt mode must be one of the values in L{VALID_ENCRYPT_MODES} 95 - The encrypt target value must be a non-empty string 96 97 @sort: __init__, __repr__, __str__, __cmp__, encryptMode, encryptTarget 98 """ 99
    100 - def __init__(self, encryptMode=None, encryptTarget=None):
    101 """ 102 Constructor for the C{EncryptConfig} class. 103 104 @param encryptMode: Encryption mode 105 @param encryptTarget: Encryption target (for instance, GPG recipient) 106 107 @raise ValueError: If one of the values is invalid. 108 """ 109 self._encryptMode = None 110 self._encryptTarget = None 111 self.encryptMode = encryptMode 112 self.encryptTarget = encryptTarget
    113
    114 - def __repr__(self):
    115 """ 116 Official string representation for class instance. 117 """ 118 return "EncryptConfig(%s, %s)" % (self.encryptMode, self.encryptTarget)
    119
    120 - def __str__(self):
    121 """ 122 Informal string representation for class instance. 123 """ 124 return self.__repr__()
    125
    126 - def __cmp__(self, other):
    127 """ 128 Definition of equals operator for this class. 129 Lists within this class are "unordered" for equality comparisons. 130 @param other: Other object to compare to. 131 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 132 """ 133 if other is None: 134 return 1 135 if self.encryptMode != other.encryptMode: 136 if self.encryptMode < other.encryptMode: 137 return -1 138 else: 139 return 1 140 if self.encryptTarget != other.encryptTarget: 141 if self.encryptTarget < other.encryptTarget: 142 return -1 143 else: 144 return 1 145 return 0
    146
    147 - def _setEncryptMode(self, value):
    148 """ 149 Property target used to set the encrypt mode. 150 If not C{None}, the mode must be one of the values in L{VALID_ENCRYPT_MODES}. 151 @raise ValueError: If the value is not valid. 152 """ 153 if value is not None: 154 if value not in VALID_ENCRYPT_MODES: 155 raise ValueError("Encrypt mode must be one of %s." % VALID_ENCRYPT_MODES) 156 self._encryptMode = value
    157
    158 - def _getEncryptMode(self):
    159 """ 160 Property target used to get the encrypt mode. 161 """ 162 return self._encryptMode
    163
    164 - def _setEncryptTarget(self, value):
    165 """ 166 Property target used to set the encrypt target. 167 """ 168 if value is not None: 169 if len(value) < 1: 170 raise ValueError("Encrypt target must be non-empty string.") 171 self._encryptTarget = value
    172
    173 - def _getEncryptTarget(self):
    174 """ 175 Property target used to get the encrypt target. 176 """ 177 return self._encryptTarget
    178 179 encryptMode = property(_getEncryptMode, _setEncryptMode, None, doc="Encrypt mode.") 180 encryptTarget = property(_getEncryptTarget, _setEncryptTarget, None, doc="Encrypt target (i.e. GPG recipient).")
    181
    182 183 ######################################################################## 184 # LocalConfig class definition 185 ######################################################################## 186 187 -class LocalConfig(object):
    188 189 """ 190 Class representing this extension's configuration document. 191 192 This is not a general-purpose configuration object like the main Cedar 193 Backup configuration object. Instead, it just knows how to parse and emit 194 encrypt-specific configuration values. Third parties who need to read and 195 write configuration related to this extension should access it through the 196 constructor, C{validate} and C{addConfig} methods. 197 198 @note: Lists within this class are "unordered" for equality comparisons. 199 200 @sort: __init__, __repr__, __str__, __cmp__, encrypt, validate, addConfig 201 """ 202
    203 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    204 """ 205 Initializes a configuration object. 206 207 If you initialize the object without passing either C{xmlData} or 208 C{xmlPath} then configuration will be empty and will be invalid until it 209 is filled in properly. 210 211 No reference to the original XML data or original path is saved off by 212 this class. Once the data has been parsed (successfully or not) this 213 original information is discarded. 214 215 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 216 method will be called (with its default arguments) against configuration 217 after successfully parsing any passed-in XML. Keep in mind that even if 218 C{validate} is C{False}, it might not be possible to parse the passed-in 219 XML document if lower-level validations fail. 220 221 @note: It is strongly suggested that the C{validate} option always be set 222 to C{True} (the default) unless there is a specific need to read in 223 invalid configuration from disk. 224 225 @param xmlData: XML data representing configuration. 226 @type xmlData: String data. 227 228 @param xmlPath: Path to an XML file on disk. 229 @type xmlPath: Absolute path to a file on disk. 230 231 @param validate: Validate the document after parsing it. 232 @type validate: Boolean true/false. 233 234 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 235 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 236 @raise ValueError: If the parsed configuration document is not valid. 237 """ 238 self._encrypt = None 239 self.encrypt = None 240 if xmlData is not None and xmlPath is not None: 241 raise ValueError("Use either xmlData or xmlPath, but not both.") 242 if xmlData is not None: 243 self._parseXmlData(xmlData) 244 if validate: 245 self.validate() 246 elif xmlPath is not None: 247 xmlData = open(xmlPath).read() 248 self._parseXmlData(xmlData) 249 if validate: 250 self.validate()
    251
    252 - def __repr__(self):
    253 """ 254 Official string representation for class instance. 255 """ 256 return "LocalConfig(%s)" % (self.encrypt)
    257
    258 - def __str__(self):
    259 """ 260 Informal string representation for class instance. 261 """ 262 return self.__repr__()
    263
    264 - def __cmp__(self, other):
    265 """ 266 Definition of equals operator for this class. 267 Lists within this class are "unordered" for equality comparisons. 268 @param other: Other object to compare to. 269 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 270 """ 271 if other is None: 272 return 1 273 if self.encrypt != other.encrypt: 274 if self.encrypt < other.encrypt: 275 return -1 276 else: 277 return 1 278 return 0
    279
    280 - def _setEncrypt(self, value):
    281 """ 282 Property target used to set the encrypt configuration value. 283 If not C{None}, the value must be a C{EncryptConfig} object. 284 @raise ValueError: If the value is not a C{EncryptConfig} 285 """ 286 if value is None: 287 self._encrypt = None 288 else: 289 if not isinstance(value, EncryptConfig): 290 raise ValueError("Value must be a C{EncryptConfig} object.") 291 self._encrypt = value
    292
    293 - def _getEncrypt(self):
    294 """ 295 Property target used to get the encrypt configuration value. 296 """ 297 return self._encrypt
    298 299 encrypt = property(_getEncrypt, _setEncrypt, None, "Encrypt configuration in terms of a C{EncryptConfig} object.") 300
    301 - def validate(self):
    302 """ 303 Validates configuration represented by the object. 304 305 Encrypt configuration must be filled in. Within that, both the encrypt 306 mode and encrypt target must be filled in. 307 308 @raise ValueError: If one of the validations fails. 309 """ 310 if self.encrypt is None: 311 raise ValueError("Encrypt section is required.") 312 if self.encrypt.encryptMode is None: 313 raise ValueError("Encrypt mode must be set.") 314 if self.encrypt.encryptTarget is None: 315 raise ValueError("Encrypt target must be set.")
    316
    317 - def addConfig(self, xmlDom, parentNode):
    318 """ 319 Adds an <encrypt> configuration section as the next child of a parent. 320 321 Third parties should use this function to write configuration related to 322 this extension. 323 324 We add the following fields to the document:: 325 326 encryptMode //cb_config/encrypt/encrypt_mode 327 encryptTarget //cb_config/encrypt/encrypt_target 328 329 @param xmlDom: DOM tree as from C{impl.createDocument()}. 330 @param parentNode: Parent that the section should be appended to. 331 """ 332 if self.encrypt is not None: 333 sectionNode = addContainerNode(xmlDom, parentNode, "encrypt") 334 addStringNode(xmlDom, sectionNode, "encrypt_mode", self.encrypt.encryptMode) 335 addStringNode(xmlDom, sectionNode, "encrypt_target", self.encrypt.encryptTarget)
    336
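Given the field paths documented in addConfig() above, the corresponding section of the Cedar Backup configuration file looks like the following (the recipient value is a placeholder):

```xml
<encrypt>
   <encrypt_mode>gpg</encrypt_mode>
   <encrypt_target>backup@example.com</encrypt_target>
</encrypt>
```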
    337 - def _parseXmlData(self, xmlData):
    338 """ 339 Internal method to parse an XML string into the object. 340 341 This method parses the XML document into a DOM tree (C{xmlDom}) and then 342 calls a static method to parse the encrypt configuration section. 343 344 @param xmlData: XML data to be parsed 345 @type xmlData: String data 346 347 @raise ValueError: If the XML cannot be successfully parsed. 348 """ 349 (xmlDom, parentNode) = createInputDom(xmlData) 350 self._encrypt = LocalConfig._parseEncrypt(parentNode)
    351 352 @staticmethod
    353 - def _parseEncrypt(parent):
    354 """ 355 Parses an encrypt configuration section. 356 357 We read the following individual fields:: 358 359 encryptMode //cb_config/encrypt/encrypt_mode 360 encryptTarget //cb_config/encrypt/encrypt_target 361 362 @param parent: Parent node to search beneath. 363 364 @return: C{EncryptConfig} object or C{None} if the section does not exist. 365 @raise ValueError: If some filled-in value is invalid. 366 """ 367 encrypt = None 368 section = readFirstChild(parent, "encrypt") 369 if section is not None: 370 encrypt = EncryptConfig() 371 encrypt.encryptMode = readString(section, "encrypt_mode") 372 encrypt.encryptTarget = readString(section, "encrypt_target") 373 return encrypt
    374
    375 376 ######################################################################## 377 # Public functions 378 ######################################################################## 379 380 ########################### 381 # executeAction() function 382 ########################### 383 384 -def executeAction(configPath, options, config):
    385 """ 386 Executes the encrypt backup action. 387 388 @param configPath: Path to configuration file on disk. 389 @type configPath: String representing a path on disk. 390 391 @param options: Program command-line options. 392 @type options: Options object. 393 394 @param config: Program configuration. 395 @type config: Config object. 396 397 @raise ValueError: Under many generic error conditions 398 @raise IOError: If there are I/O problems reading or writing files 399 """ 400 logger.debug("Executing encrypt extended action.") 401 if config.options is None or config.stage is None: 402 raise ValueError("Cedar Backup configuration is not properly filled in.") 403 local = LocalConfig(xmlPath=configPath) 404 if local.encrypt.encryptMode not in ["gpg", ]: 405 raise ValueError("Unknown encrypt mode [%s]" % local.encrypt.encryptMode) 406 if local.encrypt.encryptMode == "gpg": 407 _confirmGpgRecipient(local.encrypt.encryptTarget) 408 dailyDirs = findDailyDirs(config.stage.targetDir, ENCRYPT_INDICATOR) 409 for dailyDir in dailyDirs: 410 _encryptDailyDir(dailyDir, local.encrypt.encryptMode, local.encrypt.encryptTarget, 411 config.options.backupUser, config.options.backupGroup) 412 writeIndicatorFile(dailyDir, ENCRYPT_INDICATOR, config.options.backupUser, config.options.backupGroup) 413 logger.info("Executed the encrypt extended action successfully.")
    414
    415 416 ############################## 417 # _encryptDailyDir() function 418 ############################## 419 420 -def _encryptDailyDir(dailyDir, encryptMode, encryptTarget, backupUser, backupGroup):
    421 """ 422 Encrypts the contents of a daily staging directory. 423 424 Indicator files are ignored. All other files are encrypted. The only valid 425 encrypt mode is C{"gpg"}. 426 427 @param dailyDir: Daily directory to encrypt 428 @param encryptMode: Encryption mode (only "gpg" is allowed) 429 @param encryptTarget: Encryption target (GPG recipient for "gpg" mode) 430 @param backupUser: User that target files should be owned by 431 @param backupGroup: Group that target files should be owned by 432 433 @raise ValueError: If the encrypt mode is not supported. 434 @raise ValueError: If the daily staging directory does not exist. 435 """ 436 logger.debug("Begin encrypting contents of [%s].", dailyDir) 437 fileList = getBackupFiles(dailyDir) # ignores indicator files 438 for path in fileList: 439 _encryptFile(path, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=True) 440 logger.debug("Completed encrypting contents of [%s].", dailyDir)
    441
    442 443 ########################## 444 # _encryptFile() function 445 ########################## 446 447 -def _encryptFile(sourcePath, encryptMode, encryptTarget, backupUser, backupGroup, removeSource=False):
    448 """ 449 Encrypts the source file using the indicated mode. 450 451 The encrypted file will be owned by the indicated backup user and group. If 452 C{removeSource} is C{True}, then the source file will be removed after it is 453 successfully encrypted. 454 455 Currently, only the C{"gpg"} encrypt mode is supported. 456 457 @param sourcePath: Absolute path of the source file to encrypt 458 @param encryptMode: Encryption mode (only "gpg" is allowed) 459 @param encryptTarget: Encryption target (GPG recipient) 460 @param backupUser: User that target files should be owned by 461 @param backupGroup: Group that target files should be owned by 462 @param removeSource: Indicates whether to remove the source file 463 464 @return: Path to the newly-created encrypted file. 465 466 @raise ValueError: If an invalid encrypt mode is passed in. 467 @raise IOError: If there is a problem accessing, encrypting or removing the source file. 468 """ 469 if not os.path.exists(sourcePath): 470 raise ValueError("Source path [%s] does not exist." % sourcePath) 471 if encryptMode == 'gpg': 472 encryptedPath = _encryptFileWithGpg(sourcePath, recipient=encryptTarget) 473 else: 474 raise ValueError("Unknown encrypt mode [%s]" % encryptMode) 475 changeOwnership(encryptedPath, backupUser, backupGroup) 476 if removeSource: 477 if os.path.exists(sourcePath): 478 try: 479 os.remove(sourcePath) 480 logger.debug("Completed removing old file [%s].", sourcePath) 481 except OSError: 482 raise IOError("Failed to remove file [%s] after encrypting it." % (sourcePath)) 483 return encryptedPath
    484
    485 486 ################################# 487 # _encryptFileWithGpg() function 488 ################################# 489 490 -def _encryptFileWithGpg(sourcePath, recipient):
    491 """ 492 Encrypts the indicated source file using GPG. 493 494 The encrypted file will be in GPG's binary output format and will have the 495 same name as the source file plus a C{".gpg"} extension. The source file 496 will not be modified or removed by this function call. 497 498 @param sourcePath: Absolute path of file to be encrypted. 499 @param recipient: Recipient name to be passed to GPG's C{"-r"} option 500 501 @return: Path to the newly-created encrypted file. 502 503 @raise IOError: If there is a problem encrypting the file. 504 """ 505 encryptedPath = "%s.gpg" % sourcePath 506 command = resolveCommand(GPG_COMMAND) 507 args = [ "--batch", "--yes", "-e", "-r", recipient, "-o", encryptedPath, sourcePath, ] 508 result = executeCommand(command, args)[0] 509 if result != 0: 510 raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, sourcePath)) 511 if not os.path.exists(encryptedPath): 512 raise IOError("After call to [%s], encrypted file [%s] does not exist." % (command, encryptedPath)) 513 logger.debug("Completed encrypting file [%s] to [%s].", sourcePath, encryptedPath) 514 return encryptedPath
    515
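The gpg invocation assembled above can be reproduced as a standalone sketch using subprocess in place of resolveCommand()/executeCommand(); it assumes gpg is on the PATH and the recipient's public key has already been imported:

```python
import os
import subprocess

GPG_COMMAND = ["gpg"]

def build_gpg_encrypt_args(source_path, recipient):
    """Build the output path and argument list used by _encryptFileWithGpg()."""
    encrypted_path = "%s.gpg" % source_path
    args = ["--batch", "--yes", "-e", "-r", recipient, "-o", encrypted_path, source_path]
    return encrypted_path, args

def encrypt_file_with_gpg(source_path, recipient):
    """Sketch of _encryptFileWithGpg() using subprocess directly (the real
    code resolves the gpg binary via resolveCommand() and runs it through
    executeCommand()); requires gpg on the PATH and a known recipient key."""
    encrypted_path, args = build_gpg_encrypt_args(source_path, recipient)
    result = subprocess.call(GPG_COMMAND + args)
    if result != 0:
        raise IOError("Error [%d] calling gpg to encrypt [%s]." % (result, source_path))
    if not os.path.exists(encrypted_path):
        raise IOError("Encrypted file [%s] does not exist." % encrypted_path)
    return encrypted_path
```

The `--batch --yes` flags keep gpg non-interactive, which is essential for an unattended backup run.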
    516 517 ################################# 518 # _confirmGpgRecipient() function 519 ################################# 520 521 -def _confirmGpgRecipient(recipient):
    522 """ 523 Confirms that a recipient's public key is known to GPG. 524 Throws an exception if there is a problem, or returns normally otherwise. 525 @param recipient: Recipient name 526 @raise IOError: If the recipient's public key is not known to GPG. 527 """ 528 command = resolveCommand(GPG_COMMAND) 529 args = [ "--batch", "-k", recipient, ] # should use --with-colons if the output will be parsed 530 result = executeCommand(command, args)[0] 531 if result != 0: 532 raise IOError("GPG unable to find public key for [%s]." % recipient)
    533
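The comment in _confirmGpgRecipient() notes that `--with-colons` should be used if gpg's output is to be parsed. A sketch of such parsing, following GPG's colon-record format in which records are colon-separated fields, the first field is the record type, and field 10 of a `uid` record carries the user ID (this is a hypothetical helper, not part of Cedar Backup):

```python
def parse_gpg_key_uids(colon_output):
    """Extract user IDs from `gpg --batch --with-colons -k <name>` output.

    Hypothetical helper: scans each colon-separated record, keeping
    field 10 (index 9) of every 'uid' record that has a non-empty value.
    """
    uids = []
    for line in colon_output.splitlines():
        fields = line.split(":")
        if fields[0] == "uid" and len(fields) > 9 and fields[9]:
            uids.append(fields[9])
    return uids
```

Parsing the machine-readable colon format is more robust than matching gpg's human-readable output, which varies across versions and locales.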

    CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.testutil-module.html0000664000175000017500000000767612642035643027346 0ustar pronovicpronovic00000000000000 testutil

    Module testutil


    Functions

    availableLocales
    buildPath
    captureOutput
    changeFileAge
    commandAvailable
    extractTar
    failUnlessAssignRaises
    findResources
    getLogin
    getMaskAsMode
    hexFloatLiteralAllowed
    platformCygwin
    platformDebian
    platformHasEcho
    platformMacOsX
    platformRequiresBinaryRead
    platformSupportsLinks
    platformSupportsPermissions
    platformWindows
    randomFilename
    removedir
    runningAsRoot
    setupDebugLogger
    setupOverrides

    Variables

    __package__

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.dvdwriter.MediaDefinition-class.html0000664000175000017500000004303712642035644033255 0ustar pronovicpronovic00000000000000 CedarBackup2.writers.dvdwriter.MediaDefinition

    Class MediaDefinition

    source code

    object --+
             |
            MediaDefinition
    

    Class encapsulating information about DVD media definitions.

    The following media types are accepted:

    • MEDIA_DVDPLUSR: DVD+R media (4.4 GB capacity)
    • MEDIA_DVDPLUSRW: DVD+RW media (4.4 GB capacity)

    Note that the capacity attribute returns capacity in terms of ISO sectors (util.ISO_SECTOR_SIZE). This is for compatibility with the CD writer functionality.

    The capacities are 4.4 GB because Cedar Backup deals in "true" gigabytes of 1024*1024*1024 bytes per gigabyte.

    Instance Methods
     
    __init__(self, mediaType)
    Creates a media definition for the indicated media type.
    source code
     
    _setValues(self, mediaType)
    Sets values based on media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type value.
    source code
     
    _getRewritable(self)
    Property target used to get the rewritable flag value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties
      mediaType
    Configured media type.
      rewritable
    Boolean indicating whether the media is rewritable.
      capacity
    Total capacity of media in 2048-byte sectors.

    Inherited from object: __class__

    Method Details

    __init__(self, mediaType)
    (Constructor)

    source code 

    Creates a media definition for the indicated media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.
    Overrides: object.__init__

    _setValues(self, mediaType)

    source code 

    Sets values based on media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.

    Property Details

    mediaType

    Configured media type.

    Get Method:
    _getMediaType(self) - Property target used to get the media type value.

    rewritable

    Boolean indicating whether the media is rewritable.

    Get Method:
    _getRewritable(self) - Property target used to get the rewritable flag value.

    capacity

    Total capacity of media in 2048-byte sectors.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity value.

    CedarBackup2-2.26.5/doc/interface/epydoc.css0000664000175000017500000003722712642035643022300 0ustar pronovicpronovic00000000000000 /* Epydoc CSS Stylesheet * * This stylesheet can be used to customize the appearance of epydoc's * HTML output. * */ /* Default Colors & Styles * - Set the default foreground & background color with 'body'; and * link colors with 'a:link' and 'a:visited'. * - Use bold for decision list terms. * - The heading styles defined here are used for headings *within* * docstring descriptions. All headings used by epydoc itself use * either class='epydoc' or class='toc' (CSS styles for both * defined below). */ body { background: #ffffff; color: #000000; } p { margin-top: 0.5em; margin-bottom: 0.5em; } a:link { color: #0000ff; } a:visited { color: #204080; } dt { font-weight: bold; } h1 { font-size: +140%; font-style: italic; font-weight: bold; } h2 { font-size: +125%; font-style: italic; font-weight: bold; } h3 { font-size: +110%; font-style: italic; font-weight: normal; } code { font-size: 100%; } /* N.B.: class, not pseudoclass */ a.link { font-family: monospace; } /* Page Header & Footer * - The standard page header consists of a navigation bar (with * pointers to standard pages such as 'home' and 'trees'); a * breadcrumbs list, which can be used to navigate to containing * classes or modules; options links, to show/hide private * variables and to show/hide frames; and a page title (using *

    ). The page title may be followed by a link to the * corresponding source code (using 'span.codelink'). * - The footer consists of a navigation bar, a timestamp, and a * pointer to epydoc's homepage. */ h1.epydoc { margin: 0; font-size: +140%; font-weight: bold; } h2.epydoc { font-size: +130%; font-weight: bold; } h3.epydoc { font-size: +115%; font-weight: bold; margin-top: 0.2em; } td h3.epydoc { font-size: +115%; font-weight: bold; margin-bottom: 0; } table.navbar { background: #a0c0ff; color: #000000; border: 2px groove #c0d0d0; } table.navbar table { color: #000000; } th.navbar-select { background: #70b0ff; color: #000000; } table.navbar a { text-decoration: none; } table.navbar a:link { color: #0000ff; } table.navbar a:visited { color: #204080; } span.breadcrumbs { font-size: 85%; font-weight: bold; } span.options { font-size: 70%; } span.codelink { font-size: 85%; } td.footer { font-size: 85%; } /* Table Headers * - Each summary table and details section begins with a 'header' * row. This row contains a section title (marked by * 'span.table-header') as well as a show/hide private link * (marked by 'span.options', defined above). * - Summary tables that contain user-defined groups mark those * groups using 'group header' rows. */ td.table-header { background: #70b0ff; color: #000000; border: 1px solid #608090; } td.table-header table { color: #000000; } td.table-header table a:link { color: #0000ff; } td.table-header table a:visited { color: #204080; } span.table-header { font-size: 120%; font-weight: bold; } th.group-header { background: #c0e0f8; color: #000000; text-align: left; font-style: italic; font-size: 115%; border: 1px solid #608090; } /* Summary Tables (functions, variables, etc) * - Each object is described by a single row of the table with * two cells. The left cell gives the object's type, and is * marked with 'code.summary-type'. The right cell gives the * object's name and a summary description. 
* - CSS styles for the table's header and group headers are * defined above, under 'Table Headers' */ table.summary { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; margin-bottom: 0.5em; } td.summary { border: 1px solid #608090; } code.summary-type { font-size: 85%; } table.summary a:link { color: #0000ff; } table.summary a:visited { color: #204080; } /* Details Tables (functions, variables, etc) * - Each object is described in its own div. * - A single-row summary table w/ table-header is used as * a header for each details section (CSS style for table-header * is defined above, under 'Table Headers'). */ table.details { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; margin: .2em 0 0 0; } table.details table { color: #000000; } table.details a:link { color: #0000ff; } table.details a:visited { color: #204080; } /* Fields */ dl.fields { margin-left: 2em; margin-top: 1em; margin-bottom: 1em; } dl.fields dd ul { margin-left: 0em; padding-left: 0em; } dl.fields dd ul li ul { margin-left: 2em; padding-left: 0em; } div.fields { margin-left: 2em; } div.fields p { margin-bottom: 0.5em; } /* Index tables (identifier index, term index, etc) * - link-index is used for indices containing lists of links * (namely, the identifier index & term index). * - index-where is used in link indices for the text indicating * the container/source for each link. * - metadata-index is used for indices containing metadata * extracted from fields (namely, the bug index & todo index). 
*/ table.link-index { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; } td.link-index { border-width: 0px; } table.link-index a:link { color: #0000ff; } table.link-index a:visited { color: #204080; } span.index-where { font-size: 70%; } table.metadata-index { border-collapse: collapse; background: #e8f0f8; color: #000000; border: 1px solid #608090; margin: .2em 0 0 0; } td.metadata-index { border-width: 1px; border-style: solid; } table.metadata-index a:link { color: #0000ff; } table.metadata-index a:visited { color: #204080; } /* Function signatures * - sig* is used for the signature in the details section. * - .summary-sig* is used for the signature in the summary * table, and when listing property accessor functions. * */ .sig-name { color: #006080; } .sig-arg { color: #008060; } .sig-default { color: #602000; } .summary-sig { font-family: monospace; } .summary-sig-name { color: #006080; font-weight: bold; } table.summary a.summary-sig-name:link { color: #006080; font-weight: bold; } table.summary a.summary-sig-name:visited { color: #006080; font-weight: bold; } .summary-sig-arg { color: #006040; } .summary-sig-default { color: #501800; } /* Subclass list */ ul.subclass-list { display: inline; } ul.subclass-list li { display: inline; } /* To render variables, classes etc. like functions */ table.summary .summary-name { color: #006080; font-weight: bold; font-family: monospace; } table.summary a.summary-name:link { color: #006080; font-weight: bold; font-family: monospace; } table.summary a.summary-name:visited { color: #006080; font-weight: bold; font-family: monospace; } /* Variable values * - In the 'variable details' sections, each varaible's value is * listed in a 'pre.variable' box. The width of this box is * restricted to 80 chars; if the value's repr is longer than * this it will be wrapped, using a backslash marked with * class 'variable-linewrap'. 
If the value's repr is longer * than 3 lines, the rest will be ellided; and an ellipsis * marker ('...' marked with 'variable-ellipsis') will be used. * - If the value is a string, its quote marks will be marked * with 'variable-quote'. * - If the variable is a regexp, it is syntax-highlighted using * the re* CSS classes. */ pre.variable { padding: .5em; margin: 0; background: #dce4ec; color: #000000; border: 1px solid #708890; } .variable-linewrap { color: #604000; font-weight: bold; } .variable-ellipsis { color: #604000; font-weight: bold; } .variable-quote { color: #604000; font-weight: bold; } .variable-group { color: #008000; font-weight: bold; } .variable-op { color: #604000; font-weight: bold; } .variable-string { color: #006030; } .variable-unknown { color: #a00000; font-weight: bold; } .re { color: #000000; } .re-char { color: #006030; } .re-op { color: #600000; } .re-group { color: #003060; } .re-ref { color: #404040; } /* Base tree * - Used by class pages to display the base class hierarchy. */ pre.base-tree { font-size: 80%; margin: 0; } /* Frames-based table of contents headers * - Consists of two frames: one for selecting modules; and * the other listing the contents of the selected module. * - h1.toc is used for each frame's heading * - h2.toc is used for subheadings within each frame. */ h1.toc { text-align: center; font-size: 105%; margin: 0; font-weight: bold; padding: 0; } h2.toc { font-size: 100%; font-weight: bold; margin: 0.5em 0 0 -0.3em; } /* Syntax Highlighting for Source Code * - doctest examples are displayed in a 'pre.py-doctest' block. * If the example is in a details table entry, then it will use * the colors specified by the 'table pre.py-doctest' line. * - Source code listings are displayed in a 'pre.py-src' block. * Each line is marked with 'span.py-line' (used to draw a line * down the left margin, separating the code from the line * numbers). Line numbers are displayed with 'span.py-lineno'. 
* The expand/collapse block toggle button is displayed with * 'a.py-toggle' (Note: the CSS style for 'a.py-toggle' should not * modify the font size of the text.) * - If a source code page is opened with an anchor, then the * corresponding code block will be highlighted. The code * block's header is highlighted with 'py-highlight-hdr'; and * the code block's body is highlighted with 'py-highlight'. * - The remaining py-* classes are used to perform syntax * highlighting (py-string for string literals, py-name for names, * etc.) */ pre.py-doctest { padding: .5em; margin: 1em; background: #e8f0f8; color: #000000; border: 1px solid #708890; } table pre.py-doctest { background: #dce4ec; color: #000000; } pre.py-src { border: 2px solid #000000; background: #f0f0f0; color: #000000; } .py-line { border-left: 2px solid #000000; margin-left: .2em; padding-left: .4em; } .py-lineno { font-style: italic; font-size: 90%; padding-left: .5em; } a.py-toggle { text-decoration: none; } div.py-highlight-hdr { border-top: 2px solid #000000; border-bottom: 2px solid #000000; background: #d8e8e8; } div.py-highlight { border-bottom: 2px solid #000000; background: #d0e0e0; } .py-prompt { color: #005050; font-weight: bold;} .py-more { color: #005050; font-weight: bold;} .py-string { color: #006030; } .py-comment { color: #003060; } .py-keyword { color: #600000; } .py-output { color: #404040; } .py-name { color: #000050; } .py-name:link { color: #000050 !important; } .py-name:visited { color: #000050 !important; } .py-number { color: #005000; } .py-defname { color: #000060; font-weight: bold; } .py-def-name { color: #000060; font-weight: bold; } .py-base-class { color: #000060; } .py-param { color: #000060; } .py-docstring { color: #006030; } .py-decorator { color: #804020; } /* Use this if you don't want links to names underlined: */ /*a.py-name { text-decoration: none; }*/ /* Graphs & Diagrams * - These CSS styles are used for graphs & diagrams generated using * Graphviz dot. 
'img.graph-without-title' is used for bare * diagrams (to remove the border created by making the image * clickable). */ img.graph-without-title { border: none; } img.graph-with-title { border: 1px solid #000000; } span.graph-title { font-weight: bold; } span.graph-caption { } /* General-purpose classes * - 'p.indent-wrapped-lines' defines a paragraph whose first line * is not indented, but whose subsequent lines are. * - The 'nomargin-top' class is used to remove the top margin (e.g. * from lists). The 'nomargin' class is used to remove both the * top and bottom margin (but not the left or right margin -- * for lists, that would cause the bullets to disappear.) */ p.indent-wrapped-lines { padding: 0 0 0 7em; text-indent: -7em; margin: 0; } .nomargin-top { margin-top: 0; } .nomargin { margin-top: 0; margin-bottom: 0; } /* HTML Log */ div.log-block { padding: 0; margin: .5em 0 .5em 0; background: #e8f0f8; color: #000000; border: 1px solid #000000; } div.log-error { padding: .1em .3em .1em .3em; margin: 4px; background: #ffb0b0; color: #000000; border: 1px solid #000000; } div.log-warning { padding: .1em .3em .1em .3em; margin: 4px; background: #ffffb0; color: #000000; border: 1px solid #000000; } div.log-info { padding: .1em .3em .1em .3em; margin: 4px; background: #b0ffb0; color: #000000; border: 1px solid #000000; } h2.log-hdr { background: #70b0ff; color: #000000; margin: 0; padding: 0em 0.5em 0em 0.5em; border-bottom: 1px solid #000000; font-size: 110%; } p.log { font-weight: bold; margin: .5em 0 .5em 0; } tr.opt-changed { color: #000000; font-weight: bold; } tr.opt-default { color: #606060; } pre.log { margin: 0; padding: 0; padding-left: 1em; }
CedarBackup2-2.26.5/doc/interface/CedarBackup2.action-module.html
CedarBackup2.action
    Package CedarBackup2 :: Module action
    [hide private]
    [frames] | [no frames]

    Module action

    source code

    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place to reorganize the code for the standard actions. The code formerly in action.py was split into various other files in the CedarBackup2.actions package. This mostly-empty file remains to preserve the Cedar Backup library interface.
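The refactoring note above describes a common compatibility-shim pattern: the old module path keeps working because it re-exports the names that moved. A minimal sketch of that pattern (illustrative only, not CedarBackup2's actual code; `os.path` stands in for the relocated package):

```python
import importlib

def reexport(new_module_name, names):
    """Re-export selected names from a relocated module so that code
    importing them from the old location keeps working."""
    mod = importlib.import_module(new_module_name)
    return {name: getattr(mod, name) for name in names}

# Stand-in example: pretend 'os.path' is the new package location.
legacy = reexport("os.path", ["basename", "splitext"])
print(legacy["basename"]("/etc/cback.conf"))  # cback.conf
```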


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Variables [hide private]
      __package__ = 'CedarBackup2'
CedarBackup2-2.26.5/doc/interface/CedarBackup2.cli-pysrc.html
CedarBackup2.cli
    Package CedarBackup2 :: Module cli
    [hide private]
    [frames] | [no frames]

    Source Code for Module CedarBackup2.cli

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2007,2010,2015 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # This program is free software; you can redistribute it and/or 
      15  # modify it under the terms of the GNU General Public License, 
      16  # Version 2, as published by the Free Software Foundation. 
      17  # 
      18  # This program is distributed in the hope that it will be useful, 
      19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      21  # 
      22  # Copies of the GNU General Public License are available from 
      23  # the Free Software Foundation website, http://www.gnu.org/. 
      24  # 
      25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      26  # 
      27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      28  # Language : Python 2 (>= 2.7) 
      29  # Project  : Cedar Backup, release 2 
      30  # Purpose  : Provides command-line interface implementation. 
      31  # 
      32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      33   
      34  ######################################################################## 
      35  # Module documentation 
      36  ######################################################################## 
      37   
      38  """ 
      39  Provides command-line interface implementation for the cback script. 
      40   
      41  Summary 
      42  ======= 
      43   
      44     The functionality in this module encapsulates the command-line interface for 
      45     the cback script.  The cback script itself is very short, basically just an 
       46     invocation of one function implemented here.  That, in turn, makes it 
      47     simpler to validate the command line interface (for instance, it's easier to 
      48     run pychecker against a module, and unit tests are easier, too). 
      49   
      50     The objects and functions implemented in this module are probably not useful 
      51     to any code external to Cedar Backup.   Anyone else implementing their own 
      52     command-line interface would have to reimplement (or at least enhance) all 
      53     of this anyway. 
      54   
      55  Backwards Compatibility 
      56  ======================= 
      57   
      58     The command line interface has changed between Cedar Backup 1.x and Cedar 
      59     Backup 2.x.  Some new switches have been added, and the actions have become 
      60     simple arguments rather than switches (which is a much more standard command 
      61     line format).  Old 1.x command lines are generally no longer valid. 
      62   
      63  @var DEFAULT_CONFIG: The default configuration file. 
      64  @var DEFAULT_LOGFILE: The default log file path. 
      65  @var DEFAULT_OWNERSHIP: Default ownership for the logfile. 
      66  @var DEFAULT_MODE: Default file permissions mode on the logfile. 
      67  @var VALID_ACTIONS: List of valid actions. 
      68  @var COMBINE_ACTIONS: List of actions which can be combined with other actions. 
      69  @var NONCOMBINE_ACTIONS: List of actions which cannot be combined with other actions. 
      70   
      71  @sort: cli, Options, DEFAULT_CONFIG, DEFAULT_LOGFILE, DEFAULT_OWNERSHIP, 
      72         DEFAULT_MODE, VALID_ACTIONS, COMBINE_ACTIONS, NONCOMBINE_ACTIONS 
      73   
      74  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      75  """ 
      76   
      77  ######################################################################## 
      78  # Imported modules 
      79  ######################################################################## 
      80   
      81  # System modules 
      82  import sys 
      83  import os 
      84  import logging 
      85  import getopt 
      86   
      87  # Cedar Backup modules 
      88  from CedarBackup2.release import AUTHOR, EMAIL, VERSION, DATE, COPYRIGHT 
      89  from CedarBackup2.customize import customizeOverrides 
      90  from CedarBackup2.util import DirectedGraph, PathResolverSingleton 
      91  from CedarBackup2.util import sortDict, splitCommandLine, executeCommand, getFunctionReference 
      92  from CedarBackup2.util import getUidGid, encodePath, Diagnostics 
      93  from CedarBackup2.config import Config 
      94  from CedarBackup2.peer import RemotePeer 
      95  from CedarBackup2.actions.collect import executeCollect 
      96  from CedarBackup2.actions.stage import executeStage 
      97  from CedarBackup2.actions.store import executeStore 
      98  from CedarBackup2.actions.purge import executePurge 
      99  from CedarBackup2.actions.rebuild import executeRebuild 
     100  from CedarBackup2.actions.validate import executeValidate 
     101  from CedarBackup2.actions.initialize import executeInitialize 
     102   
     103   
     104  ######################################################################## 
     105  # Module-wide constants and variables 
     106  ######################################################################## 
     107   
     108  logger = logging.getLogger("CedarBackup2.log.cli") 
     109   
     110  DISK_LOG_FORMAT    = "%(asctime)s --> [%(levelname)-7s] %(message)s" 
     111  DISK_OUTPUT_FORMAT = "%(message)s" 
     112  SCREEN_LOG_FORMAT  = "%(message)s" 
     113  SCREEN_LOG_STREAM  = sys.stdout 
     114  DATE_FORMAT        = "%Y-%m-%dT%H:%M:%S %Z" 
     115   
     116  DEFAULT_CONFIG     = "/etc/cback.conf" 
     117  DEFAULT_LOGFILE    = "/var/log/cback.log" 
     118  DEFAULT_OWNERSHIP  = [ "root", "adm", ] 
     119  DEFAULT_MODE       = 0640 
     120   
     121  REBUILD_INDEX      = 0        # can't run with anything else, anyway 
     122  VALIDATE_INDEX     = 0        # can't run with anything else, anyway 
     123  INITIALIZE_INDEX   = 0        # can't run with anything else, anyway 
     124  COLLECT_INDEX      = 100 
     125  STAGE_INDEX        = 200 
     126  STORE_INDEX        = 300 
     127  PURGE_INDEX        = 400 
     128   
     129  VALID_ACTIONS      = [ "collect", "stage", "store", "purge", "rebuild", "validate", "initialize", "all", ] 
     130  COMBINE_ACTIONS    = [ "collect", "stage", "store", "purge", ] 
     131  NONCOMBINE_ACTIONS = [ "rebuild", "validate", "initialize", "all", ] 
     132   
     133  SHORT_SWITCHES     = "hVbqc:fMNl:o:m:OdsD" 
     134  LONG_SWITCHES      = [ 'help', 'version', 'verbose', 'quiet', 
     135                         'config=', 'full', 'managed', 'managed-only', 
     136                         'logfile=', 'owner=', 'mode=', 
     137                         'output', 'debug', 'stack', 'diagnostics', ] 
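The SHORT_SWITCHES and LONG_SWITCHES tables above are in the format expected by the standard getopt module: a colon after a short switch and a trailing "=" on a long name mark switches that take an argument. A sketch of how they parse a command line (written in Python 3 syntax, while the listing itself is Python 2; getopt behaves the same in both):

```python
import getopt

SHORT_SWITCHES = "hVbqc:fMNl:o:m:OdsD"
LONG_SWITCHES = ['help', 'version', 'verbose', 'quiet',
                 'config=', 'full', 'managed', 'managed-only',
                 'logfile=', 'owner=', 'mode=',
                 'output', 'debug', 'stack', 'diagnostics']

# Switches are parsed off the front; remaining arguments are actions.
argv = ["-c", "/etc/cback.conf", "--full", "collect", "stage"]
opts, actions = getopt.getopt(argv, SHORT_SWITCHES, LONG_SWITCHES)
print(opts)     # [('-c', '/etc/cback.conf'), ('--full', '')]
print(actions)  # ['collect', 'stage']
```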
    
    138 139 140 ####################################################################### 141 # Public functions 142 ####################################################################### 143 144 ################# 145 # cli() function 146 ################# 147 148 -def cli():
    149 """ 150 Implements the command-line interface for the C{cback} script. 151 152 Essentially, this is the "main routine" for the cback script. It does all 153 of the argument processing for the script, and then sets about executing the 154 indicated actions. 155 156 As a general rule, only the actions indicated on the command line will be 157 executed. We will accept any of the built-in actions and any of the 158 configured extended actions (which makes action list verification a two- 159 step process). 160 161 The C{'all'} action has a special meaning: it means that the built-in set of 162 actions (collect, stage, store, purge) will all be executed, in that order. 163 Extended actions will be ignored as part of the C{'all'} action. 164 165 Raised exceptions always result in an immediate return. Otherwise, we 166 generally return when all specified actions have been completed. Actions 167 are ignored if the help, version or validate flags are set. 168 169 A different error code is returned for each type of failure: 170 171 - C{1}: The Python interpreter version is < 2.7 172 - C{2}: Error processing command-line arguments 173 - C{3}: Error configuring logging 174 - C{4}: Error parsing indicated configuration file 175 - C{5}: Backup was interrupted with a CTRL-C or similar 176 - C{6}: Error executing specified backup actions 177 178 @note: This function contains a good amount of logging at the INFO level, 179 because this is the right place to document high-level flow of control (i.e. 180 what the command-line options were, what config file was being used, etc.) 181 182 @note: We assume that anything that I{must} be seen on the screen is logged 183 at the ERROR level. Errors that occur before logging can be configured are 184 written to C{sys.stderr}. 185 186 @return: Error code as described above. 
187 """ 188 try: 189 if map(int, [sys.version_info[0], sys.version_info[1]]) < [2, 7]: 190 sys.stderr.write("Python 2 version 2.7 or greater required.\n") 191 return 1 192 except: 193 # sys.version_info isn't available before 2.0 194 sys.stderr.write("Python 2 version 2.7 or greater required.\n") 195 return 1 196 197 try: 198 options = Options(argumentList=sys.argv[1:]) 199 logger.info("Specified command-line actions: %s", options.actions) 200 except Exception, e: 201 _usage() 202 sys.stderr.write(" *** Error: %s\n" % e) 203 return 2 204 205 if options.help: 206 _usage() 207 return 0 208 if options.version: 209 _version() 210 return 0 211 if options.diagnostics: 212 _diagnostics() 213 return 0 214 215 if options.stacktrace: 216 logfile = setupLogging(options) 217 else: 218 try: 219 logfile = setupLogging(options) 220 except Exception as e: 221 sys.stderr.write("Error setting up logging: %s\n" % e) 222 return 3 223 224 logger.info("Cedar Backup run started.") 225 logger.info("Options were [%s]", options) 226 logger.info("Logfile is [%s]", logfile) 227 Diagnostics().logDiagnostics(method=logger.info) 228 229 if options.config is None: 230 logger.debug("Using default configuration file.") 231 configPath = DEFAULT_CONFIG 232 else: 233 logger.debug("Using user-supplied configuration file.") 234 configPath = options.config 235 236 executeLocal = True 237 executeManaged = False 238 if options.managedOnly: 239 executeLocal = False 240 executeManaged = True 241 if options.managed: 242 executeManaged = True 243 logger.debug("Execute local actions: %s", executeLocal) 244 logger.debug("Execute managed actions: %s", executeManaged) 245 246 try: 247 logger.info("Configuration path is [%s]", configPath) 248 config = Config(xmlPath=configPath) 249 customizeOverrides(config) 250 setupPathResolver(config) 251 actionSet = _ActionSet(options.actions, config.extensions, config.options, 252 config.peers, executeManaged, executeLocal) 253 except Exception, e: 254 logger.error("Error 
reading or handling configuration: %s", e) 255 logger.info("Cedar Backup run completed with status 4.") 256 return 4 257 258 if options.stacktrace: 259 actionSet.executeActions(configPath, options, config) 260 else: 261 try: 262 actionSet.executeActions(configPath, options, config) 263 except KeyboardInterrupt: 264 logger.error("Backup interrupted.") 265 logger.info("Cedar Backup run completed with status 5.") 266 return 5 267 except Exception, e: 268 logger.error("Error executing backup: %s", e) 269 logger.info("Cedar Backup run completed with status 6.") 270 return 6 271 272 logger.info("Cedar Backup run completed with status 0.") 273 return 0
    274
    275 276 ######################################################################## 277 # Action-related class definition 278 ######################################################################## 279 280 #################### 281 # _ActionItem class 282 #################### 283 284 -class _ActionItem(object):
    285 286 """ 287 Class representing a single action to be executed. 288 289 This class represents a single named action to be executed, and understands 290 how to execute that action. 291 292 The built-in actions will use only the options and config values. We also 293 pass in the config path so that extension modules can re-parse configuration 294 if they want to, to add in extra information. 295 296 This class is also where pre-action and post-action hooks are executed. An 297 action item is instantiated in terms of optional pre- and post-action hook 298 objects (config.ActionHook), which are then executed at the appropriate time 299 (if set). 300 301 @note: The comparison operators for this class have been implemented to only 302 compare based on the index and SORT_ORDER value, and ignore all other 303 values. This is so that the action set list can be easily sorted first by 304 type (_ActionItem before _ManagedActionItem) and then by index within type. 305 306 @cvar SORT_ORDER: Defines a sort order to order properly between types. 307 """ 308 309 SORT_ORDER = 0 310
    311 - def __init__(self, index, name, preHooks, postHooks, function):
    312 """ 313 Default constructor. 314 315 It's OK to pass C{None} for C{index}, C{preHooks} or C{postHooks}, but not 316 for C{name}. 317 318 @param index: Index of the item (or C{None}). 319 @param name: Name of the action that is being executed. 320 @param preHooks: List of pre-action hooks in terms of an C{ActionHook} object, or C{None}. 321 @param postHooks: List of post-action hooks in terms of an C{ActionHook} object, or C{None}. 322 @param function: Reference to function associated with item. 323 """ 324 self.index = index 325 self.name = name 326 self.preHooks = preHooks 327 self.postHooks = postHooks 328 self.function = function
    329
    330 - def __cmp__(self, other):
    331 """ 332 Definition of equals operator for this class. 333 The only thing we compare is the item's index. 334 @param other: Other object to compare to. 335 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 336 """ 337 if other is None: 338 return 1 339 if self.index != other.index: 340 if self.index < other.index: 341 return -1 342 else: 343 return 1 344 else: 345 if self.SORT_ORDER != other.SORT_ORDER: 346 if self.SORT_ORDER < other.SORT_ORDER: 347 return -1 348 else: 349 return 1 350 return 0
    351
    352 - def executeAction(self, configPath, options, config):
    353 """ 354 Executes the action associated with an item, including hooks. 355 356 See class notes for more details on how the action is executed. 357 358 @param configPath: Path to configuration file on disk. 359 @param options: Command-line options to be passed to action. 360 @param config: Parsed configuration to be passed to action. 361 362 @raise Exception: If there is a problem executing the action. 363 """ 364 logger.debug("Executing [%s] action.", self.name) 365 if self.preHooks is not None: 366 for hook in self.preHooks: 367 self._executeHook("pre-action", hook) 368 self._executeAction(configPath, options, config) 369 if self.postHooks is not None: 370 for hook in self.postHooks: 371 self._executeHook("post-action", hook)
    372
    373 - def _executeAction(self, configPath, options, config):
    374 """ 375 Executes the action, specifically the function associated with the action. 376 @param configPath: Path to configuration file on disk. 377 @param options: Command-line options to be passed to action. 378 @param config: Parsed configuration to be passed to action. 379 """ 380 name = "%s.%s" % (self.function.__module__, self.function.__name__) 381 logger.debug("Calling action function [%s], execution index [%d]", name, self.index) 382 self.function(configPath, options, config)
    383
    384 - def _executeHook(self, type, hook): # pylint: disable=W0622,R0201
    385 """ 386 Executes a hook command via L{util.executeCommand()}. 387 @param type: String describing the type of hook, for logging. 388 @param hook: Hook, in terms of a C{ActionHook} object. 389 """ 390 fields = splitCommandLine(hook.command) 391 logger.debug("Executing %s hook for action [%s]: %s", type, hook.action, fields[0:1]) 392 result = executeCommand(command=fields[0:1], args=fields[1:])[0] 393 if result != 0: 394 raise IOError("Error (%d) executing %s hook for action [%s]: %s" % (result, type, hook.action, fields[0:1]))
    395
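The _executeHook() method above splits the configured command line and raises IOError on a nonzero return status. The same behavior can be sketched with only the standard library (shlex and subprocess stand in for CedarBackup2's splitCommandLine() and executeCommand(); assumes a POSIX environment for the example command):

```python
import shlex
import subprocess

def run_hook(hook_type, command):
    """Run a pre-/post-action hook command, failing loudly on error."""
    fields = shlex.split(command)
    result = subprocess.call(fields)
    if result != 0:
        raise IOError("Error (%d) executing %s hook: %s" % (result, hook_type, fields[0]))

run_hook("pre-action", "echo starting collect")  # exit status 0: no exception
```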
    396 397 ########################### 398 # _ManagedActionItem class 399 ########################### 400 401 -class _ManagedActionItem(object):
    402 403 """ 404 Class representing a single action to be executed on a managed peer. 405 406 This class represents a single named action to be executed, and understands 407 how to execute that action. 408 409 Actions to be executed on a managed peer rely on peer configuration and 410 on the full-backup flag. All other configuration takes place on the remote 411 peer itself. 412 413 @note: The comparison operators for this class have been implemented to only 414 compare based on the index and SORT_ORDER value, and ignore all other 415 values. This is so that the action set list can be easily sorted first by 416 type (_ActionItem before _ManagedActionItem) and then by index within type. 417 418 @cvar SORT_ORDER: Defines a sort order to order properly between types. 419 """ 420 421 SORT_ORDER = 1 422
    423 - def __init__(self, index, name, remotePeers):
    424 """ 425 Default constructor. 426 427 @param index: Index of the item (or C{None}). 428 @param name: Name of the action that is being executed. 429 @param remotePeers: List of remote peers on which to execute the action. 430 """ 431 self.index = index 432 self.name = name 433 self.remotePeers = remotePeers
    434
    435 - def __cmp__(self, other):
    436 """ 437 Definition of equals operator for this class. 438 The only thing we compare is the item's index. 439 @param other: Other object to compare to. 440 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 441 """ 442 if other is None: 443 return 1 444 if self.index != other.index: 445 if self.index < other.index: 446 return -1 447 else: 448 return 1 449 else: 450 if self.SORT_ORDER != other.SORT_ORDER: 451 if self.SORT_ORDER < other.SORT_ORDER: 452 return -1 453 else: 454 return 1 455 return 0
    456
    457 - def executeAction(self, configPath, options, config):
    458 """ 459 Executes the managed action associated with an item. 460 461 @note: Only options.full is actually used. The rest of the arguments 462 exist to satisfy the ActionItem iterface. 463 464 @note: Errors here result in a message logged to ERROR, but no thrown 465 exception. The analogy is the stage action where a problem with one host 466 should not kill the entire backup. Since we're logging an error, the 467 administrator will get an email. 468 469 @param configPath: Path to configuration file on disk. 470 @param options: Command-line options to be passed to action. 471 @param config: Parsed configuration to be passed to action. 472 473 @raise Exception: If there is a problem executing the action. 474 """ 475 for peer in self.remotePeers: 476 logger.debug("Executing managed action [%s] on peer [%s].", self.name, peer.name) 477 try: 478 peer.executeManagedAction(self.name, options.full) 479 except IOError, e: 480 logger.error(e) # log the message and go on, so we don't kill the backup
    481
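The __cmp__ implementations above (in both _ActionItem and _ManagedActionItem) order items first by index and then by SORT_ORDER, so local items sort ahead of managed items at the same index. The same ordering falls out of a key function over the (index, SORT_ORDER) tuple; a simplified stand-alone sketch (the Item classes here are stand-ins, not the real action item classes):

```python
class Item(object):
    SORT_ORDER = 0              # stand-in for _ActionItem
    def __init__(self, index):
        self.index = index

class ManagedItem(Item):
    SORT_ORDER = 1              # stand-in for _ManagedActionItem

items = [ManagedItem(100), Item(200), Item(100)]
ordered = sorted(items, key=lambda i: (i.index, i.SORT_ORDER))
print([(i.index, i.SORT_ORDER) for i in ordered])
# [(100, 0), (100, 1), (200, 0)]
```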
    482 483 ################### 484 # _ActionSet class 485 ################### 486 487 -class _ActionSet(object):
    488 489 """ 490 Class representing a set of local actions to be executed. 491 492 This class does four different things. First, it ensures that the actions 493 specified on the command-line are sensible. The command-line can only list 494 either built-in actions or extended actions specified in configuration. 495 Also, certain actions (in L{NONCOMBINE_ACTIONS}) cannot be combined with 496 other actions. 497 498 Second, the class enforces an execution order on the specified actions. Any 499 time actions are combined on the command line (either built-in actions or 500 extended actions), we must make sure they get executed in a sensible order. 501 502 Third, the class ensures that any pre-action or post-action hooks are 503 scheduled and executed appropriately. Hooks are configured by building a 504 dictionary mapping between hook action name and command. Pre-action hooks 505 are executed immediately before their associated action, and post-action 506 hooks are executed immediately after their associated action. 507 508 Finally, the class properly interleaves local and managed actions so that 509 the same action gets executed first locally and then on managed peers. 510 511 @sort: __init__, executeActions 512 """ 513
    514 - def __init__(self, actions, extensions, options, peers, managed, local):
    515 """ 516 Constructor for the C{_ActionSet} class. 517 518 This is kind of ugly, because the constructor has to set up a lot of data 519 before being able to do anything useful. The following data structures 520 are initialized based on the input: 521 522 - C{extensionNames}: List of extensions available in configuration 523 - C{preHookMap}: Mapping from action name to list of C{PreActionHook} 524 - C{postHookMap}: Mapping from action name to list of C{PostActionHook} 525 - C{functionMap}: Mapping from action name to Python function 526 - C{indexMap}: Mapping from action name to execution index 527 - C{peerMap}: Mapping from action name to set of C{RemotePeer} 528 - C{actionMap}: Mapping from action name to C{_ActionItem} 529 530 Once these data structures are set up, the command line is validated to 531 make sure only valid actions have been requested, and in a sensible 532 combination. Then, all of the data is used to build C{self.actionSet}, 533 the set action items to be executed by C{executeActions()}. This list 534 might contain either C{_ActionItem} or C{_ManagedActionItem}. 535 536 @param actions: Names of actions specified on the command-line. 537 @param extensions: Extended action configuration (i.e. config.extensions) 538 @param options: Options configuration (i.e. config.options) 539 @param peers: Peers configuration (i.e. config.peers) 540 @param managed: Whether to include managed actions in the set 541 @param local: Whether to include local actions in the set 542 543 @raise ValueError: If one of the specified actions is invalid. 
544 """ 545 extensionNames = _ActionSet._deriveExtensionNames(extensions) 546 (preHookMap, postHookMap) = _ActionSet._buildHookMaps(options.hooks) 547 functionMap = _ActionSet._buildFunctionMap(extensions) 548 indexMap = _ActionSet._buildIndexMap(extensions) 549 peerMap = _ActionSet._buildPeerMap(options, peers) 550 actionMap = _ActionSet._buildActionMap(managed, local, extensionNames, functionMap, 551 indexMap, preHookMap, postHookMap, peerMap) 552 _ActionSet._validateActions(actions, extensionNames) 553 self.actionSet = _ActionSet._buildActionSet(actions, actionMap)
    554 555 @staticmethod
    556 - def _deriveExtensionNames(extensions):
    557 """ 558 Builds a list of extended actions that are available in configuration. 559 @param extensions: Extended action configuration (i.e. config.extensions) 560 @return: List of extended action names. 561 """ 562 extensionNames = [] 563 if extensions is not None and extensions.actions is not None: 564 for action in extensions.actions: 565 extensionNames.append(action.name) 566 return extensionNames
    567 568 @staticmethod
    569 - def _buildHookMaps(hooks):
    570 """ 571 Build two mappings from action name to configured C{ActionHook}. 572 @param hooks: List of pre- and post-action hooks (i.e. config.options.hooks) 573 @return: Tuple of (pre hook dictionary, post hook dictionary). 574 """ 575 preHookMap = {} 576 postHookMap = {} 577 if hooks is not None: 578 for hook in hooks: 579 if hook.before: 580 if not hook.action in preHookMap: 581 preHookMap[hook.action] = [] 582 preHookMap[hook.action].append(hook) 583 elif hook.after: 584 if not hook.action in postHookMap: 585 postHookMap[hook.action] = [] 586 postHookMap[hook.action].append(hook) 587 return (preHookMap, postHookMap)
    588 589 @staticmethod
    590 - def _buildFunctionMap(extensions):
    591 """ 592 Builds a mapping from named action to action function. 593 @param extensions: Extended action configuration (i.e. config.extensions) 594 @return: Dictionary mapping action to function. 595 """ 596 functionMap = {} 597 functionMap['rebuild'] = executeRebuild 598 functionMap['validate'] = executeValidate 599 functionMap['initialize'] = executeInitialize 600 functionMap['collect'] = executeCollect 601 functionMap['stage'] = executeStage 602 functionMap['store'] = executeStore 603 functionMap['purge'] = executePurge 604 if extensions is not None and extensions.actions is not None: 605 for action in extensions.actions: 606 functionMap[action.name] = getFunctionReference(action.module, action.function) 607 return functionMap
    608 609 @staticmethod
    def _buildIndexMap(extensions):
        """
        Builds a mapping from action name to proper execution index.

        If extensions configuration is C{None}, or there are no configured
        extended actions, the ordering dictionary will only include the built-in
        actions and their standard indices.

        Otherwise, if the extensions order mode is C{None} or C{"index"}, actions
        will be scheduled by explicit index; and if the extensions order mode is
        C{"dependency"}, actions will be scheduled using a dependency graph.

        @param extensions: Extended action configuration (i.e. config.extensions)

        @return: Dictionary mapping action name to integer execution index.
        """
        indexMap = {}
        if extensions is None or extensions.actions is None or extensions.actions == []:
            logger.info("Action ordering will use 'index' order mode.")
            indexMap['rebuild'] = REBUILD_INDEX
            indexMap['validate'] = VALIDATE_INDEX
            indexMap['initialize'] = INITIALIZE_INDEX
            indexMap['collect'] = COLLECT_INDEX
            indexMap['stage'] = STAGE_INDEX
            indexMap['store'] = STORE_INDEX
            indexMap['purge'] = PURGE_INDEX
            logger.debug("Completed filling in action indices for built-in actions.")
            logger.info("Action order will be: %s", sortDict(indexMap))
        else:
            if extensions.orderMode is None or extensions.orderMode == "index":
                logger.info("Action ordering will use 'index' order mode.")
                indexMap['rebuild'] = REBUILD_INDEX
                indexMap['validate'] = VALIDATE_INDEX
                indexMap['initialize'] = INITIALIZE_INDEX
                indexMap['collect'] = COLLECT_INDEX
                indexMap['stage'] = STAGE_INDEX
                indexMap['store'] = STORE_INDEX
                indexMap['purge'] = PURGE_INDEX
                logger.debug("Completed filling in action indices for built-in actions.")
                for action in extensions.actions:
                    indexMap[action.name] = action.index
                logger.debug("Completed filling in action indices for extended actions.")
                logger.info("Action order will be: %s", sortDict(indexMap))
            else:
                logger.info("Action ordering will use 'dependency' order mode.")
                graph = DirectedGraph("dependencies")
                graph.createVertex("rebuild")
                graph.createVertex("validate")
                graph.createVertex("initialize")
                graph.createVertex("collect")
                graph.createVertex("stage")
                graph.createVertex("store")
                graph.createVertex("purge")
                for action in extensions.actions:
                    graph.createVertex(action.name)
                graph.createEdge("collect", "stage")  # Collect must run before stage, store or purge
                graph.createEdge("collect", "store")
                graph.createEdge("collect", "purge")
                graph.createEdge("stage", "store")    # Stage must run before store or purge
                graph.createEdge("stage", "purge")
                graph.createEdge("store", "purge")    # Store must run before purge
                for action in extensions.actions:
                    if action.dependencies.beforeList is not None:
                        for vertex in action.dependencies.beforeList:
                            try:
                                graph.createEdge(action.name, vertex)  # actions that this action must be run before
                            except ValueError:
                                logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name)
                                raise ValueError("Unable to determine proper action order due to invalid dependency.")
                    if action.dependencies.afterList is not None:
                        for vertex in action.dependencies.afterList:
                            try:
                                graph.createEdge(vertex, action.name)  # actions that this action must be run after
                            except ValueError:
                                logger.error("Dependency [%s] on extension [%s] is unknown.", vertex, action.name)
                                raise ValueError("Unable to determine proper action order due to invalid dependency.")
                try:
                    ordering = graph.topologicalSort()
                    indexMap = dict([(ordering[i], i+1) for i in range(0, len(ordering))])
                    logger.info("Action order will be: %s", ordering)
                except ValueError:
                    logger.error("Unable to determine proper action order due to dependency recursion.")
                    logger.error("Extensions configuration is invalid (check for loops).")
                    raise ValueError("Unable to determine proper action order due to dependency recursion.")
        return indexMap
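The C{"dependency"} branch above delegates ordering to C{DirectedGraph.topologicalSort()}. The same idea can be sketched independently with plain data structures; the helper below is illustrative and not part of Cedar Backup's API:

```python
def topological_order(edges, vertices):
    """Return vertices ordered so every edge (a, b) has a before b.

    Raises ValueError on a cycle, mirroring how _buildIndexMap() treats
    dependency recursion as invalid configuration.
    """
    incoming = dict((v, 0) for v in vertices)
    for a, b in edges:
        incoming[b] += 1
    ready = [v for v in vertices if incoming[v] == 0]
    ordering = []
    while ready:
        v = ready.pop()
        ordering.append(v)
        for a, b in edges:
            if a == v:
                incoming[b] -= 1
                if incoming[b] == 0:
                    ready.append(b)
    if len(ordering) != len(vertices):
        raise ValueError("Dependency graph contains a cycle.")
    return ordering

# The built-in ordering constraints from _buildIndexMap(), as (before, after) pairs
edges = [("collect", "stage"), ("collect", "store"), ("collect", "purge"),
         ("stage", "store"), ("stage", "purge"), ("store", "purge")]
ordering = topological_order(edges, ["collect", "stage", "store", "purge"])
indexMap = dict((name, i + 1) for i, name in enumerate(ordering))
```

As in the real code, the execution index of each action is simply its position (1-based) in the sorted ordering.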
    @staticmethod
    def _buildActionMap(managed, local, extensionNames, functionMap, indexMap, preHookMap, postHookMap, peerMap):
        """
        Builds a mapping from action name to list of action items.

        We build either C{_ActionItem} or C{_ManagedActionItem} objects here.

        In most cases, the mapping from action name to C{_ActionItem} is 1:1.
        The exception is the "all" action, which is a special case.  However, a
        list is returned in all cases, just for consistency later.  Each
        C{_ActionItem} will be created with a proper function reference and index
        value for execution ordering.

        The mapping from action name to C{_ManagedActionItem} is always 1:1.
        Each managed action item contains a list of peers on which the action
        should be executed.

        @param managed: Whether to include managed actions in the set
        @param local: Whether to include local actions in the set
        @param extensionNames: List of valid extended action names
        @param functionMap: Dictionary mapping action name to Python function
        @param indexMap: Dictionary mapping action name to integer execution index
        @param preHookMap: Dictionary mapping action name to pre hooks (if any) for the action
        @param postHookMap: Dictionary mapping action name to post hooks (if any) for the action
        @param peerMap: Dictionary mapping action name to list of remote peers on which to execute the action

        @return: Dictionary mapping action name to list of C{_ActionItem} objects.
        """
        actionMap = {}
        for name in extensionNames + VALID_ACTIONS:
            if name != 'all':  # do this one later
                function = functionMap[name]
                index = indexMap[name]
                actionMap[name] = []
                if local:
                    (preHooks, postHooks) = _ActionSet._deriveHooks(name, preHookMap, postHookMap)
                    actionMap[name].append(_ActionItem(index, name, preHooks, postHooks, function))
                if managed:
                    if name in peerMap:
                        actionMap[name].append(_ManagedActionItem(index, name, peerMap[name]))
        actionMap['all'] = actionMap['collect'] + actionMap['stage'] + actionMap['store'] + actionMap['purge']
        return actionMap
    @staticmethod
    def _buildPeerMap(options, peers):
        """
        Build a mapping from action name to list of remote peers.

        There will be one entry in the mapping for each managed action.  If there
        are no managed peers, the mapping will be empty.  Only managed actions
        will be listed in the mapping.

        @param options: Option configuration (i.e. config.options)
        @param peers: Peers configuration (i.e. config.peers)

        @return: Dictionary mapping action name to list of C{RemotePeer} objects.
        """
        peerMap = {}
        if peers is not None:
            if peers.remotePeers is not None:
                for peer in peers.remotePeers:
                    if peer.managed:
                        remoteUser = _ActionSet._getRemoteUser(options, peer)
                        rshCommand = _ActionSet._getRshCommand(options, peer)
                        cbackCommand = _ActionSet._getCbackCommand(options, peer)
                        managedActions = _ActionSet._getManagedActions(options, peer)
                        remotePeer = RemotePeer(peer.name, None, options.workingDir, remoteUser, None,
                                                options.backupUser, rshCommand, cbackCommand)
                        if managedActions is not None:
                            for managedAction in managedActions:
                                if managedAction in peerMap:
                                    if remotePeer not in peerMap[managedAction]:
                                        peerMap[managedAction].append(remotePeer)
                                else:
                                    peerMap[managedAction] = [remotePeer, ]
        return peerMap
    @staticmethod
    def _deriveHooks(action, preHookDict, postHookDict):
        """
        Derive pre- and post-action hooks, if any, associated with named action.
        @param action: Name of action to look up
        @param preHookDict: Dictionary mapping action name to pre-action hooks
        @param postHookDict: Dictionary mapping action name to post-action hooks
        @return: Tuple (preHooks, postHooks) per mapping, with None values if there is no hook.
        """
        preHooks = None
        postHooks = None
        if action in preHookDict:
            preHooks = preHookDict[action]
        if action in postHookDict:
            postHooks = postHookDict[action]
        return (preHooks, postHooks)
    @staticmethod
    def _validateActions(actions, extensionNames):
        """
        Validate that the set of specified actions is sensible.

        Any specified action must either be a built-in action or must be among
        the extended actions defined in configuration.  The actions from within
        L{NONCOMBINE_ACTIONS} may not be combined with other actions.

        @param actions: Names of actions specified on the command-line.
        @param extensionNames: Names of extensions specified in configuration.

        @raise ValueError: If one or more configured actions are not valid.
        """
        if actions is None or actions == []:
            raise ValueError("No actions specified.")
        for action in actions:
            if action not in VALID_ACTIONS and action not in extensionNames:
                raise ValueError("Action [%s] is not a valid action or extended action." % action)
        for action in NONCOMBINE_ACTIONS:
            if action in actions and actions != [action, ]:
                raise ValueError("Action [%s] may not be combined with other actions." % action)
    @staticmethod
    def _buildActionSet(actions, actionMap):
        """
        Build set of actions to be executed.

        The set of actions is built in the proper order, so C{executeActions}
        can spin through the set without thinking about it.  Since we've already
        validated that the set of actions is sensible, we don't take any
        precautions here to make sure things are combined properly.  If the
        action is listed, it will be "scheduled" for execution.

        @param actions: Names of actions specified on the command-line.
        @param actionMap: Dictionary mapping action name to list of C{_ActionItem} objects.

        @return: Set of action items in proper order.
        """
        actionSet = []
        for action in actions:
            actionSet.extend(actionMap[action])
        actionSet.sort()  # sort the actions in order by index
        return actionSet
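C{_buildActionSet} depends on action items sorting by their execution index. That scheduling step can be sketched with a simple stand-in type; the names below are illustrative, not the real C{_ActionItem} class:

```python
from collections import namedtuple
from operator import attrgetter

# Stand-in for _ActionItem: only the fields that matter for ordering
ActionItem = namedtuple("ActionItem", ["index", "name"])

# A hypothetical action map, keyed by action name (indices are made up)
actionMap = {
    "store":   [ActionItem(400, "store")],
    "collect": [ActionItem(100, "collect")],
    "stage":   [ActionItem(200, "stage")],
}

def build_action_set(actions, actionMap):
    """Flatten the requested actions into one list, sorted by execution index."""
    actionSet = []
    for action in actions:
        actionSet.extend(actionMap[action])
    actionSet.sort(key=attrgetter("index"))  # schedule by index, not request order
    return actionSet

# Actions named in any order on the command line come out in index order
ordered = build_action_set(["store", "collect", "stage"], actionMap)
```

Sorting by index is what lets the user name actions "in any arbitrary order" on the command line, as the usage text promises.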
    def executeActions(self, configPath, options, config):
        """
        Executes all actions and extended actions, in the proper order.

        Each action (whether built-in or extension) is executed in an identical
        manner.  The built-in actions will use only the options and config
        values.  We also pass in the config path so that extension modules can
        re-parse configuration if they want to, to add in extra information.

        @param configPath: Path to configuration file on disk.
        @param options: Command-line options to be passed to action functions.
        @param config: Parsed configuration to be passed to action functions.

        @raise Exception: If there is a problem executing the actions.
        """
        logger.debug("Executing local actions.")
        for actionItem in self.actionSet:
            actionItem.executeAction(configPath, options, config)
    @staticmethod
    def _getRemoteUser(options, remotePeer):
        """
        Gets the remote user associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: Name of remote user associated with remote peer.
        """
        if remotePeer.remoteUser is None:
            return options.backupUser
        return remotePeer.remoteUser
    @staticmethod
    def _getRshCommand(options, remotePeer):
        """
        Gets the RSH command associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: RSH command associated with remote peer.
        """
        if remotePeer.rshCommand is None:
            return options.rshCommand
        return remotePeer.rshCommand
    @staticmethod
    def _getCbackCommand(options, remotePeer):
        """
        Gets the cback command associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: cback command associated with remote peer.
        """
        if remotePeer.cbackCommand is None:
            return options.cbackCommand
        return remotePeer.cbackCommand
    @staticmethod
    def _getManagedActions(options, remotePeer):
        """
        Gets the managed actions list associated with a remote peer.
        Use peer's if possible, otherwise take from options section.
        @param options: OptionsConfig object, as from config.options
        @param remotePeer: Configuration-style remote peer object.
        @return: Set of managed actions associated with remote peer.
        """
        if remotePeer.managedActions is None:
            return options.managedActions
        return remotePeer.managedActions
#######################################################################
# Utility functions
#######################################################################

####################
# _usage() function
####################

def _usage(fd=sys.stderr):
    """
    Prints usage information for the cback script.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write(" Usage: cback [switches] action(s)\n")
    fd.write("\n")
    fd.write(" The following switches are accepted:\n")
    fd.write("\n")
    fd.write("   -h, --help         Display this usage/help listing\n")
    fd.write("   -V, --version      Display version information\n")
    fd.write("   -b, --verbose      Print verbose output as well as logging to disk\n")
    fd.write("   -q, --quiet        Run quietly (display no output to the screen)\n")
    fd.write("   -c, --config       Path to config file (default: %s)\n" % DEFAULT_CONFIG)
    fd.write("   -f, --full         Perform a full backup, regardless of configuration\n")
    fd.write("   -M, --managed      Include managed clients when executing actions\n")
    fd.write("   -N, --managed-only Include ONLY managed clients when executing actions\n")
    fd.write("   -l, --logfile      Path to logfile (default: %s)\n" % DEFAULT_LOGFILE)
    fd.write("   -o, --owner        Logfile ownership, user:group (default: %s:%s)\n" % (DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1]))
    fd.write("   -m, --mode         Octal logfile permissions mode (default: %o)\n" % DEFAULT_MODE)
    fd.write("   -O, --output       Record some sub-command (i.e. cdrecord) output to the log\n")
    fd.write("   -d, --debug        Write debugging information to the log (implies --output)\n")
    fd.write("   -s, --stack        Dump a Python stack trace instead of swallowing exceptions\n")  # exactly 80 characters in width!
    fd.write("   -D, --diagnostics  Print runtime diagnostics to the screen and exit\n")
    fd.write("\n")
    fd.write(" The following actions may be specified:\n")
    fd.write("\n")
    fd.write("   all         Take all normal actions (collect, stage, store, purge)\n")
    fd.write("   collect     Take the collect action\n")
    fd.write("   stage       Take the stage action\n")
    fd.write("   store       Take the store action\n")
    fd.write("   purge       Take the purge action\n")
    fd.write("   rebuild     Rebuild \"this week's\" disc if possible\n")
    fd.write("   validate    Validate configuration only\n")
    fd.write("   initialize  Initialize media for use with Cedar Backup\n")
    fd.write("\n")
    fd.write(" You may also specify extended actions that have been defined in\n")
    fd.write(" configuration.\n")
    fd.write("\n")
    fd.write(" You must specify at least one action to take.  More than one of\n")
    fd.write(" the \"collect\", \"stage\", \"store\" or \"purge\" actions and/or\n")
    fd.write(" extended actions may be specified in any arbitrary order; they\n")
    fd.write(" will be executed in a sensible order.  The \"all\", \"rebuild\",\n")
    fd.write(" \"validate\", and \"initialize\" actions may not be combined with\n")
    fd.write(" other actions.\n")
    fd.write("\n")
######################
# _version() function
######################

def _version(fd=sys.stdout):
    """
    Prints version information for the cback script.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write(" Cedar Backup version %s, released %s.\n" % (VERSION, DATE))
    fd.write("\n")
    fd.write(" Copyright (c) %s %s <%s>.\n" % (COPYRIGHT, AUTHOR, EMAIL))
    fd.write(" See CREDITS for a list of included code and other contributors.\n")
    fd.write(" This is free software; there is NO warranty.  See the\n")
    fd.write(" GNU General Public License version 2 for copying conditions.\n")
    fd.write("\n")
    fd.write(" Use the --help option for usage information.\n")
    fd.write("\n")
##########################
# _diagnostics() function
##########################

def _diagnostics(fd=sys.stdout):
    """
    Prints runtime diagnostics information.
    @param fd: File descriptor used to print information.
    @note: The C{fd} is used rather than C{print} to facilitate unit testing.
    """
    fd.write("\n")
    fd.write("Diagnostics:\n")
    fd.write("\n")
    Diagnostics().printDiagnostics(fd=fd, prefix="   ")
    fd.write("\n")
##########################
# setupLogging() function
##########################

def setupLogging(options):
    """
    Set up logging based on command-line options.

    There are two kinds of logging: flow logging and output logging.  Output
    logging contains information about system commands executed by Cedar Backup,
    for instance the calls to C{mkisofs} or C{mount}, etc.  Flow logging
    contains error and informational messages used to understand program flow.
    Flow log messages and output log messages are written to two different
    logger targets (C{CedarBackup2.log} and C{CedarBackup2.output}).  Flow log
    messages are written at the ERROR, INFO and DEBUG log levels, while output
    log messages are generally only written at the INFO log level.

    By default, output logging is disabled.  When the C{options.output} or
    C{options.debug} flags are set, output logging will be written to the
    configured logfile.  Output logging is never written to the screen.

    By default, flow logging is enabled at the ERROR level to the screen and at
    the INFO level to the configured logfile.  If the C{options.quiet} flag is
    set, flow logging is enabled at the INFO level to the configured logfile
    only (i.e. no output will be sent to the screen).  If the C{options.verbose}
    flag is set, flow logging is enabled at the INFO level to both the screen
    and the configured logfile.  If the C{options.debug} flag is set, flow
    logging is enabled at the DEBUG level to both the screen and the configured
    logfile.

    @param options: Command-line options.
    @type options: L{Options} object

    @return: Path to logfile on disk.
    """
    logfile = _setupLogfile(options)
    _setupFlowLogging(logfile, options)
    _setupOutputLogging(logfile, options)
    return logfile
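The flow/output split described above can be sketched with the standard C{logging} module alone: two named loggers whose handlers carry the actual thresholds. The C{ListHandler} class below is an illustrative stand-in for the real file and stream handlers:

```python
import logging

class ListHandler(logging.Handler):
    """Collects formatted log messages in a list, for demonstration only."""
    def __init__(self, level=logging.NOTSET):
        logging.Handler.__init__(self, level)
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

# Flow logger: the logger itself passes everything; handlers filter by level
flowLogger = logging.getLogger("CedarBackup2.log")
flowLogger.setLevel(logging.DEBUG)
screen = ListHandler(logging.ERROR)      # default: only errors reach the "screen"
flowLogger.addHandler(screen)

# Output logger: a CRITICAL threshold effectively disables it by default
outputLogger = logging.getLogger("CedarBackup2.output")
outputLogger.setLevel(logging.DEBUG)
disk = ListHandler(logging.CRITICAL)
outputLogger.addHandler(disk)

flowLogger.info("flow info")             # below ERROR: filtered by the handler
flowLogger.error("flow error")           # captured
outputLogger.info("command output")      # filtered while output logging is off
```

Setting the logger level to DEBUG and filtering at the handlers is what lets one logger feed a verbose logfile and a quiet screen at the same time.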
def _setupLogfile(options):
    """
    Sets up and creates logfile as needed.

    If the logfile already exists on disk, it will be left as-is, under the
    assumption that it was created with appropriate ownership and permissions.
    If the logfile does not exist on disk, it will be created as an empty file.
    Ownership and permissions will remain at their defaults unless user/group
    and/or mode are set in the options.  We ignore errors setting the indicated
    user and group.

    @note: This function is vulnerable to a race condition.  If the log file
    does not exist when the function is run, it will attempt to create the file
    as safely as possible (using C{O_CREAT}).  If two processes attempt to
    create the file at the same time, then one of them will fail.  In practice,
    this shouldn't really be a problem, but it might happen occasionally if two
    instances of cback run concurrently or if cback collides with logrotate or
    something.

    @param options: Command-line options.

    @return: Path to logfile on disk.
    """
    if options.logfile is None:
        logfile = DEFAULT_LOGFILE
    else:
        logfile = options.logfile
    if not os.path.exists(logfile):
        mode = DEFAULT_MODE if options.mode is None else options.mode
        orig = os.umask(0)  # Per os.open(), "When computing mode, the current umask value is first masked out"
        try:
            fd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND, mode)
            with os.fdopen(fd, "a+") as f:
                f.write("")
        finally:
            os.umask(orig)
        try:
            if options.owner is None or len(options.owner) < 2:
                (uid, gid) = getUidGid(DEFAULT_OWNERSHIP[0], DEFAULT_OWNERSHIP[1])
            else:
                (uid, gid) = getUidGid(options.owner[0], options.owner[1])
            os.chown(logfile, uid, gid)
        except:
            pass
    return logfile
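The umask handling is the subtle part of C{_setupLogfile}: C{os.open} masks the requested mode against the process umask, so the umask is cleared first and restored in a C{finally} block. A minimal sketch of just that step, with an illustrative filename:

```python
import os
import stat
import tempfile

def create_logfile(logfile, mode=0o640):
    """Create logfile with exactly `mode` if missing; leave an existing file alone."""
    if not os.path.exists(logfile):
        orig = os.umask(0)  # otherwise the umask would strip bits out of `mode`
        try:
            fd = os.open(logfile, os.O_RDWR | os.O_CREAT | os.O_APPEND, mode)
            os.close(fd)
        finally:
            os.umask(orig)  # always restore the process umask
    return logfile

# Demonstration in a fresh temporary directory
tmpdir = tempfile.mkdtemp()
path = create_logfile(os.path.join(tmpdir, "cback.log"), 0o640)
perms = stat.S_IMODE(os.stat(path).st_mode)
```

Without the C{os.umask(0)} call, a typical umask of C{022} would silently turn a requested C{0o640} into C{0o640 & ~0o022}, which is why the real code documents the quote from the C{os.open} manual.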
def _setupFlowLogging(logfile, options):
    """
    Sets up flow logging.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    flowLogger = logging.getLogger("CedarBackup2.log")
    flowLogger.setLevel(logging.DEBUG)  # let the logger see all messages
    _setupDiskFlowLogging(flowLogger, logfile, options)
    _setupScreenFlowLogging(flowLogger, options)
def _setupOutputLogging(logfile, options):
    """
    Sets up command output logging.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    outputLogger = logging.getLogger("CedarBackup2.output")
    outputLogger.setLevel(logging.DEBUG)  # let the logger see all messages
    _setupDiskOutputLogging(outputLogger, logfile, options)
def _setupDiskFlowLogging(flowLogger, logfile, options):
    """
    Sets up on-disk flow logging.
    @param flowLogger: Python flow logger object.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    formatter = logging.Formatter(fmt=DISK_LOG_FORMAT, datefmt=DATE_FORMAT)
    handler = logging.FileHandler(logfile, mode="a")
    handler.setFormatter(formatter)
    if options.debug:
        handler.setLevel(logging.DEBUG)
    else:
        handler.setLevel(logging.INFO)
    flowLogger.addHandler(handler)
def _setupScreenFlowLogging(flowLogger, options):
    """
    Sets up on-screen flow logging.
    @param flowLogger: Python flow logger object.
    @param options: Command-line options.
    """
    formatter = logging.Formatter(fmt=SCREEN_LOG_FORMAT)
    handler = logging.StreamHandler(SCREEN_LOG_STREAM)
    handler.setFormatter(formatter)
    if options.quiet:
        handler.setLevel(logging.CRITICAL)  # effectively turn it off
    elif options.verbose:
        if options.debug:
            handler.setLevel(logging.DEBUG)
        else:
            handler.setLevel(logging.INFO)
    else:
        handler.setLevel(logging.ERROR)
    flowLogger.addHandler(handler)
def _setupDiskOutputLogging(outputLogger, logfile, options):
    """
    Sets up on-disk command output logging.
    @param outputLogger: Python command output logger object.
    @param logfile: Path to logfile on disk.
    @param options: Command-line options.
    """
    formatter = logging.Formatter(fmt=DISK_OUTPUT_FORMAT, datefmt=DATE_FORMAT)
    handler = logging.FileHandler(logfile, mode="a")
    handler.setFormatter(formatter)
    if options.debug or options.output:
        handler.setLevel(logging.DEBUG)
    else:
        handler.setLevel(logging.CRITICAL)  # effectively turn it off
    outputLogger.addHandler(handler)
###############################
# setupPathResolver() function
###############################

def setupPathResolver(config):
    """
    Set up the path resolver singleton based on configuration.

    Cedar Backup's path resolver is implemented in terms of a singleton, the
    L{PathResolverSingleton} class.  This function takes options configuration,
    converts it into the dictionary form needed by the singleton, and then
    initializes the singleton.  After that, any function that needs to resolve
    the path of a command can use the singleton.

    @param config: Configuration
    @type config: L{Config} object
    """
    mapping = {}
    if config.options.overrides is not None:
        for override in config.options.overrides:
            mapping[override.command] = override.absolutePath
    singleton = PathResolverSingleton()
    singleton.fill(mapping)
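The singleton pattern behind L{PathResolverSingleton} can be sketched as follows; the class and method names here are illustrative stand-ins, not the real API:

```python
class PathResolver(object):
    """Minimal command-path resolver singleton sketch (not the real class)."""
    _instance = None

    def __new__(cls):
        # Every construction returns the same shared instance
        if cls._instance is None:
            cls._instance = super(PathResolver, cls).__new__(cls)
            cls._instance._mapping = {}
        return cls._instance

    def fill(self, mapping):
        """Replace the command-to-path mapping wholesale, as from configuration."""
        self._mapping = dict(mapping)

    def lookup(self, command, default=None):
        """Return the configured absolute path, or a default (e.g. the bare name)."""
        return self._mapping.get(command, default)

# Fill once from "configuration", then resolve from anywhere in the program
PathResolver().fill({"cdrecord": "/usr/local/bin/cdrecord"})
resolved = PathResolver().lookup("cdrecord", "cdrecord")
fallback = PathResolver().lookup("mkisofs", "mkisofs")
```

Falling back to the bare command name means unconfigured commands are simply found via C{$PATH}, which matches the override semantics described above.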
#########################################################################
# Options class definition
#########################################################################

class Options(object):

    ######################
    # Class documentation
    ######################

    """
    Class representing command-line options for the cback script.

    The C{Options} class is a Python object representation of the command-line
    options of the cback script.

    The object representation is two-way: a command line string or a list of
    command line arguments can be used to create an C{Options} object, and then
    changes to the object can be propagated back to a list of command-line
    arguments or to a command-line string.  An C{Options} object can even be
    created from scratch programmatically (if you have a need for that).

    There are two main levels of validation in the C{Options} class.  The first
    is field-level validation.  Field-level validation comes into play when a
    given field in an object is assigned to or updated.  We use Python's
    C{property} functionality to enforce specific validations on field values,
    and in some places we even use customized list classes to enforce
    validations on list members.  You should expect to catch a C{ValueError}
    exception when making assignments to fields if you are programmatically
    filling an object.

    The second level of validation is post-completion validation.  Certain
    validations don't make sense until an object representation of options is
    fully "complete".  We don't want these validations to apply all of the
    time, because it would make building up a valid object from scratch a real
    pain.  For instance, we might have to do things in the right order to keep
    from throwing exceptions, etc.

    All of these post-completion validations are encapsulated in the
    L{Options.validate} method.  This method can be called at any time by a
    client, and will always be called immediately after creating an C{Options}
    object from a command line and before exporting an C{Options} object back
    to a command line.  This way, we get acceptable ease-of-use but we also
    don't accept or emit invalid command lines.

    @note: Lists within this class are "unordered" for equality comparisons.

    @sort: __init__, __repr__, __str__, __cmp__
    """

    ##############
    # Constructor
    ##############
    def __init__(self, argumentList=None, argumentString=None, validate=True):
        """
        Initializes an options object.

        If you initialize the object without passing either C{argumentList} or
        C{argumentString}, the object will be empty and will be invalid until it
        is filled in properly.

        No reference to the original arguments is saved off by this class.  Once
        the data has been parsed (successfully or not) this original information
        is discarded.

        The argument list is assumed to be a list of arguments, not including
        the name of the command, something like C{sys.argv[1:]}.  If you pass
        C{sys.argv} instead, things are not going to work.

        The argument string will be parsed into an argument list by the
        L{util.splitCommandLine} function (see the documentation for that
        function for some important notes about its limitations).  There is an
        assumption that the resulting list will be equivalent to
        C{sys.argv[1:]}, just like C{argumentList}.

        Unless the C{validate} argument is C{False}, the L{Options.validate}
        method will be called (with its default arguments) after successfully
        parsing any passed-in command line.  This validation ensures that
        appropriate actions, etc. have been specified.  Keep in mind that even
        if C{validate} is C{False}, it might not be possible to parse the
        passed-in command line, so an exception might still be raised.

        @note: The command line format is specified by the L{_usage} function.
        Call L{_usage} to see a usage statement for the cback script.

        @note: It is strongly suggested that the C{validate} option always be
        set to C{True} (the default) unless there is a specific need to read in
        invalid command line arguments.

        @param argumentList: Command line for a program.
        @type argumentList: List of arguments, i.e. C{sys.argv[1:]}

        @param argumentString: Command line for a program.
        @type argumentString: String, i.e. "cback --verbose stage store"

        @param validate: Validate the command line after parsing it.
        @type validate: Boolean true/false.

        @raise getopt.GetoptError: If the command-line arguments could not be parsed.
        @raise ValueError: If the command-line arguments are invalid.
        """
        self._help = False
        self._version = False
        self._verbose = False
        self._quiet = False
        self._config = None
        self._full = False
        self._managed = False
        self._managedOnly = False
        self._logfile = None
        self._owner = None
        self._mode = None
        self._output = False
        self._debug = False
        self._stacktrace = False
        self._diagnostics = False
        self._actions = None
        self.actions = []  # initialize to an empty list; remainder are OK
        if argumentList is not None and argumentString is not None:
            raise ValueError("Use either argumentList or argumentString, but not both.")
        if argumentString is not None:
            argumentList = splitCommandLine(argumentString)
        if argumentList is not None:
            self._parseArgumentList(argumentList)
        if validate:
            self.validate()

    #########################
    # String representations
    #########################
    def __repr__(self):
        """
        Official string representation for class instance.
        """
        return self.buildArgumentString(validate=False)
    def __str__(self):
        """
        Informal string representation for class instance.
        """
        return self.__repr__()

    #############################
    # Standard comparison method
    #############################
    def __cmp__(self, other):
        """
        Definition of the comparison operator for this class.
        Lists within this class are "unordered" for equality comparisons.
        @param other: Other object to compare to.
        @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
        """
        if other is None:
            return 1
        if self.help != other.help:
            if self.help < other.help:
                return -1
            else:
                return 1
        if self.version != other.version:
            if self.version < other.version:
                return -1
            else:
                return 1
        if self.verbose != other.verbose:
            if self.verbose < other.verbose:
                return -1
            else:
                return 1
        if self.quiet != other.quiet:
            if self.quiet < other.quiet:
                return -1
            else:
                return 1
        if self.config != other.config:
            if self.config < other.config:
                return -1
            else:
                return 1
        if self.full != other.full:
            if self.full < other.full:
                return -1
            else:
                return 1
        if self.managed != other.managed:
            if self.managed < other.managed:
                return -1
            else:
                return 1
        if self.managedOnly != other.managedOnly:
            if self.managedOnly < other.managedOnly:
                return -1
            else:
                return 1
        if self.logfile != other.logfile:
            if self.logfile < other.logfile:
                return -1
            else:
                return 1
        if self.owner != other.owner:
            if self.owner < other.owner:
                return -1
            else:
                return 1
        if self.mode != other.mode:
            if self.mode < other.mode:
                return -1
            else:
                return 1
        if self.output != other.output:
            if self.output < other.output:
                return -1
            else:
                return 1
        if self.debug != other.debug:
            if self.debug < other.debug:
                return -1
            else:
                return 1
        if self.stacktrace != other.stacktrace:
            if self.stacktrace < other.stacktrace:
                return -1
            else:
                return 1
        if self.diagnostics != other.diagnostics:
            if self.diagnostics < other.diagnostics:
                return -1
            else:
                return 1
        if self.actions != other.actions:
            if self.actions < other.actions:
                return -1
            else:
                return 1
        return 0
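The field-by-field C{__cmp__} above is the Python 2 idiom. The same ordering can be expressed more compactly by comparing tuples of field values, sketched here with a reduced field set (plain dicts stand in for C{Options} objects; this is not the class's actual implementation):

```python
def compare_options(a, b):
    """Return -1/0/1 like __cmp__, comparing fields in declaration order."""
    key = lambda o: (o["help"], o["version"], o["verbose"], o["actions"])
    ka, kb = key(a), key(b)
    return (ka > kb) - (ka < kb)  # cmp() equivalent that also works in Python 3

a = {"help": False, "version": False, "verbose": True,  "actions": ["collect"]}
b = {"help": False, "version": False, "verbose": False, "actions": ["collect"]}
```

Tuple comparison short-circuits on the first unequal field, which is exactly the cascade of C{if} blocks above, just written once.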
    #############
    # Properties
    #############
    def _setHelp(self, value):
        """
        Property target used to set the help flag.
        No validations, but we normalize the value to C{True} or C{False}.
        """
        if value:
            self._help = True
        else:
            self._help = False

    def _getHelp(self):
        """
        Property target used to get the help flag.
        """
        return self._help
    1448 - def _setVersion(self, value):
    1449 """ 1450 Property target used to set the version flag. 1451 No validations, but we normalize the value to C{True} or C{False}. 1452 """ 1453 if value: 1454 self._version = True 1455 else: 1456 self._version = False
    1457
    1458 - def _getVersion(self):
    1459 """ 1460 Property target used to get the version flag. 1461 """ 1462 return self._version
    1463
    1464 - def _setVerbose(self, value):
    1465 """ 1466 Property target used to set the verbose flag. 1467 No validations, but we normalize the value to C{True} or C{False}. 1468 """ 1469 if value: 1470 self._verbose = True 1471 else: 1472 self._verbose = False
    1473
    1474 - def _getVerbose(self):
    1475 """ 1476 Property target used to get the verbose flag. 1477 """ 1478 return self._verbose
    1479
    1480 - def _setQuiet(self, value):
    1481 """ 1482 Property target used to set the quiet flag. 1483 No validations, but we normalize the value to C{True} or C{False}. 1484 """ 1485 if value: 1486 self._quiet = True 1487 else: 1488 self._quiet = False
    1489
    1490 - def _getQuiet(self):
    1491 """ 1492 Property target used to get the quiet flag. 1493 """ 1494 return self._quiet
    1495
    1496 - def _setConfig(self, value):
    1497 """ 1498 Property target used to set the config parameter. 1499 """ 1500 if value is not None: 1501 if len(value) < 1: 1502 raise ValueError("The config parameter must be a non-empty string.") 1503 self._config = value
    1504
    1505 - def _getConfig(self):
    1506 """ 1507 Property target used to get the config parameter. 1508 """ 1509 return self._config
    1510
    1511 - def _setFull(self, value):
    1512 """ 1513 Property target used to set the full flag. 1514 No validations, but we normalize the value to C{True} or C{False}. 1515 """ 1516 if value: 1517 self._full = True 1518 else: 1519 self._full = False
    1520
    1521 - def _getFull(self):
    1522 """ 1523 Property target used to get the full flag. 1524 """ 1525 return self._full
    1526
    1527 - def _setManaged(self, value):
    1528 """ 1529 Property target used to set the managed flag. 1530 No validations, but we normalize the value to C{True} or C{False}. 1531 """ 1532 if value: 1533 self._managed = True 1534 else: 1535 self._managed = False
    1536
    1537 - def _getManaged(self):
    1538 """ 1539 Property target used to get the managed flag. 1540 """ 1541 return self._managed
    1542
    1543 - def _setManagedOnly(self, value):
    1544 """ 1545 Property target used to set the managedOnly flag. 1546 No validations, but we normalize the value to C{True} or C{False}. 1547 """ 1548 if value: 1549 self._managedOnly = True 1550 else: 1551 self._managedOnly = False
    1552
    1553 - def _getManagedOnly(self):
    1554 """ 1555 Property target used to get the managedOnly flag. 1556 """ 1557 return self._managedOnly
    1558
    1559 - def _setLogfile(self, value):
    1560 """ 1561 Property target used to set the logfile parameter. 1562 @raise ValueError: If the value cannot be encoded properly. 1563 """ 1564 if value is not None: 1565 if len(value) < 1: 1566 raise ValueError("The logfile parameter must be a non-empty string.") 1567 self._logfile = encodePath(value)
    1568
    1569 - def _getLogfile(self):
    1570 """ 1571 Property target used to get the logfile parameter. 1572 """ 1573 return self._logfile
    1574
    1575 - def _setOwner(self, value):
    1576 """ 1577 Property target used to set the owner parameter. 1578 If not C{None}, the owner must be a C{(user,group)} tuple or list. 1579 Strings (and inherited children of strings) are explicitly disallowed. 1580 The value will be normalized to a tuple. 1581 @raise ValueError: If the value is not valid. 1582 """ 1583 if value is None: 1584 self._owner = None 1585 else: 1586 if isinstance(value, str): 1587 raise ValueError("Must specify user and group tuple for owner parameter.") 1588 if len(value) != 2: 1589 raise ValueError("Must specify user and group tuple for owner parameter.") 1590 if len(value[0]) < 1 or len(value[1]) < 1: 1591 raise ValueError("User and group tuple values must be non-empty strings.") 1592 self._owner = (value[0], value[1])
    1593
    1594 - def _getOwner(self):
    1595 """ 1596 Property target used to get the owner parameter. 1597 The parameter is a tuple of C{(user, group)}. 1598 """ 1599 return self._owner
    1600
    1601 - def _setMode(self, value):
    1602 """ 1603 Property target used to set the mode parameter. 1604 """ 1605 if value is None: 1606 self._mode = None 1607 else: 1608 try: 1609 if isinstance(value, str): 1610 value = int(value, 8) 1611 else: 1612 value = int(value) 1613 except TypeError: 1614 raise ValueError("Mode must be an octal integer >= 0, i.e. 644.") 1615 if value < 0: 1616 raise ValueError("Mode must be an octal integer >= 0. i.e. 644.") 1617 self._mode = value
    1618
    1619 - def _getMode(self):
    1620 """ 1621 Property target used to get the mode parameter. 1622 """ 1623 return self._mode
    1624
    1625 - def _setOutput(self, value):
    1626 """ 1627 Property target used to set the output flag. 1628 No validations, but we normalize the value to C{True} or C{False}. 1629 """ 1630 if value: 1631 self._output = True 1632 else: 1633 self._output = False
    1634
    1635 - def _getOutput(self):
    1636 """ 1637 Property target used to get the output flag. 1638 """ 1639 return self._output
    1640
    1641 - def _setDebug(self, value):
    1642 """ 1643 Property target used to set the debug flag. 1644 No validations, but we normalize the value to C{True} or C{False}. 1645 """ 1646 if value: 1647 self._debug = True 1648 else: 1649 self._debug = False
    1650
    1651 - def _getDebug(self):
    1652 """ 1653 Property target used to get the debug flag. 1654 """ 1655 return self._debug
    1656
    1657 - def _setStacktrace(self, value):
    1658 """ 1659 Property target used to set the stacktrace flag. 1660 No validations, but we normalize the value to C{True} or C{False}. 1661 """ 1662 if value: 1663 self._stacktrace = True 1664 else: 1665 self._stacktrace = False
    1666
    1667 - def _getStacktrace(self):
    1668 """ 1669 Property target used to get the stacktrace flag. 1670 """ 1671 return self._stacktrace
    1672
    1673 - def _setDiagnostics(self, value):
    1674 """ 1675 Property target used to set the diagnostics flag. 1676 No validations, but we normalize the value to C{True} or C{False}. 1677 """ 1678 if value: 1679 self._diagnostics = True 1680 else: 1681 self._diagnostics = False
    1682
    1683 - def _getDiagnostics(self):
    1684 """ 1685 Property target used to get the diagnostics flag. 1686 """ 1687 return self._diagnostics
    1688
    1689 - def _setActions(self, value):
    1690 """ 1691 Property target used to set the actions list. 1692 We don't restrict the contents of actions. They're validated somewhere else. 1693 @raise ValueError: If the value is not valid. 1694 """ 1695 if value is None: 1696 self._actions = None 1697 else: 1698 try: 1699 saved = self._actions 1700 self._actions = [] 1701 self._actions.extend(value) 1702 except Exception, e: 1703 self._actions = saved 1704 raise e
    1705
    1706 - def _getActions(self):
    1707 """ 1708 Property target used to get the actions list. 1709 """ 1710 return self._actions
    1711 1712 help = property(_getHelp, _setHelp, None, "Command-line help (C{-h,--help}) flag.") 1713 version = property(_getVersion, _setVersion, None, "Command-line version (C{-V,--version}) flag.") 1714 verbose = property(_getVerbose, _setVerbose, None, "Command-line verbose (C{-b,--verbose}) flag.") 1715 quiet = property(_getQuiet, _setQuiet, None, "Command-line quiet (C{-q,--quiet}) flag.") 1716 config = property(_getConfig, _setConfig, None, "Command-line configuration file (C{-c,--config}) parameter.") 1717 full = property(_getFull, _setFull, None, "Command-line full-backup (C{-f,--full}) flag.") 1718 managed = property(_getManaged, _setManaged, None, "Command-line managed (C{-M,--managed}) flag.") 1719 managedOnly = property(_getManagedOnly, _setManagedOnly, None, "Command-line managed-only (C{-N,--managed-only}) flag.") 1720 logfile = property(_getLogfile, _setLogfile, None, "Command-line logfile (C{-l,--logfile}) parameter.") 1721 owner = property(_getOwner, _setOwner, None, "Command-line owner (C{-o,--owner}) parameter, as tuple C{(user,group)}.") 1722 mode = property(_getMode, _setMode, None, "Command-line mode (C{-m,--mode}) parameter.") 1723 output = property(_getOutput, _setOutput, None, "Command-line output (C{-O,--output}) flag.") 1724 debug = property(_getDebug, _setDebug, None, "Command-line debug (C{-d,--debug}) flag.") 1725 stacktrace = property(_getStacktrace, _setStacktrace, None, "Command-line stacktrace (C{-s,--stack}) flag.") 1726 diagnostics = property(_getDiagnostics, _setDiagnostics, None, "Command-line diagnostics (C{-D,--diagnostics}) flag.") 1727 actions = property(_getActions, _setActions, None, "Command-line actions list.") 1728 1729 1730 ################## 1731 # Utility methods 1732 ################## 1733
   def validate(self):
      """
      Validates command-line options represented by the object.

      Unless C{--help}, C{--version} or C{--diagnostics} are supplied, at
      least one action must be specified.  Other validations (as for allowed
      values for particular options) are taken care of at assignment time by
      the properties functionality.

      @note: The command line format is specified by the L{_usage} function.
      Call L{_usage} to see a usage statement for the cback script.

      @raise ValueError: If one of the validations fails.
      """
      if not self.help and not self.version and not self.diagnostics:
         if self.actions is None or len(self.actions) == 0:
            raise ValueError("At least one action must be specified.")
      if self.managed and self.managedOnly:
         raise ValueError("The --managed and --managed-only options may not be combined.")

   def buildArgumentList(self, validate=True):
      """
      Extracts options into a list of command line arguments.

      The original order of the various arguments (if, indeed, the object was
      initialized with a command-line) is not preserved in this generated
      argument list.  Besides that, the argument list is normalized to use the
      long option names (i.e. C{--version} rather than C{-V}).  The resulting
      list will be suitable for passing back to the constructor in the
      C{argumentList} parameter.  Unlike L{buildArgumentString}, string
      arguments are not quoted here, because there is no need for it.

      Unless the C{validate} parameter is C{False}, the L{Options.validate}
      method will be called (with its default arguments) against the options
      before extracting the command line.  If the options are not valid, then
      an argument list will not be extracted.

      @note: It is strongly suggested that the C{validate} option always be
      set to C{True} (the default) unless there is a specific need to extract
      an invalid command line.

      @param validate: Validate the options before extracting the command line.
      @type validate: Boolean true/false.

      @return: List representation of command-line arguments.
      @raise ValueError: If options within the object are invalid.
      """
      if validate:
         self.validate()
      argumentList = []
      if self.help:
         argumentList.append("--help")
      if self.version:
         argumentList.append("--version")
      if self.verbose:
         argumentList.append("--verbose")
      if self.quiet:
         argumentList.append("--quiet")
      if self.config is not None:
         argumentList.append("--config")
         argumentList.append(self.config)
      if self.full:
         argumentList.append("--full")
      if self.managed:
         argumentList.append("--managed")
      if self.managedOnly:
         argumentList.append("--managed-only")
      if self.logfile is not None:
         argumentList.append("--logfile")
         argumentList.append(self.logfile)
      if self.owner is not None:
         argumentList.append("--owner")
         argumentList.append("%s:%s" % (self.owner[0], self.owner[1]))
      if self.mode is not None:
         argumentList.append("--mode")
         argumentList.append("%o" % self.mode)
      if self.output:
         argumentList.append("--output")
      if self.debug:
         argumentList.append("--debug")
      if self.stacktrace:
         argumentList.append("--stack")
      if self.diagnostics:
         argumentList.append("--diagnostics")
      if self.actions is not None:
         for action in self.actions:
            argumentList.append(action)
      return argumentList

   def buildArgumentString(self, validate=True):
      """
      Extracts options into a string of command-line arguments.

      The original order of the various arguments (if, indeed, the object was
      initialized with a command-line) is not preserved in this generated
      argument string.  Besides that, the argument string is normalized to use
      the long option names (i.e. C{--version} rather than C{-V}) and to quote
      all string arguments with double quotes (C{"}).  The resulting string
      will be suitable for passing back to the constructor in the
      C{argumentString} parameter.

      Unless the C{validate} parameter is C{False}, the L{Options.validate}
      method will be called (with its default arguments) against the options
      before extracting the command line.  If the options are not valid, then
      an argument string will not be extracted.

      @note: It is strongly suggested that the C{validate} option always be
      set to C{True} (the default) unless there is a specific need to extract
      an invalid command line.

      @param validate: Validate the options before extracting the command line.
      @type validate: Boolean true/false.

      @return: String representation of command-line arguments.
      @raise ValueError: If options within the object are invalid.
      """
      if validate:
         self.validate()
      argumentString = ""
      if self.help:
         argumentString += "--help "
      if self.version:
         argumentString += "--version "
      if self.verbose:
         argumentString += "--verbose "
      if self.quiet:
         argumentString += "--quiet "
      if self.config is not None:
         argumentString += "--config \"%s\" " % self.config
      if self.full:
         argumentString += "--full "
      if self.managed:
         argumentString += "--managed "
      if self.managedOnly:
         argumentString += "--managed-only "
      if self.logfile is not None:
         argumentString += "--logfile \"%s\" " % self.logfile
      if self.owner is not None:
         argumentString += "--owner \"%s:%s\" " % (self.owner[0], self.owner[1])
      if self.mode is not None:
         argumentString += "--mode %o " % self.mode
      if self.output:
         argumentString += "--output "
      if self.debug:
         argumentString += "--debug "
      if self.stacktrace:
         argumentString += "--stack "
      if self.diagnostics:
         argumentString += "--diagnostics "
      if self.actions is not None:
         for action in self.actions:
            argumentString += "\"%s\" " % action
      return argumentString

   def _parseArgumentList(self, argumentList):
      """
      Internal method to parse a list of command-line arguments.

      Most of the validation we do here has to do with whether the arguments
      can be parsed and whether any values which exist are valid.  We don't do
      any validation as to whether required elements exist or whether elements
      exist in the proper combination (instead, that's the job of the
      L{validate} method).

      For any of the options which supply parameters, if the option is
      duplicated with long and short switches (i.e. C{-l} and C{--logfile})
      then the long switch is used.  If the same option is duplicated with the
      same switch (long or short), then the last entry on the command line is
      used.

      @param argumentList: List of arguments to a command.
      @type argumentList: List of arguments to a command, i.e. C{sys.argv[1:]}

      @raise ValueError: If the argument list cannot be successfully parsed.
      """
      switches = { }
      opts, self.actions = getopt.getopt(argumentList, SHORT_SWITCHES, LONG_SWITCHES)
      for o, a in opts:  # push the switches into a hash
         switches[o] = a
      if "-h" in switches or "--help" in switches:
         self.help = True
      if "-V" in switches or "--version" in switches:
         self.version = True
      if "-b" in switches or "--verbose" in switches:
         self.verbose = True
      if "-q" in switches or "--quiet" in switches:
         self.quiet = True
      if "-c" in switches:
         self.config = switches["-c"]
      if "--config" in switches:
         self.config = switches["--config"]
      if "-f" in switches or "--full" in switches:
         self.full = True
      if "-M" in switches or "--managed" in switches:
         self.managed = True
      if "-N" in switches or "--managed-only" in switches:
         self.managedOnly = True
      if "-l" in switches:
         self.logfile = switches["-l"]
      if "--logfile" in switches:
         self.logfile = switches["--logfile"]
      if "-o" in switches:
         self.owner = switches["-o"].split(":", 1)
      if "--owner" in switches:
         self.owner = switches["--owner"].split(":", 1)
      if "-m" in switches:
         self.mode = switches["-m"]
      if "--mode" in switches:
         self.mode = switches["--mode"]
      if "-O" in switches or "--output" in switches:
         self.output = True
      if "-d" in switches or "--debug" in switches:
         self.debug = True
      if "-s" in switches or "--stack" in switches:
         self.stacktrace = True
      if "-D" in switches or "--diagnostics" in switches:
         self.diagnostics = True


########################################################################
# Main routine
########################################################################

if __name__ == "__main__":
   result = cli()
   sys.exit(result)
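Taken together, C{_parseArgumentList} and C{buildArgumentList} form a round trip: short switches parse into object state, and the extracted list is normalized back to long-form switches. The sketch below illustrates that round trip with a hypothetical, trimmed-down C{Opts} stand-in (not the real Options API; only two switches are modeled, and it is written for Python 3 rather than the Python 2 this project targets):

```python
import getopt

class Opts:
    """Hypothetical stand-in for the Options class above; only
    --verbose/-b and --config/-c are modeled."""

    def __init__(self, argumentList=None):
        self.verbose = False
        self.config = None
        self.actions = []
        if argumentList is not None:
            self._parseArgumentList(argumentList)

    def _parseArgumentList(self, argumentList):
        # getopt returns (switch, value) pairs plus the leftover actions
        pairs, self.actions = getopt.getopt(argumentList, "bc:",
                                            ["verbose", "config="])
        switches = dict(pairs)  # last duplicate wins, as in the real parser
        if "-b" in switches or "--verbose" in switches:
            self.verbose = True
        if "-c" in switches:
            self.config = switches["-c"]
        if "--config" in switches:  # long switch wins over short
            self.config = switches["--config"]

    def buildArgumentList(self):
        # Normalize everything back to long-form switches
        argumentList = []
        if self.verbose:
            argumentList.append("--verbose")
        if self.config is not None:
            argumentList.extend(["--config", self.config])
        argumentList.extend(self.actions)
        return argumentList

opts = Opts(["-b", "-c", "/etc/cback.conf", "collect", "stage"])
print(opts.buildArgumentList())
# → ['--verbose', '--config', '/etc/cback.conf', 'collect', 'stage']
```

Note that feeding the extracted list back through the constructor is a fixed point: parsing the normalized long-form list yields the same list again, which is exactly what makes C{buildArgumentList} suitable for the C{argumentList} constructor parameter.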

CedarBackup2-2.26.5/doc/interface/help.html: Help
     

    API Documentation

    This document contains the API (Application Programming Interface) documentation for CedarBackup2. Documentation for the Python objects defined by the project is divided into separate pages for each package, module, and class. The API documentation also includes two pages containing information about the project as a whole: a trees page, and an index page.

    Object Documentation

    Each Package Documentation page contains:

    • A description of the package.
    • A list of the modules and sub-packages contained by the package.
    • A summary of the classes defined by the package.
    • A summary of the functions defined by the package.
    • A summary of the variables defined by the package.
    • A detailed description of each function defined by the package.
    • A detailed description of each variable defined by the package.

    Each Module Documentation page contains:

    • A description of the module.
    • A summary of the classes defined by the module.
    • A summary of the functions defined by the module.
    • A summary of the variables defined by the module.
    • A detailed description of each function defined by the module.
    • A detailed description of each variable defined by the module.

    Each Class Documentation page contains:

    • A class inheritance diagram.
    • A list of known subclasses.
    • A description of the class.
    • A summary of the methods defined by the class.
    • A summary of the instance variables defined by the class.
    • A summary of the class (static) variables defined by the class.
    • A detailed description of each method defined by the class.
    • A detailed description of each instance variable defined by the class.
    • A detailed description of each class (static) variable defined by the class.

    Project Documentation

    The Trees page contains the module and class hierarchies:

    • The module hierarchy lists every package and module, with modules grouped into packages. At the top level, and within each package, modules and sub-packages are listed alphabetically.
    • The class hierarchy lists every class, grouped by base class. If a class has more than one base class, then it will be listed under each base class. At the top level, and under each base class, classes are listed alphabetically.

    The Index page contains indices of terms and identifiers:

    • The term index lists every term indexed by any object's documentation. For each term, the index provides links to each place where the term is indexed.
    • The identifier index lists the (short) name of every package, module, class, method, function, variable, and parameter. For each identifier, the index provides a short description, and a link to its documentation.

    The Table of Contents

    The table of contents occupies the two frames on the left side of the window. The upper-left frame displays the project contents, and the lower-left frame displays the module contents:

[layout sketch: a "Project Contents" frame sits above a "Module Contents" frame, both to the left of the main "API Documentation" frame]

    The project contents frame contains a list of all packages and modules that are defined by the project. Clicking on an entry will display its contents in the module contents frame. Clicking on a special entry, labeled "Everything," will display the contents of the entire project.

    The module contents frame contains a list of every submodule, class, type, exception, function, and variable defined by a module or package. Clicking on an entry will display its documentation in the API documentation frame. Clicking on the name of the module, at the top of the frame, will display the documentation for the module itself.

    The "frames" and "no frames" buttons below the top navigation bar can be used to control whether the table of contents is displayed or not.

    The Navigation Bar

A navigation bar is located at the top and bottom of every page. It indicates what type of page you are currently viewing, and allows you to go to related pages. The following table describes the labels on the navigation bar. Note that some labels (such as [Parent]) are not displayed on all pages.

    Label      Highlighted when...       Links to...
    [Parent]   (never highlighted)       the parent of the current package
    [Package]  viewing a package         the package containing the current object
    [Module]   viewing a module          the module containing the current object
    [Class]    viewing a class           the class containing the current object
    [Trees]    viewing the trees page    the trees page
    [Index]    viewing the index page    the index page
    [Help]     viewing the help page     the help page

    The "show private" and "hide private" buttons below the top navigation bar can be used to control whether documentation for private objects is displayed. Private objects are usually defined as objects whose (short) names begin with a single underscore, but do not end with an underscore. For example, "_x", "__pprint", and "epydoc.epytext._tokenize" are private objects; but "re.sub", "__init__", and "type_" are not. However, if a module defines the "__all__" variable, then its contents are used to decide which objects are private.
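The underscore rule described above can be captured as a one-line predicate. This is a sketch of the rule exactly as stated here, not epydoc's actual implementation (which, as noted, also honors a module's "__all__" variable):

```python
def is_private(short_name):
    # Private: begins with a single underscore (or more) but does not
    # also end with an underscore (so "__init__" and "type_" are public).
    return short_name.startswith("_") and not short_name.endswith("_")

# Matches the examples in the text ("sub" is the short name of "re.sub"):
print([n for n in ["_x", "__pprint", "_tokenize", "sub", "__init__", "type_"]
       if is_private(n)])
# → ['_x', '__pprint', '_tokenize']
```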

    A timestamp below the bottom navigation bar indicates when each page was last updated.

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.subversion-module.html: subversion

    Module subversion


    Classes

    BDBRepository
    FSFSRepository
    LocalConfig
    Repository
    RepositoryDir
    SubversionConfig

    Functions

    backupBDBRepository
    backupFSFSRepository
    backupRepository
    executeAction
    getYoungestRevision

    Variables

    REVISION_PATH_EXTENSION
    SVNADMIN_COMMAND
    SVNLOOK_COMMAND
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.util-pysrc.html: CedarBackup2.util
    Package CedarBackup2 :: Module util

    Source Code for Module CedarBackup2.util

       1  # -*- coding: iso-8859-1 -*- 
       2  # vim: set ft=python ts=3 sw=3 expandtab: 
       3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
       4  # 
       5  #              C E D A R 
       6  #          S O L U T I O N S       "Software done right." 
       7  #           S O F T W A R E 
       8  # 
       9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      10  # 
      11  # Copyright (c) 2004-2008,2010 Kenneth J. Pronovici. 
      12  # All rights reserved. 
      13  # 
      14  # Portions copyright (c) 2001, 2002 Python Software Foundation. 
      15  # All Rights Reserved. 
      16  # 
      17  # This program is free software; you can redistribute it and/or 
      18  # modify it under the terms of the GNU General Public License, 
      19  # Version 2, as published by the Free Software Foundation. 
      20  # 
      21  # This program is distributed in the hope that it will be useful, 
      22  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
      23  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
      24  # 
      25  # Copies of the GNU General Public License are available from 
      26  # the Free Software Foundation website, http://www.gnu.org/. 
      27  # 
      28  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      29  # 
      30  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
      31  # Language : Python 2 (>= 2.7) 
      32  # Project  : Cedar Backup, release 2 
      33  # Purpose  : Provides general-purpose utilities. 
      34  # 
      35  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      36   
      37  ######################################################################## 
      38  # Module documentation 
      39  ######################################################################## 
      40   
      41  """ 
      42  Provides general-purpose utilities. 
      43   
      44  @sort: AbsolutePathList, ObjectTypeList, RestrictedContentList, RegexMatchList, 
      45         RegexList, _Vertex, DirectedGraph, PathResolverSingleton, 
      46         sortDict, convertSize, getUidGid, changeOwnership, splitCommandLine, 
      47         resolveCommand, executeCommand, calculateFileAge, encodePath, nullDevice, 
      48         deriveDayOfWeek, isStartOfWeek, buildNormalizedPath, 
      49         ISO_SECTOR_SIZE, BYTES_PER_SECTOR, 
      50         BYTES_PER_KBYTE, BYTES_PER_MBYTE, BYTES_PER_GBYTE, KBYTES_PER_MBYTE, MBYTES_PER_GBYTE, 
      51         SECONDS_PER_MINUTE, MINUTES_PER_HOUR, HOURS_PER_DAY, SECONDS_PER_DAY, 
      52         UNIT_BYTES, UNIT_KBYTES, UNIT_MBYTES, UNIT_GBYTES, UNIT_SECTORS 
      53   
      54  @var ISO_SECTOR_SIZE: Size of an ISO image sector, in bytes. 
      55  @var BYTES_PER_SECTOR: Number of bytes (B) per ISO sector. 
      56  @var BYTES_PER_KBYTE: Number of bytes (B) per kilobyte (kB). 
      57  @var BYTES_PER_MBYTE: Number of bytes (B) per megabyte (MB). 
   58  @var BYTES_PER_GBYTE: Number of bytes (B) per gigabyte (GB). 
      59  @var KBYTES_PER_MBYTE: Number of kilobytes (kB) per megabyte (MB). 
      60  @var MBYTES_PER_GBYTE: Number of megabytes (MB) per gigabyte (GB). 
      61  @var SECONDS_PER_MINUTE: Number of seconds per minute. 
      62  @var MINUTES_PER_HOUR: Number of minutes per hour. 
      63  @var HOURS_PER_DAY: Number of hours per day. 
      64  @var SECONDS_PER_DAY: Number of seconds per day. 
      65  @var UNIT_BYTES: Constant representing the byte (B) unit for conversion. 
      66  @var UNIT_KBYTES: Constant representing the kilobyte (kB) unit for conversion. 
      67  @var UNIT_MBYTES: Constant representing the megabyte (MB) unit for conversion. 
      68  @var UNIT_GBYTES: Constant representing the gigabyte (GB) unit for conversion. 
      69  @var UNIT_SECTORS: Constant representing the ISO sector unit for conversion. 
      70   
      71  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
      72  """ 
      73   
      74   
      75  ######################################################################## 
      76  # Imported modules 
      77  ######################################################################## 
      78   
      79  import sys 
      80  import math 
      81  import os 
      82  import re 
      83  import time 
      84  import logging 
      85  import string  # pylint: disable=W0402 
      86  from subprocess import Popen, STDOUT, PIPE 
      87   
      88  try: 
      89     import pwd 
      90     import grp 
      91     _UID_GID_AVAILABLE = True 
      92  except ImportError: 
      93     _UID_GID_AVAILABLE = False 
      94   
      95  from CedarBackup2.release import VERSION, DATE 
      96   
      97   
      98  ######################################################################## 
      99  # Module-wide constants and variables 
     100  ######################################################################## 
     101   
     102  logger = logging.getLogger("CedarBackup2.log.util") 
     103  outputLogger = logging.getLogger("CedarBackup2.output") 
     104   
     105  ISO_SECTOR_SIZE    = 2048.0   # in bytes 
     106  BYTES_PER_SECTOR   = ISO_SECTOR_SIZE 
     107   
     108  BYTES_PER_KBYTE    = 1024.0 
     109  KBYTES_PER_MBYTE   = 1024.0 
     110  MBYTES_PER_GBYTE   = 1024.0 
     111  BYTES_PER_MBYTE    = BYTES_PER_KBYTE * KBYTES_PER_MBYTE 
     112  BYTES_PER_GBYTE    = BYTES_PER_MBYTE * MBYTES_PER_GBYTE 
     113   
     114  SECONDS_PER_MINUTE = 60.0 
     115  MINUTES_PER_HOUR   = 60.0 
     116  HOURS_PER_DAY      = 24.0 
     117  SECONDS_PER_DAY    = SECONDS_PER_MINUTE * MINUTES_PER_HOUR * HOURS_PER_DAY 
     118   
     119  UNIT_BYTES         = 0 
     120  UNIT_KBYTES        = 1 
     121  UNIT_MBYTES        = 2 
     122  UNIT_GBYTES        = 4 
     123  UNIT_SECTORS       = 3 
     124   
     125  MTAB_FILE          = "/etc/mtab" 
     126   
     127  MOUNT_COMMAND      = [ "mount", ] 
     128  UMOUNT_COMMAND     = [ "umount", ] 
     129   
     130  DEFAULT_LANGUAGE   = "C" 
     131  LANG_VAR           = "LANG" 
     132  LOCALE_VARS        = [ "LC_ADDRESS", "LC_ALL", "LC_COLLATE", 
     133                         "LC_CTYPE", "LC_IDENTIFICATION", 
     134                         "LC_MEASUREMENT", "LC_MESSAGES", 
     135                         "LC_MONETARY", "LC_NAME", "LC_NUMERIC", 
     136                         "LC_PAPER", "LC_TELEPHONE", "LC_TIME", ] 
    
    137 138 139 ######################################################################## 140 # UnorderedList class definition 141 ######################################################################## 142 143 -class UnorderedList(list):
    144 145 """ 146 Class representing an "unordered list". 147 148 An "unordered list" is a list in which only the contents matter, not the 149 order in which the contents appear in the list. 150 151 For instance, we might be keeping track of set of paths in a list, because 152 it's convenient to have them in that form. However, for comparison 153 purposes, we would only care that the lists contain exactly the same 154 contents, regardless of order. 155 156 I have come up with two reasonable ways of doing this, plus a couple more 157 that would work but would be a pain to implement. My first method is to 158 copy and sort each list, comparing the sorted versions. This will only work 159 if two lists with exactly the same members are guaranteed to sort in exactly 160 the same order. The second way would be to create two Sets and then compare 161 the sets. However, this would lose information about any duplicates in 162 either list. I've decided to go with option #1 for now. I'll modify this 163 code if I run into problems in the future. 164 165 We override the original C{__eq__}, C{__ne__}, C{__ge__}, C{__gt__}, 166 C{__le__} and C{__lt__} list methods to change the definition of the various 167 comparison operators. In all cases, the comparison is changed to return the 168 result of the original operation I{but instead comparing sorted lists}. 169 This is going to be quite a bit slower than a normal list, so you probably 170 only want to use it on small lists. 171 """ 172
   def __eq__(self, other):
      """
      Definition of C{==} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self == other}.
      """
      if other is None:
         return False
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__eq__(otherSorted)

   def __ne__(self, other):
      """
      Definition of C{!=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self != other}.
      """
      if other is None:
         return True
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__ne__(otherSorted)

   def __ge__(self, other):
      """
      Definition of C{>=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self >= other}.
      """
      if other is None:
         return True
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__ge__(otherSorted)

   def __gt__(self, other):
      """
      Definition of C{>} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self > other}.
      """
      if other is None:
         return True
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__gt__(otherSorted)

   def __le__(self, other):
      """
      Definition of C{<=} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self <= other}.
      """
      if other is None:
         return False
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__le__(otherSorted)

   def __lt__(self, other):
      """
      Definition of C{<} operator for this class.
      @param other: Other object to compare to.
      @return: True/false depending on whether C{self < other}.
      """
      if other is None:
         return False
      selfSorted = self[:]
      otherSorted = other[:]
      selfSorted.sort()
      otherSorted.sort()
      return selfSorted.__lt__(otherSorted)

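The sorted-copy comparison strategy described in the class docstring can be sketched independently.  This is a minimal, hypothetical re-implementation in modern Python for illustration only, not the class above (which also overrides the ordering operators):

```python
class SimpleUnorderedList(list):
    """Minimal sketch: a list whose equality ignores element order."""
    def __eq__(self, other):
        if other is None:
            return False
        # Compare sorted copies, so only the contents matter, not the order.
        return sorted(self) == sorted(other)
    def __ne__(self, other):
        return not self.__eq__(other)

# Two lists with the same members compare equal regardless of order,
# and duplicates are preserved (unlike a set-based comparison).
a = SimpleUnorderedList(["/etc", "/home", "/var"])
b = SimpleUnorderedList(["/var", "/etc", "/home"])
```

Note that, as the docstring warns, each comparison sorts both lists, so this only makes sense for small lists.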
########################################################################
# AbsolutePathList class definition
########################################################################

class AbsolutePathList(UnorderedList):

   """
   Class representing a list of absolute paths.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list is an absolute path.

   Each item added to the list is encoded using L{encodePath}.  If we don't
   do this, we have problems trying certain operations between strings and
   unicode objects, particularly for "odd" filenames that can't be encoded
   in standard ASCII.
   """
   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not an absolute path.
      """
      if not os.path.isabs(item):
         raise ValueError("Not an absolute path: [%s]" % item)
      list.append(self, encodePath(item))

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not an absolute path.
      """
      if not os.path.isabs(item):
         raise ValueError("Not an absolute path: [%s]" % item)
      list.insert(self, index, encodePath(item))

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not an absolute path.
      """
      for item in seq:
         if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
      for item in seq:
         list.append(self, encodePath(item))

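The validate-on-mutation pattern used by this class can be sketched in isolation.  A minimal, hypothetical example (without the C{encodePath} step, which belongs to the real class):

```python
import os.path

class SimpleAbsolutePathList(list):
    """Minimal sketch: a list that rejects relative paths on append."""
    def append(self, item):
        if not os.path.isabs(item):
            raise ValueError("Not an absolute path: [%s]" % item)
        list.append(self, item)

paths = SimpleAbsolutePathList()
paths.append("/usr/local/bin")     # accepted: absolute path
try:
    paths.append("relative/path")  # rejected: raises ValueError
except ValueError:
    rejected = True
```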
########################################################################
# ObjectTypeList class definition
########################################################################

class ObjectTypeList(UnorderedList):

   """
   Class representing a list containing only objects with a certain type.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list matches the type that is requested.  The
   comparison uses the built-in C{isinstance}, which should allow subclasses
   of the requested type to be added to the list as well.

   The C{objectName} value will be used in exceptions, i.e. C{"Item must be
   a CollectDir object."} if C{objectName} is C{"CollectDir"}.
   """
   def __init__(self, objectType, objectName):
      """
      Initializes a typed list for a particular type.
      @param objectType: Type that the list elements must match.
      @param objectName: Short string containing the "name" of the type.
      """
      super(ObjectTypeList, self).__init__()
      self.objectType = objectType
      self.objectName = objectName

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item does not match requested type.
      """
      if not isinstance(item, self.objectType):
         raise ValueError("Item must be a %s object." % self.objectName)
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item does not match requested type.
      """
      if not isinstance(item, self.objectType):
         raise ValueError("Item must be a %s object." % self.objectName)
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item does not match requested type.
      """
      for item in seq:
         if not isinstance(item, self.objectType):
            raise ValueError("All items must be %s objects." % self.objectName)
      list.extend(self, seq)

########################################################################
# RestrictedContentList class definition
########################################################################

class RestrictedContentList(UnorderedList):

   """
   Class representing a list containing only objects with certain values.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list is among the valid values.  We use a standard
   comparison, so pretty much anything can be in the list of valid values.

   The C{valuesDescr} value will be used in exceptions, i.e. C{"Item must be
   one of values in VALID_ACTIONS"} if C{valuesDescr} is C{"VALID_ACTIONS"}.

   @note: This class doesn't make any attempt to trap for nonsensical
   arguments.  All of the values in the values list should be of the same
   type (i.e. strings).  Then, all list operations also need to be of that
   type (i.e. you should always insert or append just strings).  If you mix
   types -- for instance lists and strings -- you will likely see
   AttributeError exceptions or other problems.
   """
   def __init__(self, valuesList, valuesDescr, prefix=None):
      """
      Initializes a list restricted to containing certain values.
      @param valuesList: List of valid values.
      @param valuesDescr: Short string describing list of values.
      @param prefix: Prefix to use in error messages (None results in prefix "Item")
      """
      super(RestrictedContentList, self).__init__()
      self.prefix = "Item"
      if prefix is not None:
         self.prefix = prefix
      self.valuesList = valuesList
      self.valuesDescr = valuesDescr

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not in the values list.
      """
      if item not in self.valuesList:
         raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not in the values list.
      """
      if item not in self.valuesList:
         raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not in the values list.
      """
      for item in seq:
         if item not in self.valuesList:
            raise ValueError("%s must be one of the values in %s." % (self.prefix, self.valuesDescr))
      list.extend(self, seq)

########################################################################
# RegexMatchList class definition
########################################################################

class RegexMatchList(UnorderedList):

   """
   Class representing a list containing only strings that match a regular expression.

   If C{emptyAllowed} is passed in as C{False}, then empty strings are
   explicitly disallowed, even if they happen to match the regular
   expression.  (C{None} values are always disallowed, since string
   operations are not permitted on C{None}.)

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list matches the indicated regular expression.

   @note: If you try to put values that are not strings into the list, you
   will likely get either TypeError or AttributeError exceptions as a
   result.
   """
   def __init__(self, valuesRegex, emptyAllowed=True, prefix=None):
      """
      Initializes a list restricted to containing certain values.
      @param valuesRegex: Regular expression that must be matched, as a string
      @param emptyAllowed: Indicates whether empty or None values are allowed.
      @param prefix: Prefix to use in error messages (None results in prefix "Item")
      """
      super(RegexMatchList, self).__init__()
      self.prefix = "Item"
      if prefix is not None:
         self.prefix = prefix
      self.valuesRegex = valuesRegex
      self.emptyAllowed = emptyAllowed
      self.pattern = re.compile(self.valuesRegex)

   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is None
      @raise ValueError: If item is empty and empty values are not allowed
      @raise ValueError: If item does not match the configured regular expression
      """
      if item is None or (not self.emptyAllowed and item == ""):
         raise ValueError("%s cannot be empty." % self.prefix)
      if not self.pattern.search(item):
         raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is None
      @raise ValueError: If item is empty and empty values are not allowed
      @raise ValueError: If item does not match the configured regular expression
      """
      if item is None or (not self.emptyAllowed and item == ""):
         raise ValueError("%s cannot be empty." % self.prefix)
      if not self.pattern.search(item):
         raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is None
      @raise ValueError: If any item is empty and empty values are not allowed
      @raise ValueError: If any item does not match the configured regular expression
      """
      for item in seq:
         if item is None or (not self.emptyAllowed and item == ""):
            raise ValueError("%s cannot be empty." % self.prefix)
         if not self.pattern.search(item):
            raise ValueError("%s is not valid: [%s]" % (self.prefix, item))
      list.extend(self, seq)

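A minimal, hypothetical sketch of the same idea in modern Python, showing how a compiled pattern can gate what enters the list (illustrative only; the real class also supports the C{emptyAllowed} and C{prefix} options on C{insert} and C{extend}):

```python
import re

class SimpleRegexMatchList(list):
    """Minimal sketch: a list that only accepts strings matching a pattern."""
    def __init__(self, valuesRegex):
        super().__init__()
        self.pattern = re.compile(valuesRegex)  # compile once, reuse per append
    def append(self, item):
        if item is None:
            raise ValueError("Item cannot be empty.")
        if not self.pattern.search(item):
            raise ValueError("Item is not valid: [%s]" % item)
        list.append(self, item)

days = SimpleRegexMatchList(r"^(monday|tuesday|wednesday|thursday|friday)$")
days.append("monday")        # matches the anchored pattern
try:
    days.append("someday")   # does not match: raises ValueError
except ValueError:
    rejected = True
```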
########################################################################
# RegexList class definition
########################################################################

class RegexList(UnorderedList):

   """
   Class representing a list of valid regular expression strings.

   This is an unordered list.

   We override the C{append}, C{insert} and C{extend} methods to ensure that
   any item added to the list is a valid regular expression.
   """
   def append(self, item):
      """
      Overrides the standard C{append} method.
      @raise ValueError: If item is not a valid regular expression.
      """
      try:
         re.compile(item)
      except re.error:
         raise ValueError("Not a valid regular expression: [%s]" % item)
      list.append(self, item)

   def insert(self, index, item):
      """
      Overrides the standard C{insert} method.
      @raise ValueError: If item is not a valid regular expression.
      """
      try:
         re.compile(item)
      except re.error:
         raise ValueError("Not a valid regular expression: [%s]" % item)
      list.insert(self, index, item)

   def extend(self, seq):
      """
      Overrides the standard C{extend} method.
      @raise ValueError: If any item is not a valid regular expression.
      """
      for item in seq:
         try:
            re.compile(item)
         except re.error:
            raise ValueError("Not a valid regular expression: [%s]" % item)
      for item in seq:
         list.append(self, item)

########################################################################
# Directed graph implementation
########################################################################

class _Vertex(object):

   """
   Represents a vertex (or node) in a directed graph.
   """

   def __init__(self, name):
      """
      Constructor.
      @param name: Name of this graph vertex.
      @type name: String value.
      """
      self.name = name
      self.endpoints = []
      self.state = None

class DirectedGraph(object):

   """
   Represents a directed graph.

   A graph B{G=(V,E)} consists of a set of vertices B{V} together with a set
   B{E} of vertex pairs or edges.  In a directed graph, each edge also has
   an associated direction (from vertex B{v1} to vertex B{v2}).  A
   C{DirectedGraph} object provides a way to construct a directed graph and
   execute a depth-first search.

   This data structure was designed based on the graphing chapter in
   U{The Algorithm Design Manual<http://www2.toki.or.id/book/AlgDesignManual/>},
   by Steven S. Skiena.

   This class is intended to be used by Cedar Backup for dependency
   ordering.  Because of this, it's not quite general-purpose.  Unlike a
   "general" graph, every vertex in this graph has at least one edge
   pointing to it, from a special "start" vertex.  This is so no vertices
   get "lost" either because they have no dependencies or because nothing
   depends on them.
   """

   _UNDISCOVERED = 0
   _DISCOVERED = 1
   _EXPLORED = 2
   def __init__(self, name):
      """
      Directed graph constructor.
      @param name: Name of this graph.
      @type name: String value.
      @raise ValueError: If the graph name is C{None} or empty.
      """
      if name is None or name == "":
         raise ValueError("Graph name must be non-empty.")
      self._name = name
      self._vertices = {}
      self._startVertex = _Vertex(None)  # start vertex is the only vertex with no name

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "DirectedGraph(%s)" % self.name

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def __cmp__(self, other):
      """
      Definition of the comparison operator for this class.
      @param other: Other object to compare to.
      @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other.
      """
      # pylint: disable=W0212
      if other is None:
         return 1
      if self.name != other.name:
         if self.name < other.name:
            return -1
         else:
            return 1
      if self._vertices != other._vertices:
         if self._vertices < other._vertices:
            return -1
         else:
            return 1
      return 0

   def _getName(self):
      """
      Property target used to get the graph name.
      """
      return self._name

   name = property(_getName, None, None, "Name of the graph.")
   def createVertex(self, name):
      """
      Creates a named vertex.
      @param name: vertex name
      @raise ValueError: If the vertex name is C{None} or empty.
      """
      if name is None or name == "":
         raise ValueError("Vertex name must be non-empty.")
      vertex = _Vertex(name)
      self._startVertex.endpoints.append(vertex)  # so every vertex is connected at least once
      self._vertices[name] = vertex

   def createEdge(self, start, finish):
      """
      Adds an edge with an associated direction, from C{start} vertex to C{finish} vertex.
      @param start: Name of start vertex.
      @param finish: Name of finish vertex.
      @raise ValueError: If one of the named vertices is unknown.
      """
      try:
         startVertex = self._vertices[start]
         finishVertex = self._vertices[finish]
         startVertex.endpoints.append(finishVertex)
      except KeyError, e:
         raise ValueError("Vertex [%s] could not be found." % e)
   def topologicalSort(self):
      """
      Implements a topological sort of the graph.

      This method also enforces that the graph is a directed acyclic graph,
      which is a requirement of a topological sort.

      A directed acyclic graph (or "DAG") is a directed graph with no
      directed cycles.  A topological sort of a DAG is an ordering on the
      vertices such that all edges go from left to right.  Only an acyclic
      graph can have a topological sort, but any DAG has at least one
      topological sort.

      Since a topological sort only makes sense for an acyclic graph, this
      method throws an exception if a cycle is found: if the graph contains
      any cycles, it is not possible to determine a consistent ordering for
      the vertices.

      @note: If a particular vertex has no edges, then its position in the
      final list depends on the order in which the vertices were created in
      the graph.  If you're using this method to determine a dependency
      order, this makes sense: a vertex with no dependencies can go anywhere
      (and will).

      @return: Ordering on the vertices so that all edges go from left to right.

      @raise ValueError: If a cycle is found in the graph.
      """
      ordering = []
      for key in self._vertices:
         vertex = self._vertices[key]
         vertex.state = self._UNDISCOVERED
      for key in self._vertices:
         vertex = self._vertices[key]
         if vertex.state == self._UNDISCOVERED:
            self._topologicalSort(self._startVertex, ordering)
      return ordering

   def _topologicalSort(self, vertex, ordering):
      """
      Recursive depth-first search function implementing topological sort.
      @param vertex: Vertex to search
      @param ordering: List of vertices in proper order
      """
      vertex.state = self._DISCOVERED
      for endpoint in vertex.endpoints:
         if endpoint.state == self._UNDISCOVERED:
            self._topologicalSort(endpoint, ordering)
         elif endpoint.state != self._EXPLORED:
            raise ValueError("Cycle found in graph (found '%s' while searching '%s')." % (vertex.name, endpoint.name))
      if vertex.name is not None:
         ordering.insert(0, vertex.name)
      vertex.state = self._EXPLORED

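The DFS-based sort with three-state cycle detection can be sketched as a standalone function.  A minimal, hypothetical illustration of the same technique (dictionary of edges rather than vertex objects; not the class above):

```python
UNDISCOVERED, DISCOVERED, EXPLORED = 0, 1, 2

def topological_sort(edges, vertices):
    """Minimal DFS topological sort with cycle detection (sketch)."""
    state = {v: UNDISCOVERED for v in vertices}
    ordering = []
    def visit(v):
        state[v] = DISCOVERED
        for w in edges.get(v, []):
            if state[w] == UNDISCOVERED:
                visit(w)
            elif state[w] != EXPLORED:
                # A DISCOVERED-but-not-EXPLORED endpoint means a back edge.
                raise ValueError("Cycle found in graph.")
        ordering.insert(0, v)  # prepend once all descendants are explored
        state[v] = EXPLORED
    for v in vertices:
        if state[v] == UNDISCOVERED:
            visit(v)
    return ordering

# "collect" must come before "stage", which must come before "store".
order = topological_sort({"collect": ["stage"], "stage": ["store"]},
                         ["collect", "stage", "store"])
```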
########################################################################
# PathResolverSingleton class definition
########################################################################

class PathResolverSingleton(object):

   """
   Singleton used for resolving executable paths.

   Various functions throughout Cedar Backup (including extensions) need a
   way to resolve the path of executables that they use.  For instance, the
   image functionality needs to find the C{mkisofs} executable, and the
   Subversion extension needs to find the C{svnlook} executable.  Cedar
   Backup's original behavior was to assume that the simple name
   (C{"svnlook"} or whatever) was available on the caller's C{$PATH}, and to
   fail otherwise.  However, this turns out to be less than ideal, since for
   instance the root user might not always have executables like C{svnlook}
   in its path.

   One solution is to specify a path (either via an absolute path or some
   sort of path insertion or path appending mechanism) that would apply to
   the C{executeCommand()} function.  This is not difficult to implement,
   but it seems like kind of a "big hammer" solution.  Besides that, it
   might also represent a security flaw (for instance, I prefer not to mess
   with root's C{$PATH} on the application level if I don't have to).

   The alternative is to set up some sort of configuration for the path to
   certain executables, i.e. "find C{svnlook} in C{/usr/local/bin/svnlook}"
   or whatever.  This PathResolverSingleton aims to provide a good solution
   to the mapping problem.  Callers of all sorts (extensions or not) can get
   an instance of the singleton.  Then, they call the C{lookup} method to
   try and resolve the executable they are looking for.  Through the
   C{lookup} method, the caller can also specify a default to use if a
   mapping is not found.  This way, with no real effort on the part of the
   caller, behavior can neatly degrade to something equivalent to the
   current behavior if there is no special mapping or if the singleton was
   never initialized in the first place.

   Even better, extensions automagically get access to the same resolver
   functionality, and they don't even need to understand how the mapping
   happens.  All extension authors need to do is document what executables
   their code requires, and the standard resolver configuration section will
   meet their needs.

   The class should be initialized once through the constructor somewhere in
   the main routine.  Then, the main routine should call the L{fill} method
   to fill in the resolver's internal structures.  Everyone else who needs
   to resolve a path will get an instance of the class using L{getInstance}
   and will then just call the L{lookup} method.

   @cvar _instance: Holds a reference to the singleton
   @ivar _mapping: Internal mapping from resource name to path.
   """

   _instance = None  # Holds a reference to singleton instance
   class _Helper(object):
      """Helper class to provide a singleton factory method."""
      def __init__(self):
         pass
      def __call__(self, *args, **kw):
         # pylint: disable=W0212,R0201
         if PathResolverSingleton._instance is None:
            obj = PathResolverSingleton()
            PathResolverSingleton._instance = obj
         return PathResolverSingleton._instance

   getInstance = _Helper()  # Method that callers will use to get an instance

   def __init__(self):
      """Singleton constructor, which just creates the singleton instance."""
      if PathResolverSingleton._instance is not None:
         raise RuntimeError("Only one instance of PathResolverSingleton is allowed!")
      PathResolverSingleton._instance = self
      self._mapping = { }

   def lookup(self, name, default=None):
      """
      Looks up name and returns the resolved path associated with the name.
      @param name: Name of the path resource to resolve.
      @param default: Default to return if resource cannot be resolved.
      @return: Resolved path associated with name, or default if name can't be resolved.
      """
      value = default
      if name in self._mapping:
         value = self._mapping[name]
      logger.debug("Resolved command [%s] to [%s].", name, value)
      return value

   def fill(self, mapping):
      """
      Fills in the singleton's internal mapping from name to resource.
      @param mapping: Mapping from resource name to path.
      @type mapping: Dictionary mapping name to path, both as strings.
      """
      self._mapping = { }
      for key in mapping.keys():
         self._mapping[key] = mapping[key]

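The fill-then-lookup flow described in the docstring, with graceful fallback to a default, can be sketched compactly.  This is a hypothetical modern-Python reduction of the same singleton idea, not the class above (names like C{PathResolver} are invented for illustration):

```python
class PathResolver:
    """Minimal sketch of a path-resolving singleton (illustrative only)."""
    _instance = None

    @classmethod
    def getInstance(cls):
        # Lazily create the single shared instance on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._mapping = {}

    def fill(self, mapping):
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        return self._mapping.get(name, default)

# Main routine fills the mapping once; everyone else just looks things up.
resolver = PathResolver.getInstance()
resolver.fill({"svnlook": "/usr/local/bin/svnlook"})
found = resolver.lookup("svnlook", "svnlook")
missing = resolver.lookup("mkisofs", "mkisofs")  # falls back to the default
```

Because unresolved names fall back to the bare command name, behavior degrades to "assume it's on C{$PATH}" when no mapping is configured.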
########################################################################
# Pipe class definition
########################################################################

class Pipe(Popen):

   """
   Specialized pipe class for use by C{executeCommand}.

   The L{executeCommand} function needs a specialized way of interacting
   with a pipe.  First, C{executeCommand} only reads from the pipe, and
   never writes to it.  Second, C{executeCommand} needs a way to discard all
   output written to C{stderr}, as a means of simulating the shell
   C{2>/dev/null} construct.
   """

   def __init__(self, cmd, bufsize=-1, ignoreStderr=False):
      """
      Constructor; arguments are as for L{Popen}, except that passing
      C{ignoreStderr=True} redirects C{stderr} to the null device.
      """
      stderr = STDOUT
      if ignoreStderr:
         devnull = nullDevice()
         stderr = os.open(devnull, os.O_RDWR)
      Popen.__init__(self, shell=False, args=cmd, bufsize=bufsize, stdin=None, stdout=PIPE, stderr=stderr)

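The "read stdout, discard stderr" behavior that C{Pipe} simulates can be expressed directly with the standard library.  A minimal sketch in modern Python, shown only to illustrate the effect of the shell's C{2>/dev/null} construct (not how C{executeCommand} itself drives the pipe):

```python
import subprocess
import sys

# Run a command, capture its stdout, and discard anything written to
# stderr -- the same effect as the shell's "2>/dev/null".
proc = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.DEVNULL)
output, _ = proc.communicate()
```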
########################################################################
# Diagnostics class definition
########################################################################

class Diagnostics(object):

   """
   Class holding runtime diagnostic information.

   Diagnostic information is information that is useful to get from users
   for debugging purposes.  I'm consolidating it all here into one object.

   @sort: __init__, __repr__, __str__
   """
   # pylint: disable=R0201

   def __init__(self):
      """
      Constructor for the C{Diagnostics} class.
      """

   def __repr__(self):
      """
      Official string representation for class instance.
      """
      return "Diagnostics()"

   def __str__(self):
      """
      Informal string representation for class instance.
      """
      return self.__repr__()

   def getValues(self):
      """
      Get a map containing all of the diagnostic values.
      @return: Map from diagnostic name to diagnostic value.
      """
      values = {}
      values['version'] = self.version
      values['interpreter'] = self.interpreter
      values['platform'] = self.platform
      values['encoding'] = self.encoding
      values['locale'] = self.locale
      values['timestamp'] = self.timestamp
      return values

   def printDiagnostics(self, fd=sys.stdout, prefix=""):
      """
      Pretty-print diagnostic information to a file descriptor.
      @param fd: File descriptor used to print information.
      @param prefix: Prefix string (if any) to place onto printed lines
      @note: The C{fd} is used rather than C{print} to facilitate unit testing.
      """
      lines = self._buildDiagnosticLines(prefix)
      for line in lines:
         fd.write("%s\n" % line)

   def logDiagnostics(self, method, prefix=""):
      """
      Pretty-print diagnostic information using a logger method.
      @param method: Logger method to use for logging (i.e. logger.info)
      @param prefix: Prefix string (if any) to place onto printed lines
      """
      lines = self._buildDiagnosticLines(prefix)
      for line in lines:
         method("%s" % line)

   def _buildDiagnosticLines(self, prefix=""):
      """
      Build a set of pretty-printed diagnostic lines.
      @param prefix: Prefix string (if any) to place onto printed lines
      @return: List of strings, not terminated by newlines.
      """
      values = self.getValues()
      keys = values.keys()
      keys.sort()
      tmax = Diagnostics._getMaxLength(keys) + 3  # three extra dots in output
      lines = []
      for key in keys:
         title = key.title()
         title += (tmax - len(title)) * '.'
         value = values[key]
         line = "%s%s: %s" % (prefix, title, value)
         lines.append(line)
      return lines

   @staticmethod
   def _getMaxLength(values):
      """
      Get the maximum length from among a list of strings.
      """
      tmax = 0
      for value in values:
         if len(value) > tmax:
            tmax = len(value)
      return tmax
   def _getVersion(self):
      """
      Property target to get the Cedar Backup version.
      """
      return "Cedar Backup %s (%s)" % (VERSION, DATE)

   def _getInterpreter(self):
      """
      Property target to get the Python interpreter version.
      """
      version = sys.version_info
      return "Python %d.%d.%d (%s)" % (version[0], version[1], version[2], version[3])

   def _getEncoding(self):
      """
      Property target to get the filesystem encoding.
      """
      return sys.getfilesystemencoding() or sys.getdefaultencoding()

   def _getPlatform(self):
      """
      Property target to get the operating system platform.
      """
      try:
         if sys.platform.startswith("win"):
            windowsPlatforms = [ "Windows 3.1", "Windows 95/98/ME", "Windows NT/2000/XP", "Windows CE", ]
            wininfo = sys.getwindowsversion()  # pylint: disable=E1101
            winversion = "%d.%d.%d" % (wininfo[0], wininfo[1], wininfo[2])
            winplatform = windowsPlatforms[wininfo[3]]
            wintext = wininfo[4]  # i.e. "Service Pack 2"
            return "%s (%s %s %s)" % (sys.platform, winplatform, winversion, wintext)
         else:
            uname = os.uname()
            sysname = uname[0]  # i.e. Linux
            release = uname[2]  # i.e. 2.16.18-2
            machine = uname[4]  # i.e. i686
            return "%s (%s %s %s)" % (sys.platform, sysname, release, machine)
      except:
         return sys.platform

   def _getLocale(self):
      """
      Property target to get the default locale that is in effect.
      """
      try:
         import locale
         return locale.getdefaultlocale()[0]
      except:
         return "(unknown)"

   def _getTimestamp(self):
      """
      Property target to get a current date/time stamp.
      """
      try:
         import datetime
         return datetime.datetime.utcnow().ctime() + " UTC"
      except:
         return "(unknown)"

   version = property(_getVersion, None, None, "Cedar Backup version.")
   interpreter = property(_getInterpreter, None, None, "Python interpreter version.")
   platform = property(_getPlatform, None, None, "Platform identifying information.")
   encoding = property(_getEncoding, None, None, "Filesystem encoding that is in effect.")
   locale = property(_getLocale, None, None, "Locale that is in effect.")
   timestamp = property(_getTimestamp, None, None, "Current timestamp.")

########################################################################
# General utility functions
########################################################################

######################
# sortDict() function
######################

def sortDict(d):
   """
   Returns the keys of the dictionary sorted by value.

   There are cuter ways to do this in Python 2.4, but we were originally
   attempting to stay compatible with Python 2.3.

   @param d: Dictionary to operate on
   @return: List of dictionary keys sorted in order by dictionary value.
   """
   items = d.items()
   items.sort(lambda x, y: cmp(x[1], y[1]))
   return [key for key, value in items]

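For reference, the same "keys ordered by value" result can be had in one line in modern Python using a sort key instead of the (long-removed) C{cmp} comparator.  A small sketch, not part of the module:

```python
def sort_dict_by_value(d):
    """Return the dictionary's keys, ordered by their associated values."""
    return sorted(d, key=d.get)

# Keys come back in ascending order of their values: b (1), c (2), a (3).
keys = sort_dict_by_value({"a": 3, "b": 1, "c": 2})
```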
    1054 1055 ######################## 1056 # removeKeys() function 1057 ######################## 1058 1059 -def removeKeys(d, keys):
    1060 """ 1061 Removes all of the keys from the dictionary. 1062 The dictionary is altered in-place. 1063 Each key must exist in the dictionary. 1064 @param d: Dictionary to operate on 1065 @param keys: List of keys to remove 1066 @raise KeyError: If one of the keys does not exist 1067 """ 1068 for key in keys: 1069 del d[key]


#########################
# convertSize() function
#########################

def convertSize(size, fromUnit, toUnit):
   """
   Converts a size in one unit to a size in another unit.

   This is just a convenience function so that the functionality can be
   implemented in just one place.  Internally, we convert values to bytes
   and then to the final unit.

   The available units are:

      - C{UNIT_BYTES} - Bytes
      - C{UNIT_KBYTES} - Kilobytes, where 1 kB = 1024 B
      - C{UNIT_MBYTES} - Megabytes, where 1 MB = 1024 kB
      - C{UNIT_GBYTES} - Gigabytes, where 1 GB = 1024 MB
      - C{UNIT_SECTORS} - Sectors, where 1 sector = 2048 B

   @param size: Size to convert
   @type size: Integer or float value in units of C{fromUnit}

   @param fromUnit: Unit to convert from
   @type fromUnit: One of the units listed above

   @param toUnit: Unit to convert to
   @type toUnit: One of the units listed above

   @return: Number converted to new unit, as a float.
   @raise ValueError: If one of the units is invalid.
   """
   if size is None:
      raise ValueError("Cannot convert size of None.")
   if fromUnit == UNIT_BYTES:
      byteSize = float(size)
   elif fromUnit == UNIT_KBYTES:
      byteSize = float(size) * BYTES_PER_KBYTE
   elif fromUnit == UNIT_MBYTES:
      byteSize = float(size) * BYTES_PER_MBYTE
   elif fromUnit == UNIT_GBYTES:
      byteSize = float(size) * BYTES_PER_GBYTE
   elif fromUnit == UNIT_SECTORS:
      byteSize = float(size) * BYTES_PER_SECTOR
   else:
      raise ValueError("Unknown 'from' unit %s." % fromUnit)
   if toUnit == UNIT_BYTES:
      return byteSize
   elif toUnit == UNIT_KBYTES:
      return byteSize / BYTES_PER_KBYTE
   elif toUnit == UNIT_MBYTES:
      return byteSize / BYTES_PER_MBYTE
   elif toUnit == UNIT_GBYTES:
      return byteSize / BYTES_PER_GBYTE
   elif toUnit == UNIT_SECTORS:
      return byteSize / BYTES_PER_SECTOR
   else:
      raise ValueError("Unknown 'to' unit %s." % toUnit)
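The conversion arithmetic above (normalize to bytes, then divide by the target unit) can be condensed into a table-driven sketch. The `convert_size()` helper and its string unit names are hypothetical stand-ins for the module's `UNIT_*` constants, using the same factors (1 kB = 1024 B, 1 sector = 2048 B).

```python
# Standalone sketch of the convertSize() arithmetic: normalize to bytes,
# then divide by the target unit.  String names stand in for the module's
# UNIT_* constants here.
UNITS = {"B": 1.0, "kB": 1024.0, "MB": 1024.0 ** 2, "GB": 1024.0 ** 3, "sector": 2048.0}

def convert_size(size, from_unit, to_unit):
    if size is None:
        raise ValueError("Cannot convert size of None.")
    byte_size = float(size) * UNITS[from_unit]   # convert to bytes first
    return byte_size / UNITS[to_unit]            # then to the target unit

print(convert_size(1, "MB", "sector"))  # 512.0
print(convert_size(2.5, "GB", "MB"))    # 2560.0
```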


##########################
# displayBytes() function
##########################

def displayBytes(bytes, digits=2):  # pylint: disable=W0622
   """
   Format a byte quantity so it can be sensibly displayed.

   It's rather difficult to look at a number like "72372224 bytes" and get
   any meaningful information out of it.  It would be more useful to see
   something like "69.02 MB".  That's what this function does.  Any time you
   want to display a byte value, i.e.::

      print "Size: %s bytes" % bytes

   Call this function instead::

      print "Size: %s" % displayBytes(bytes)

   What comes out will be sensibly formatted.  The indicated number of digits
   will be listed after the decimal point, rounded based on whatever rules
   are used by Python's standard C{%f} string format specifier.  (Values less
   than 1 kB will be listed in bytes and will not have a decimal point, since
   the concept of a fractional byte is nonsensical.)

   @param bytes: Byte quantity.
   @type bytes: Integer number of bytes.

   @param digits: Number of digits to display after the decimal point.
   @type digits: Integer value, typically 2-5.

   @return: String, formatted for sensible display.
   """
   if bytes is None:
      raise ValueError("Cannot display byte value of None.")
   bytes = float(bytes)
   if math.fabs(bytes) < BYTES_PER_KBYTE:
      fmt = "%.0f bytes"
      value = bytes
   elif math.fabs(bytes) < BYTES_PER_MBYTE:
      fmt = "%." + "%d" % digits + "f kB"
      value = bytes / BYTES_PER_KBYTE
   elif math.fabs(bytes) < BYTES_PER_GBYTE:
      fmt = "%." + "%d" % digits + "f MB"
      value = bytes / BYTES_PER_MBYTE
   else:
      fmt = "%." + "%d" % digits + "f GB"
      value = bytes / BYTES_PER_GBYTE
   return fmt % value
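The thresholding above (pick the largest unit whose boundary the magnitude stays under, then format with the requested digits) can be sketched as a standalone loop. The `display_bytes()` helper is a hypothetical re-statement, not the module's implementation, but it reproduces the "69.02 MB" example from the docstring.

```python
import math

# Sketch of the displayBytes() thresholding: values under 1 kB print as
# whole bytes; otherwise divide by the largest unit that keeps the
# magnitude under the next boundary and format with the requested digits.
def display_bytes(n, digits=2):
    n = float(n)
    if math.fabs(n) < 1024.0:
        return "%.0f bytes" % n  # no fractional bytes
    for divisor, suffix in ((1024.0, "kB"), (1024.0 ** 2, "MB"), (1024.0 ** 3, "GB")):
        if math.fabs(n) < divisor * 1024.0 or suffix == "GB":
            return ("%%.%df %%s" % digits) % (n / divisor, suffix)

print(display_bytes(72372224))  # 69.02 MB
print(display_bytes(100))       # 100 bytes
```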


##################################
# getFunctionReference() function
##################################

def getFunctionReference(module, function):
   """
   Gets a reference to a named function.

   This does some hokey-pokey to get back a reference to a dynamically named
   function.  For instance, say you wanted to get a reference to the
   C{os.path.isdir} function.  You could use::

      myfunc = getFunctionReference("os.path", "isdir")

   Although we won't bomb out directly, behavior is pretty much undefined if
   you pass in C{None} or C{""} for either C{module} or C{function}.

   The only validation we enforce is that whatever we get back must be
   callable.

   I derived this code based on the internals of the Python unittest
   implementation.  I don't claim to completely understand how it works.

   @param module: Name of module associated with function.
   @type module: Something like "os.path" or "CedarBackup2.util"

   @param function: Name of function
   @type function: Something like "isdir" or "getUidGid"

   @return: Reference to function associated with name.

   @raise ImportError: If the function cannot be found.
   @raise ValueError: If the resulting reference is not callable.

   @copyright: Some of this code, prior to customization, was originally part
   of the Python 2.3 codebase.  Python code is copyright (c) 2001, 2002
   Python Software Foundation; All Rights Reserved.
   """
   parts = []
   if module is not None and module != "":
      parts = module.split(".")
   if function is not None and function != "":
      parts.append(function)
   copy = parts[:]
   while copy:
      try:
         module = __import__(string.join(copy, "."))
         break
      except ImportError:
         del copy[-1]
         if not copy: raise
   parts = parts[1:]
   obj = module
   for part in parts:
      obj = getattr(obj, part)
   if not callable(obj):
      raise ValueError("Reference to %s.%s is not callable." % (module, function))
   return obj
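For comparison, the same lookup can be sketched with `importlib` (available since Python 2.7, so it postdates the Python 2.3 constraint this code was written under). The `get_function_reference()` helper below is a hypothetical simplification, not the module's implementation: `importlib.import_module()` resolves the dotted module name directly, replacing the manual `__import__` walk above.

```python
import importlib
import os.path

# Hypothetical simpler sketch of getFunctionReference() using importlib:
# import_module() resolves the full dotted name, so only one getattr is
# needed, plus the same callability check.
def get_function_reference(module, function):
    obj = getattr(importlib.import_module(module), function)
    if not callable(obj):
        raise ValueError("Reference to %s.%s is not callable." % (module, function))
    return obj

print(get_function_reference("os.path", "isdir") is os.path.isdir)  # True
```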


#######################
# getUidGid() function
#######################

def getUidGid(user, group):
   """
   Get the uid/gid associated with a user/group pair

   This is a no-op if user/group functionality is not available on the platform.

   @param user: User name
   @type user: User name as a string

   @param group: Group name
   @type group: Group name as a string

   @return: Tuple C{(uid, gid)} matching passed-in user and group.
   @raise ValueError: If the ownership user/group values are invalid
   """
   if _UID_GID_AVAILABLE:
      try:
         uid = pwd.getpwnam(user)[2]
         gid = grp.getgrnam(group)[2]
         return (uid, gid)
      except Exception, e:
         logger.debug("Error looking up uid and gid for [%s:%s]: %s", user, group, e)
         raise ValueError("Unable to look up uid and gid for passed-in user/group.")
   else:
      return (0, 0)


#############################
# changeOwnership() function
#############################

def changeOwnership(path, user, group):
   """
   Changes ownership of path to match the user and group.

   This is a no-op if user/group functionality is not available on the
   platform, or if either the passed-in user or group is C{None}.  Further,
   we won't even try to do it unless running as root, since it's unlikely to
   work.

   @param path: Path whose ownership to change.
   @param user: User which owns file.
   @param group: Group which owns file.
   """
   if _UID_GID_AVAILABLE:
      if user is None or group is None:
         logger.debug("User or group is None, so not attempting to change owner on [%s].", path)
      elif not isRunningAsRoot():
         logger.debug("Not root, so not attempting to change owner on [%s].", path)
      else:
         try:
            (uid, gid) = getUidGid(user, group)
            os.chown(path, uid, gid)
         except Exception, e:
            logger.error("Error changing ownership of [%s]: %s", path, e)


#############################
# isRunningAsRoot() function
#############################

def isRunningAsRoot():
   """
   Indicates whether the program is running as the root user.
   """
   return os.getuid() == 0


##############################
# splitCommandLine() function
##############################

def splitCommandLine(commandLine):
   """
   Splits a command line string into a list of arguments.

   Unfortunately, there is no "standard" way to parse a command line string,
   and it's actually not an easy problem to solve portably (essentially, we
   have to emulate the shell argument-processing logic).  This code only
   respects double quotes (C{"}) for grouping arguments, not single quotes
   (C{'}).  Make sure you take this into account when building your command
   line.

   Incidentally, I found this particular parsing method while digging around
   in Google Groups, and I tweaked it for my own use.

   @param commandLine: Command line string
   @type commandLine: String, i.e. "cback --verbose stage store"

   @return: List of arguments, suitable for passing to C{popen2}.

   @raise ValueError: If the command line is None.
   """
   if commandLine is None:
      raise ValueError("Cannot split command line of None.")
   fields = re.findall('[^ "]+|"[^"]+"', commandLine)
   fields = [field.replace('"', '') for field in fields]
   return fields
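The double-quote-aware split described above can be demonstrated on its own: a field is either a run of non-space, non-quote characters or a quoted group, with the quotes stripped afterwards. The `split_command_line()` helper is a hypothetical standalone copy of that regex-based logic.

```python
import re

# The same double-quote-aware split as splitCommandLine() above: a field is
# either a run of non-space, non-quote characters or a quoted group, with
# the quotes stripped afterwards.  Single quotes are NOT respected.
def split_command_line(command_line):
    fields = re.findall(r'[^ "]+|"[^"]+"', command_line)
    return [field.replace('"', '') for field in fields]

print(split_command_line('cback --verbose "stage store"'))
# ['cback', '--verbose', 'stage store']
```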


############################
# resolveCommand() function
############################

def resolveCommand(command):
   """
   Resolves the real path to a command through the path resolver mechanism.

   Both extensions and standard Cedar Backup functionality need a way to
   resolve the "real" location of various executables.  Normally, they
   assume that these executables are on the system path, but some callers
   need to specify an alternate location.

   Ideally, we want to handle this configuration in a central location.  The
   Cedar Backup path resolver mechanism (a singleton called
   L{PathResolverSingleton}) provides the central location to store the
   mappings.  This function wraps access to the singleton, and is what all
   functions (extensions or standard functionality) should call if they need
   to find a command.

   The passed-in command must actually be a list, in the standard form used
   by all existing Cedar Backup code (something like C{["svnlook", ]}).  The
   lookup will actually be done on the first element in the list, and the
   returned command will always be in list form as well.

   If the passed-in command can't be resolved or no mapping exists, then the
   command itself will be returned unchanged.  This way, we neatly fall back
   on default behavior if we have no sensible alternative.

   @param command: Command to resolve.
   @type command: List form of command, i.e. C{["svnlook", ]}.

   @return: Path to command or just command itself if no mapping exists.
   """
   singleton = PathResolverSingleton.getInstance()
   name = command[0]
   result = command[:]
   result[0] = singleton.lookup(name, name)
   return result


############################
# executeCommand() function
############################

def executeCommand(command, args, returnOutput=False, ignoreStderr=False, doNotLog=False, outputFile=None):
   """
   Executes a shell command, hopefully in a safe way.

   This function exists to replace direct calls to C{os.popen} in the Cedar
   Backup code.  It's not safe to call a function such as C{os.popen()} with
   untrusted arguments, since that can cause problems if the string contains
   non-safe variables or other constructs (imagine that the argument is
   C{$WHATEVER}, but C{$WHATEVER} contains something like C{"; rm -fR ~/;
   echo"} in the current environment).

   Instead, it's safer to pass a list of arguments in the style supported by
   C{popen2} or C{popen4}.  This function actually uses a specialized C{Pipe}
   class implemented using either C{subprocess.Popen} or C{popen2.Popen4}.

   Under the normal case, this function will return a tuple of C{(status,
   None)} where the status is the wait-encoded return status of the call per
   the C{popen2.Popen4} documentation.  If C{returnOutput} is passed in as
   C{True}, the function will return a tuple of C{(status, output)} where
   C{output} is a list of strings, one entry per line in the output from the
   command.  Output is always logged to the C{outputLogger.info()} target,
   regardless of whether it's returned.

   By default, C{stdout} and C{stderr} will be intermingled in the output.
   However, if you pass in C{ignoreStderr=True}, then only C{stdout} will be
   included in the output.

   The C{doNotLog} parameter exists so that callers can force the function
   to not log command output to the debug log.  Normally, you would want to
   log.  However, if you're using this function to write huge output files
   (i.e. database backups written to C{stdout}) then you might want to avoid
   putting all that information into the debug log.

   The C{outputFile} parameter exists to make it easier for a caller to push
   output into a file, i.e. as a substitute for redirection to a file.  If
   this value is passed in, each time a line of output is generated, it will
   be written to the file using C{outputFile.write()}.  At the end, the file
   descriptor will be flushed using C{outputFile.flush()}.  The caller
   maintains responsibility for closing the file object appropriately.

   @note: I know that it's a bit confusing that the command and the arguments
   are both lists.  I could have just required the caller to pass in one big
   list.  However, I think it makes some sense to keep the command (the
   constant part of what we're executing, i.e. C{"scp -B"}) separate from
   its arguments, even if they both end up looking kind of similar.

   @note: You cannot redirect output via shell constructs (i.e. C{>file},
   C{2>/dev/null}, etc.) using this function.  The redirection string would
   be passed to the command just like any other argument.  However, you can
   implement the equivalent to redirection using C{ignoreStderr} and
   C{outputFile}, as discussed above.

   @note: The operating system environment is partially sanitized before
   the command is invoked.  See L{sanitizeEnvironment} for details.

   @param command: Shell command to execute
   @type command: List of individual arguments that make up the command

   @param args: List of arguments to the command
   @type args: List of additional arguments to the command

   @param returnOutput: Indicates whether to return the output of the command
   @type returnOutput: Boolean C{True} or C{False}

   @param ignoreStderr: Whether stderr should be discarded
   @type ignoreStderr: Boolean True or False

   @param doNotLog: Indicates that output should not be logged.
   @type doNotLog: Boolean C{True} or C{False}

   @param outputFile: File object that all output should be written to.
   @type outputFile: File object as returned from C{open()} or C{file()}.

   @return: Tuple of C{(result, output)} as described above.
   """
   logger.debug("Executing command %s with args %s.", command, args)
   outputLogger.info("Executing command %s with args %s.", command, args)
   if doNotLog:
      logger.debug("Note: output will not be logged, per the doNotLog flag.")
      outputLogger.info("Note: output will not be logged, per the doNotLog flag.")
   output = []
   fields = command[:]  # make sure to copy it so we don't destroy it
   fields.extend(args)
   try:
      sanitizeEnvironment()  # make sure we have a consistent environment
      try:
         pipe = Pipe(fields, ignoreStderr=ignoreStderr)
      except OSError:
         # On some platforms (i.e. Cygwin) this intermittently fails the first time we do it.
         # So, we attempt it a second time and if that works, we just go on as usual.
         # The problem appears to be that we sometimes get a bad stderr file descriptor.
         pipe = Pipe(fields, ignoreStderr=ignoreStderr)
      while True:
         line = pipe.stdout.readline()
         if not line: break
         if returnOutput: output.append(line)
         if outputFile is not None: outputFile.write(line)
         if not doNotLog: outputLogger.info(line[:-1])  # this way the log will (hopefully) get updated in realtime
      if outputFile is not None:
         try:  # note, not every file-like object can be flushed
            outputFile.flush()
         except: pass
      if returnOutput:
         return (pipe.wait(), output)
      else:
         return (pipe.wait(), None)
   except OSError, e:
      try:
         if returnOutput:
            if output != []:
               return (pipe.wait(), output)
            else:
               return (pipe.wait(), [ e, ])
         else:
            return (pipe.wait(), None)
      except UnboundLocalError:  # pipe not set
         if returnOutput:
            return (256, [])
         else:
            return (256, None)


##############################
# calculateFileAge() function
##############################

def calculateFileAge(path):
   """
   Calculates the age (in days) of a file.

   The "age" of a file is the amount of time since the file was last used,
   per the most recent of the file's C{st_atime} and C{st_mtime} values.

   Technically, we only intend this function to work with files, but it will
   probably work with anything on the filesystem.

   @param path: Path to a file on disk.

   @return: Age of the file in days (possibly fractional).
   @raise OSError: If the file doesn't exist.
   """
   currentTime = int(time.time())
   fileStats = os.stat(path)
   lastUse = max(fileStats.st_atime, fileStats.st_mtime)  # "most recent" is "largest"
   ageInSeconds = currentTime - lastUse
   ageInDays = ageInSeconds / SECONDS_PER_DAY
   return ageInDays
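The age calculation above can be exercised in a standalone sketch. The `calculate_file_age()` helper is hypothetical (it uses a float `time.time()` rather than the truncated integer above, and hard-codes the `SECONDS_PER_DAY` constant the module defines elsewhere), but the arithmetic is the same: age is measured from the more recent of `st_atime` and `st_mtime`, expressed in possibly fractional days.

```python
import os
import tempfile
import time

SECONDS_PER_DAY = 60.0 * 60.0 * 24.0  # the module-level constant, assumed here

# Standalone sketch of calculateFileAge(): age is measured from the more
# recent of st_atime and st_mtime, expressed in (possibly fractional) days.
def calculate_file_age(path):
    stats = os.stat(path)
    last_use = max(stats.st_atime, stats.st_mtime)  # "most recent" is "largest"
    return (time.time() - last_use) / SECONDS_PER_DAY

# A file written just now should be only a tiny fraction of a day old.
handle = tempfile.NamedTemporaryFile(delete=False)
handle.close()
age = calculate_file_age(handle.name)
os.remove(handle.name)
print(age < 1.0)  # True
```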


###################
# mount() function
###################

def mount(devicePath, mountPoint, fsType):
   """
   Mounts the indicated device at the indicated mount point.

   For instance, to mount a CD, you might use device path C{/dev/cdrw},
   mount point C{/media/cdrw} and filesystem type C{iso9660}.  You can
   safely use any filesystem type that is supported by C{mount} on your
   platform.  If the type is C{None}, we'll attempt to let C{mount}
   auto-detect it.  This may or may not work on all systems.

   @note: This only works on platforms that have a concept of "mounting" a
   filesystem through a command-line C{"mount"} command, like UNIXes.  It
   won't work on Windows.

   @param devicePath: Path of device to be mounted.
   @param mountPoint: Path that device should be mounted at.
   @param fsType: Type of the filesystem assumed to be available via the device.

   @raise IOError: If the device cannot be mounted.
   """
   if fsType is None:
      args = [ devicePath, mountPoint ]
   else:
      args = [ "-t", fsType, devicePath, mountPoint ]
   command = resolveCommand(MOUNT_COMMAND)
   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True)[0]
   if result != 0:
      raise IOError("Error [%d] mounting [%s] at [%s] as [%s]." % (result, devicePath, mountPoint, fsType))


#####################
# unmount() function
#####################

def unmount(mountPoint, removeAfter=False, attempts=1, waitSeconds=0):
   """
   Unmounts whatever device is mounted at the indicated mount point.

   Sometimes, it might not be possible to unmount the mount point
   immediately, if there are still files open there.  Use the C{attempts}
   and C{waitSeconds} arguments to indicate how many unmount attempts to
   make and how many seconds to wait between attempts.  If you pass in zero
   attempts, no attempts will be made (duh).

   If the indicated mount point is not really a mount point per
   C{os.path.ismount()}, then it will be ignored.  This seems to be a safer
   check than looking through C{/etc/mtab}, since C{ismount()} is already in
   the Python standard library and is documented as working on all POSIX
   systems.

   If C{removeAfter} is C{True}, then the mount point will be removed using
   C{os.rmdir()} after the unmount action succeeds.  If for some reason the
   mount point is not a directory, then it will not be removed.

   @note: This only works on platforms that have a concept of "mounting" a
   filesystem through a command-line C{"mount"} command, like UNIXes.  It
   won't work on Windows.

   @param mountPoint: Mount point to be unmounted.
   @param removeAfter: Remove the mount point after unmounting it.
   @param attempts: Number of times to attempt the unmount.
   @param waitSeconds: Number of seconds to wait between repeated attempts.

   @raise IOError: If the mount point is still mounted after attempts are exhausted.
   """
   if os.path.ismount(mountPoint):
      for attempt in range(0, attempts):
         logger.debug("Making attempt %d to unmount [%s].", attempt, mountPoint)
         command = resolveCommand(UMOUNT_COMMAND)
         result = executeCommand(command, [ mountPoint, ], returnOutput=False, ignoreStderr=True)[0]
         if result != 0:
            logger.error("Error [%d] unmounting [%s] on attempt %d.", result, mountPoint, attempt)
         elif os.path.ismount(mountPoint):
            logger.error("After attempt %d, [%s] is still mounted.", attempt, mountPoint)
         else:
            logger.debug("Successfully unmounted [%s] on attempt %d.", mountPoint, attempt)
            break  # this will cause us to skip the loop else: clause
         if attempt+1 < attempts:  # i.e. this isn't the last attempt
            if waitSeconds > 0:
               logger.info("Sleeping %d second(s) before next unmount attempt.", waitSeconds)
               time.sleep(waitSeconds)
      else:
         if os.path.ismount(mountPoint):
            raise IOError("Unable to unmount [%s] after %d attempts." % (mountPoint, attempts))
         logger.info("Mount point [%s] seems to have finally gone away.", mountPoint)
      if os.path.isdir(mountPoint) and removeAfter:
         logger.debug("Removing mount point [%s].", mountPoint)
         os.rmdir(mountPoint)


###########################
# deviceMounted() function
###########################

def deviceMounted(devicePath):
   """
   Indicates whether a specific filesystem device is currently mounted.

   We determine whether the device is mounted by looking through the
   system's C{mtab} file.  This file shows every currently-mounted
   filesystem, ordered by device.  We only do the check if the C{mtab} file
   exists and is readable.  Otherwise, we assume that the device is not
   mounted.

   @note: This only works on platforms that have a concept of an mtab file
   to show mounted volumes, like UNIXes.  It won't work on Windows.

   @param devicePath: Path of device to be checked

   @return: True if device is mounted, false otherwise.
   """
   if os.path.exists(MTAB_FILE) and os.access(MTAB_FILE, os.R_OK):
      realPath = os.path.realpath(devicePath)
      lines = open(MTAB_FILE).readlines()
      for line in lines:
         (mountDevice, mountPoint, remainder) = line.split(None, 2)
         if mountDevice in [ devicePath, realPath, ]:
            logger.debug("Device [%s] is mounted at [%s].", devicePath, mountPoint)
            return True
   return False


########################
# encodePath() function
########################

def encodePath(path):
   r"""
   Safely encodes a filesystem path.

   Many Python filesystem functions, such as C{os.listdir}, behave
   differently if they are passed unicode arguments versus simple string
   arguments.  For instance, C{os.listdir} generally returns unicode path
   names if it is passed a unicode argument, and string pathnames if it is
   passed a string argument.

   However, this behavior often isn't as consistent as we might like.  As an
   example, C{os.listdir} "gives up" if it finds a filename that it can't
   properly encode given the current locale settings.  This means that the
   returned list is a mixed set of unicode and simple string paths.  This
   has consequences later, because other filesystem functions like
   C{os.path.join} will blow up if they are given one string path and one
   unicode path.

   On comp.lang.python, Martin v. Löwis explained the C{os.listdir} behavior
   like this::

      The operating system (POSIX) does not have the inherent notion that
      file names are character strings.  Instead, in POSIX, file names are
      primarily byte strings.  There are some bytes which are interpreted as
      characters (e.g. '\x2e', which is '.', or '\x2f', which is '/'), but
      apart from that, most OS layers think these are just bytes.

      Now, most *people* think that file names are character strings.  To
      interpret a file name as a character string, you need to know what the
      encoding is to interpret the file names (which are byte strings) as
      character strings.

      There is, unfortunately, no operating system API to carry the notion
      of a file system encoding.  By convention, the locale settings should
      be used to establish this encoding, in particular the LC_CTYPE facet
      of the locale.  This is defined in the environment variables LC_CTYPE,
      LC_ALL, and LANG (searched in this order).

      If LANG is not set, the "C" locale is assumed, which uses ASCII as its
      file system encoding.  In this locale, '\xe2\x99\xaa\xe2\x99\xac' is
      not a valid file name (at least it cannot be interpreted as
      characters, and hence not be converted to Unicode).

      Now, your Python script has requested that all file names *should* be
      returned as character (ie. Unicode) strings, but Python cannot comply,
      since there is no way to find out what this byte string means, in
      terms of characters.

      So we have three options:

      1. Skip this string, only return the ones that can be converted to
         Unicode.  Give the user the impression the file does not exist.
      2. Return the string as a byte string
      3. Refuse to listdir altogether, raising an exception (i.e. return
         nothing)

      Python has chosen alternative 2, allowing the application to implement
      1 or 3 on top of that if it wants to (or come up with other
      strategies, such as user feedback).

   As a solution, he suggests that rather than passing unicode paths into
   the filesystem functions, that I should sensibly encode the path first.
   That is what this function accomplishes.  Any function which takes a
   filesystem path as an argument should encode it first, before using it
   for any other purpose.

   I confess I still don't completely understand how this works.  On a
   system with filesystem encoding "ISO-8859-1", a path
   C{u"\xe2\x99\xaa\xe2\x99\xac"} is converted into the string
   C{"\xe2\x99\xaa\xe2\x99\xac"}.  However, on a system with a "utf-8"
   encoding, the result is a completely different string:
   C{"\xc3\xa2\xc2\x99\xc2\xaa\xc3\xa2\xc2\x99\xc2\xac"}.  A quick test
   where I write to the first filename and open the second proves that the
   two strings represent the same file on disk, which is all I really care
   about.

   @note: As a special case, if C{path} is C{None}, then this function will
   return C{None}.

   @note: To provide several examples of encoding values, my Debian sarge
   box with an ext3 filesystem has Python filesystem encoding
   C{ISO-8859-1}.  User Anarcat's Debian box with an xfs filesystem has
   filesystem encoding C{ANSI_X3.4-1968}.  Both my iBook G4 running Mac OS X
   10.4 and user Dag Rende's SuSE 9.3 box have filesystem encoding C{UTF-8}.

   @note: Just because a filesystem has C{UTF-8} encoding doesn't mean that
   it will be able to handle all extended-character filenames.  For
   instance, certain extended-character (but not UTF-8) filenames -- like
   the ones in the regression test tar file C{test/data/tree13.tar.gz} --
   are not valid under Mac OS X, and it's not even possible to extract them
   from the tarfile on that platform.

   @param path: Path to encode

   @return: Path, as a string, encoded appropriately
   @raise ValueError: If the path cannot be encoded properly.
   """
   if path is None:
      return path
   try:
      if isinstance(path, unicode):
         encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
         path = path.encode(encoding)
      return path
   except UnicodeError:
      raise ValueError("Path could not be safely encoded as %s." % encoding)


########################
# nullDevice() function
########################

def nullDevice():
   """
   Attempts to portably return the null device on this system.

   The null device is something like C{/dev/null} on a UNIX system.  The
   name varies on other platforms.
   """
   return os.devnull


##############################
# deriveDayOfWeek() function
##############################

def deriveDayOfWeek(dayName):
   """
   Converts English day name to numeric day of week as from C{time.localtime}.

   For instance, the day C{monday} would be converted to the number C{0}.

   @param dayName: Day of week to convert
   @type dayName: string, i.e. C{"monday"}, C{"tuesday"}, etc.

   @returns: Integer, where Monday is 0 and Sunday is 6; or -1 if no conversion is possible.
   """
   if dayName.lower() == "monday":
      return 0
   elif dayName.lower() == "tuesday":
      return 1
   elif dayName.lower() == "wednesday":
      return 2
   elif dayName.lower() == "thursday":
      return 3
   elif dayName.lower() == "friday":
      return 4
   elif dayName.lower() == "saturday":
      return 5
   elif dayName.lower() == "sunday":
      return 6
   else:
      return -1  # What else can we do??  Throw an exception, I guess.
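The if/elif chain above is equivalent to an index lookup in an ordered list, since `tm_wday` from `time.localtime()` numbers Monday as 0. The `derive_day_of_week()` helper below is a hypothetical table-driven restatement, not the module's implementation.

```python
# Table-driven sketch of deriveDayOfWeek(): the index in this list is the
# tm_wday value that time.localtime() reports (Monday is 0, Sunday is 6).
DAY_NAMES = ["monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday"]

def derive_day_of_week(day_name):
    try:
        return DAY_NAMES.index(day_name.lower())
    except ValueError:
        return -1  # unknown day name, mirroring the original's fallback

print(derive_day_of_week("Monday"))   # 0
print(derive_day_of_week("sunday"))   # 6
print(derive_day_of_week("someday"))  # -1
```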


###########################
# isStartOfWeek() function
###########################

def isStartOfWeek(startingDay):
   """
   Indicates whether "today" is the backup starting day per configuration.

   If the current day's English name matches the indicated starting day,
   then today is a starting day.

   @param startingDay: Configured starting day.
   @type startingDay: string, i.e. C{"monday"}, C{"tuesday"}, etc.

   @return: Boolean indicating whether today is the starting day.
   """
   value = time.localtime().tm_wday == deriveDayOfWeek(startingDay)
   if value:
      logger.debug("Today is the start of the week.")
   else:
      logger.debug("Today is NOT the start of the week.")
   return value
    1841
#################################
# buildNormalizedPath() function
#################################

def buildNormalizedPath(path):
   """
   Returns a "normalized" path based on a path name.

   A normalized path is a representation of a path that is also a valid file
   name. To make a valid file name out of a complete path, we have to convert
   or remove some characters that are significant to the filesystem -- in
   particular, the path separator and any leading C{'.'} character (which would
   cause the file to be hidden in a file listing).

   Note that this is a one-way transformation -- you can't safely derive the
   original path from the normalized path.

   To normalize a path, we begin by looking at the first character. If the
   first character is C{'/'} or C{'\\'}, it gets removed. If the first
   character is C{'.'}, it gets converted to C{'_'}. Then, we look through the
   rest of the path and convert all remaining C{'/'} or C{'\\'} characters to
   C{'-'}, and all remaining whitespace characters to C{'_'}.

   As a special case, a path consisting only of a single C{'/'} or C{'\\'}
   character will be converted to C{'-'}.

   @param path: Path to normalize

   @return: Normalized path as described above.

   @raise ValueError: If the path is None
   """
   if path is None:
      raise ValueError("Cannot normalize path None.")
   elif len(path) == 0:
      return path
   elif path == "/" or path == "\\":
      return "-"
   else:
      normalized = path
      normalized = re.sub(r"^\/", "", normalized)   # remove leading '/'
      normalized = re.sub(r"^\\", "", normalized)   # remove leading '\'
      normalized = re.sub(r"^\.", "_", normalized)  # convert leading '.' to '_' so file won't be hidden
      normalized = re.sub(r"\/", "-", normalized)   # convert all '/' characters to '-'
      normalized = re.sub(r"\\", "-", normalized)   # convert all '\' characters to '-'
      normalized = re.sub(r"\s", "_", normalized)   # convert all whitespace to '_'
      return normalized
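A few concrete transformations make the rules easier to follow. The sketch below condenses the same steps into a standalone function so the examples can be checked:

```python
import re

def buildNormalizedPath(path):
   # Condensed copy of the transformation described above, standalone so the
   # examples below are runnable
   if path is None:
      raise ValueError("Cannot normalize path None.")
   if len(path) == 0:
      return path
   if path in ("/", "\\"):
      return "-"
   normalized = re.sub(r"^[/\\]", "", path)        # strip the leading separator
   normalized = re.sub(r"^\.", "_", normalized)    # a leading '.' would hide the file
   normalized = re.sub(r"[/\\]", "-", normalized)  # remaining separators become '-'
   normalized = re.sub(r"\s", "_", normalized)     # whitespace becomes '_'
   return normalized

# Example transformations:
#   /var/log/messages  ->  var-log-messages
#   .profile           ->  _profile
#   /                  ->  -
```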
#################################
# sanitizeEnvironment() function
#################################

def sanitizeEnvironment():
   """
   Sanitizes the operating system environment.

   The operating system environment is contained in C{os.environ}. This method
   sanitizes the contents of that dictionary.

   Currently, all it does is reset the locale (removing C{$LC_*}) and set the
   default language (C{$LANG}) to L{DEFAULT_LANGUAGE}. This way, we can count
   on consistent localization regardless of what the end-user has configured.
   This is important for code that needs to parse program output.

   The C{os.environ} dictionary is modified in-place. If C{$LANG} is already
   set to the proper value, it is not re-set, so we can avoid the memory leaks
   that are documented to occur on BSD-based systems.

   @return: Copy of the sanitized environment.
   """
   for var in LOCALE_VARS:
      if os.environ.has_key(var):
         del os.environ[var]
   if os.environ.has_key(LANG_VAR):
      if os.environ[LANG_VAR] != DEFAULT_LANGUAGE:  # no need to reset if it exists (avoid leaks on BSD systems)
         os.environ[LANG_VAR] = DEFAULT_LANGUAGE
   return os.environ.copy()
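Under Python 3 the C{has_key} calls would become C{in} tests; a standalone sketch of the same logic (the LOCALE_VARS list shown here is an illustrative subset, not the module's actual constant):

```python
import os

LOCALE_VARS = ["LC_ALL", "LC_CTYPE", "LC_MESSAGES"]  # illustrative subset
LANG_VAR = "LANG"
DEFAULT_LANGUAGE = "C"

def sanitizeEnvironment():
   # Drop the locale overrides, then pin $LANG -- re-setting it only when it
   # is present and different, to avoid the putenv() leak seen on BSD systems
   for var in LOCALE_VARS:
      if var in os.environ:
         del os.environ[var]
   if LANG_VAR in os.environ and os.environ[LANG_VAR] != DEFAULT_LANGUAGE:
      os.environ[LANG_VAR] = DEFAULT_LANGUAGE
   return os.environ.copy()
```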
#########################
# checkUnique() function
#########################

def checkUnique(prefix, values):
   """
   Checks that all values are unique.

   The values list is checked for duplicate values. If there are
   duplicates, an exception is thrown. All duplicate values are listed in
   the exception.

   @param prefix: Prefix to use in the thrown exception
   @param values: List of values to check

   @raise ValueError: If there are duplicates in the list
   """
   values.sort()
   duplicates = []
   for i in range(1, len(values)):
      if values[i-1] == values[i]:
         duplicates.append(values[i])
   if duplicates:
      raise ValueError("%s %s" % (prefix, duplicates))
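Usage is straightforward; a condensed, runnable copy with an example call:

```python
def checkUnique(prefix, values):
   # Condensed copy of the function above, so the example is runnable.
   # Note that sort() mutates the caller's list, as in the original.
   values.sort()
   duplicates = [values[i] for i in range(1, len(values)) if values[i-1] == values[i]]
   if duplicates:
      raise ValueError("%s %s" % (prefix, duplicates))

checkUnique("Duplicate names:", ["machine1", "machine2"])  # passes silently
```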
#######################################
# parseCommaSeparatedString() function
#######################################

def parseCommaSeparatedString(commaString):
   """
   Parses a list of values out of a comma-separated string.

   The items in the list are split by comma, and then have whitespace
   stripped. As a special case, if C{commaString} is C{None}, then C{None}
   will be returned.

   @param commaString: List of values in comma-separated string format.
   @return: Values from commaString split into a list, or C{None}.
   """
   if commaString is None:
      return None
   else:
      pass1 = commaString.split(",")
      pass2 = []
      for item in pass1:
         item = item.strip()
         if len(item) > 0:
            pass2.append(item)
      return pass2
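The two-pass loop can be collapsed into a single comprehension with identical behavior; a standalone sketch:

```python
def parseCommaSeparatedString(commaString):
   # Equivalent one-pass version of the function above: split on commas,
   # strip whitespace, and drop empty items
   if commaString is None:
      return None
   return [item.strip() for item in commaString.split(",") if item.strip()]

# "a, b , c,," parses to ["a", "b", "c"] -- empty items are dropped
```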

CedarBackup2-2.26.5/doc/interface/CedarBackup2.release-pysrc.html

CedarBackup2.release

    Source Code for Module CedarBackup2.release

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides location to maintain release information.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

"""
Provides location to maintain version information.

@sort: AUTHOR, EMAIL, COPYRIGHT, VERSION, DATE, URL

@var AUTHOR: Author of software.
@var EMAIL: Email address of author.
@var COPYRIGHT: Copyright date.
@var VERSION: Software version.
@var DATE: Software release date.
@var URL: URL of Cedar Backup webpage.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""

AUTHOR      = "Kenneth J. Pronovici"
EMAIL       = "pronovic@ieee.org"
COPYRIGHT   = "2004-2011,2013-2016"
VERSION     = "2.26.5"
DATE        = "02 Jan 2016"
URL         = "https://bitbucket.org/cedarsolutions/cedar-backup2"
    

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.constants-module.html

constants

    Module constants


    Variables

    COLLECT_INDICATOR
    DIGEST_EXTENSION
    DIR_TIME_FORMAT
    INDICATOR_PATTERN
    STAGE_INDICATOR
    STORE_INDICATOR
    __package__

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.amazons3-module.html

amazons3

    Module amazons3


    Classes

    AmazonS3Config
    LocalConfig

    Functions

    executeAction

    Variables

    AWS_COMMAND
    STORE_INDICATOR
    SU_COMMAND
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/frames.html

CedarBackup2

CedarBackup2-2.26.5/doc/interface/redirect.html

Epydoc Redirect Page


CedarBackup2-2.26.5/doc/interface/index.html

CedarBackup2

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.split.SplitConfig-class.html

CedarBackup2.extend.split.SplitConfig

    Class SplitConfig

    source code

    object --+
             |
            SplitConfig
    

    Class representing split configuration.

    Split configuration is used for splitting staging directories.

    The following restrictions exist on data in this class:

    • The size limit must be a ByteQuantity
    • The split size must be a ByteQuantity
Instance Methods
     
    __init__(self, sizeLimit=None, splitSize=None)
Constructor for the SplitConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setSizeLimit(self, value)
    Property target used to set the size limit.
    source code
     
    _getSizeLimit(self)
    Property target used to get the size limit.
    source code
     
    _setSplitSize(self, value)
    Property target used to set the split size.
    source code
     
    _getSplitSize(self)
    Property target used to get the split size.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      sizeLimit
    Size limit, as a ByteQuantity
      splitSize
    Split size, as a ByteQuantity

    Inherited from object: __class__

Method Details

    __init__(self, sizeLimit=None, splitSize=None)
    (Constructor)

    source code 

Constructor for the SplitConfig class.

    Parameters:
    • sizeLimit - Size limit of the files, in bytes
    • splitSize - Size that files exceeding the limit will be split into, in bytes
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setSizeLimit(self, value)

    source code 

    Property target used to set the size limit. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

    _setSplitSize(self, value)

    source code 

    Property target used to set the split size. If not None, the value must be a ByteQuantity object.

    Raises:
    • ValueError - If the value is not a ByteQuantity

Property Details

    sizeLimit

    Size limit, as a ByteQuantity

    Get Method:
    _getSizeLimit(self) - Property target used to get the size limit.
    Set Method:
    _setSizeLimit(self, value) - Property target used to set the size limit.

    splitSize

    Split size, as a ByteQuantity

    Get Method:
    _getSplitSize(self) - Property target used to get the split size.
    Set Method:
    _setSplitSize(self, value) - Property target used to set the split size.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mysql.MysqlConfig-class.html

CedarBackup2.extend.mysql.MysqlConfig

    Class MysqlConfig

    source code

    object --+
             |
            MysqlConfig
    

    Class representing MySQL configuration.

    The MySQL configuration information is used for backing up MySQL databases.

    The following restrictions exist on data in this class:

    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The 'all' flag must be 'Y' if no databases are defined.
    • The 'all' flag must be 'N' if any databases are defined.
    • Any values in the databases list must be strings.
Instance Methods
     
    __init__(self, user=None, password=None, compressMode=None, all=None, databases=None)
    Constructor for the MysqlConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setUser(self, value)
    Property target used to set the user value.
    source code
     
    _getUser(self)
    Property target used to get the user value.
    source code
     
    _setPassword(self, value)
    Property target used to set the password value.
    source code
     
    _getPassword(self)
    Property target used to get the password value.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setAll(self, value)
    Property target used to set the 'all' flag.
    source code
     
    _getAll(self)
    Property target used to get the 'all' flag.
    source code
     
    _setDatabases(self, value)
    Property target used to set the databases list.
    source code
     
    _getDatabases(self)
    Property target used to get the databases list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      user
    User to execute backup as.
      password
    Password associated with user.
      all
    Indicates whether to back up all databases.
      databases
    List of databases to back up.
      compressMode
    Compress mode to be used for backed-up files.

    Inherited from object: __class__

Method Details

    __init__(self, user=None, password=None, compressMode=None, all=None, databases=None)
    (Constructor)

    source code 

    Constructor for the MysqlConfig class.

    Parameters:
    • user - User to execute backup as.
    • password - Password associated with user.
    • compressMode - Compress mode for backed-up files.
    • all - Indicates whether to back up all databases.
    • databases - List of databases to back up.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setAll(self, value)

    source code 

    Property target used to set the 'all' flag. No validations, but we normalize the value to True or False.

    _setDatabases(self, value)

    source code 

    Property target used to set the databases list. Either the value must be None or each element must be a string.

    Raises:
    • ValueError - If the value is not a string.

Property Details

    user

    User to execute backup as.

    Get Method:
    _getUser(self) - Property target used to get the user value.
    Set Method:
    _setUser(self, value) - Property target used to set the user value.

    password

    Password associated with user.

    Get Method:
    _getPassword(self) - Property target used to get the password value.
    Set Method:
    _setPassword(self, value) - Property target used to set the password value.

    all

    Indicates whether to back up all databases.

    Get Method:
    _getAll(self) - Property target used to get the 'all' flag.
    Set Method:
    _setAll(self, value) - Property target used to set the 'all' flag.

    databases

    List of databases to back up.

    Get Method:
    _getDatabases(self) - Property target used to get the databases list.
    Set Method:
    _setDatabases(self, value) - Property target used to set the databases list.

    compressMode

    Compress mode to be used for backed-up files.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.postgresql.LocalConfig-class.html

CedarBackup2.extend.postgresql.LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit PostgreSQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <postgresql> configuration section as the next child of a parent.
    source code
     
    _setPostgresql(self, value)
    Property target used to set the postgresql configuration value.
    source code
     
    _getPostgresql(self)
    Property target used to get the postgresql configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parsePostgresql(parent)
    Parses a postgresql configuration section.
    source code
Properties
      postgresql
    Postgresql configuration in terms of a PostgresqlConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    The compress mode must be filled in. Then, if the 'all' flag is set, no databases are allowed, and if the 'all' flag is not set, at least one database is required.

    Raises:
    • ValueError - If one of the validations fails.
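The validation rules above are simple enough to state directly; a standalone sketch of the same checks (a hypothetical helper operating on bare values, not the class's actual validate method):

```python
def validatePostgresqlConfig(compressMode, allFlag, databases):
   # Standalone sketch of the rules described above: compress mode required;
   # 'all' set means no databases; 'all' unset means at least one database
   if compressMode is None:
      raise ValueError("Compress mode must be filled in.")
   if allFlag and databases:
      raise ValueError("No databases are allowed when the 'all' flag is set.")
   if not allFlag and not databases:
      raise ValueError("At least one database is required when 'all' is not set.")
```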

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <postgresql> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      user           //cb_config/postgresql/user
      compressMode   //cb_config/postgresql/compress_mode
      all            //cb_config/postgresql/all
    

    We also add groups of the following items, one list element per item:

      database       //cb_config/postgresql/database
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.

    _setPostgresql(self, value)

    source code 

    Property target used to set the postgresql configuration value. If not None, the value must be a PostgresqlConfig object.

    Raises:
    • ValueError - If the value is not a PostgresqlConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the postgresql configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parsePostgresql(parent)
    Static Method

    source code 

    Parses a postgresql configuration section.

    We read the following fields:

      user           //cb_config/postgresql/user
      compressMode   //cb_config/postgresql/compress_mode
      all            //cb_config/postgresql/all
    

    We also read groups of the following item, one list element per item:

      databases      //cb_config/postgresql/database
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
    PostgresqlConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

Property Details

    postgresql

    Postgresql configuration in terms of a PostgresqlConfig object.

    Get Method:
    _getPostgresql(self) - Property target used to get the postgresql configuration value.
    Set Method:
    _setPostgresql(self, value) - Property target used to set the postgresql configuration value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.peer-pysrc.html

CedarBackup2.peer

    Source Code for Module CedarBackup2.peer

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Copyright (c) 2004-2008,2010 Kenneth J. Pronovici.
# All rights reserved.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License,
# Version 2, as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Copies of the GNU General Public License are available from
# the Free Software Foundation website, http://www.gnu.org/.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides backup peer-related objects.
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Provides backup peer-related objects and utility functions.

@sort: LocalPeer, RemotePeer

@var DEF_COLLECT_INDICATOR: Name of the default collect indicator file.
@var DEF_STAGE_INDICATOR: Name of the default stage indicator file.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""


########################################################################
# Imported modules
########################################################################

# System modules
import os
import logging
import shutil

# Cedar Backup modules
from CedarBackup2.filesystem import FilesystemList
from CedarBackup2.util import resolveCommand, executeCommand, isRunningAsRoot
from CedarBackup2.util import splitCommandLine, encodePath
from CedarBackup2.config import VALID_FAILURE_MODES


########################################################################
# Module-wide constants and variables
########################################################################

logger                  = logging.getLogger("CedarBackup2.log.peer")

DEF_RCP_COMMAND         = [ "/usr/bin/scp", "-B", "-q", "-C" ]
DEF_RSH_COMMAND         = [ "/usr/bin/ssh", ]
DEF_CBACK_COMMAND       = "/usr/bin/cback"

DEF_COLLECT_INDICATOR   = "cback.collect"
DEF_STAGE_INDICATOR     = "cback.stage"

SU_COMMAND              = [ "su" ]
    
########################################################################
# LocalPeer class definition
########################################################################

class LocalPeer(object):
   ######################
   # Class documentation
   ######################

   """
   Backup peer representing a local peer in a backup pool.

   This is a class representing a local (non-network) peer in a backup pool.
   Local peers are backed up by simple filesystem copy operations. A local
   peer has associated with it a name (typically, but not necessarily, a
   hostname) and a collect directory.

   The public methods other than the constructor are part of a "backup peer"
   interface shared with the C{RemotePeer} class.

   @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator,
          _copyLocalDir, _copyLocalFile, name, collectDir
   """

   ##############
   # Constructor
   ##############
   def __init__(self, name, collectDir, ignoreFailureMode=None):
      """
      Initializes a local backup peer.

      Note that the collect directory must be an absolute path, but does not
      have to exist when the object is instantiated. We do a lazy validation
      on this value since we could (potentially) be creating peer objects
      before an ongoing backup completed.

      @param name: Name of the backup peer
      @type name: String, typically a hostname

      @param collectDir: Path to the peer's collect directory
      @type collectDir: String representing an absolute local path on disk

      @param ignoreFailureMode: Ignore failure mode for this peer
      @type ignoreFailureMode: One of VALID_FAILURE_MODES

      @raise ValueError: If the name is empty.
      @raise ValueError: If collect directory is not an absolute path.
      """
      self._name = None
      self._collectDir = None
      self._ignoreFailureMode = None
      self.name = name
      self.collectDir = collectDir
      self.ignoreFailureMode = ignoreFailureMode
   #############
   # Properties
   #############
   def _setName(self, value):
      """
      Property target used to set the peer name.
      The value must be a non-empty string and cannot be C{None}.
      @raise ValueError: If the value is an empty string or C{None}.
      """
      if value is None or len(value) < 1:
         raise ValueError("Peer name must be a non-empty string.")
      self._name = value
   def _getName(self):
      """
      Property target used to get the peer name.
      """
      return self._name
   def _setCollectDir(self, value):
      """
      Property target used to set the collect directory.
      The value must be an absolute path and cannot be C{None}.
      It does not have to exist on disk at the time of assignment.
      @raise ValueError: If the value is C{None} or is not an absolute path.
      @raise ValueError: If a path cannot be encoded properly.
      """
      if value is None or not os.path.isabs(value):
         raise ValueError("Collect directory must be an absolute path.")
      self._collectDir = encodePath(value)
    171
    172 - def _getCollectDir(self):
    173 """ 174 Property target used to get the collect directory. 175 """ 176 return self._collectDir
    177
    178 - def _setIgnoreFailureMode(self, value):
    179 """ 180 Property target used to set the ignoreFailure mode. 181 If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. 182 @raise ValueError: If the value is not valid. 183 """ 184 if value is not None: 185 if value not in VALID_FAILURE_MODES: 186 raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) 187 self._ignoreFailureMode = value
    188
    189 - def _getIgnoreFailureMode(self):
    190 """ 191 Property target used to get the ignoreFailure mode. 192 """ 193 return self._ignoreFailureMode
    194 195 name = property(_getName, _setName, None, "Name of the peer.") 196 collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute local path).") 197 ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") 198 199 200 ################# 201 # Public methods 202 ################# 203
    204 - def stagePeer(self, targetDir, ownership=None, permissions=None):
    205 """ 206 Stages data from the peer into the indicated local target directory. 207 208 The collect and target directories must both already exist before this 209 method is called. If passed in, ownership and permissions will be 210 applied to the files that are copied. 211 212 @note: The caller is responsible for checking that the indicator exists, 213 if they care. This function only stages the files within the directory. 214 215 @note: If you have user/group as strings, call the L{util.getUidGid} function 216 to get the associated uid/gid as an ownership tuple. 217 218 @param targetDir: Target directory to write data into 219 @type targetDir: String representing a directory on disk 220 221 @param ownership: Owner and group that the staged files should have 222 @type ownership: Tuple of numeric ids C{(uid, gid)} 223 224 @param permissions: Permissions that the staged files should have 225 @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). 226 227 @return: Number of files copied from the source directory to the target directory. 228 229 @raise ValueError: If collect directory is not a directory or does not exist 230 @raise ValueError: If target directory is not a directory, does not exist or is not absolute. 231 @raise ValueError: If a path cannot be encoded properly. 232 @raise IOError: If there were no files to stage (i.e. the directory was empty) 233 @raise IOError: If there is an IO error copying a file. 
234 @raise OSError: If there is an OS error copying or changing permissions on a file 235 """ 236 targetDir = encodePath(targetDir) 237 if not os.path.isabs(targetDir): 238 logger.debug("Target directory [%s] not an absolute path.", targetDir) 239 raise ValueError("Target directory must be an absolute path.") 240 if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): 241 logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir) 242 raise ValueError("Collect directory is not a directory or does not exist on disk.") 243 if not os.path.exists(targetDir) or not os.path.isdir(targetDir): 244 logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir) 245 raise ValueError("Target directory is not a directory or does not exist on disk.") 246 count = LocalPeer._copyLocalDir(self.collectDir, targetDir, ownership, permissions) 247 if count == 0: 248 raise IOError("Did not copy any files from local peer.") 249 return count
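The staging contract above (validate both directories, copy only the top-level files, and treat an empty source as an error) can be sketched with nothing but the standard library. The `stage_dir` function below is an illustrative stand-in for `stagePeer`, not the class's actual implementation: it omits ownership/permissions handling and path encoding, and it simply skips soft links rather than raising on them.

```python
import os
import shutil

def stage_dir(source_dir, target_dir):
    """Flat (non-recursive) copy mirroring stagePeer's contract -- a sketch only."""
    if not os.path.isabs(target_dir):
        raise ValueError("Target directory must be an absolute path.")
    if not os.path.isdir(source_dir) or not os.path.isdir(target_dir):
        raise ValueError("Source and target must be existing directories.")
    copied = 0
    for name in os.listdir(source_dir):
        path = os.path.join(source_dir, name)
        if os.path.isfile(path) and not os.path.islink(path):
            shutil.copy(path, os.path.join(target_dir, name))
            copied += 1
    if copied == 0:
        raise IOError("Did not copy any files from local peer.")
    return copied
```

Returning the count lets the caller distinguish a successful stage from a silently empty one, which is the reason for the zero-count IOError in the real method.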
    250
    251 - def checkCollectIndicator(self, collectIndicator=None):
    252 """ 253 Checks the collect indicator in the peer's staging directory. 254 255 When a peer has completed collecting its backup files, it will write an 256 empty indicator file into its collect directory. This method checks to 257 see whether that indicator has been written. We're "stupid" here - if 258 the collect directory doesn't exist, you'll naturally get back C{False}. 259 260 If you need to, you can override the name of the collect indicator file 261 by passing in a different name. 262 263 @param collectIndicator: Name of the collect indicator file to check 264 @type collectIndicator: String representing name of a file in the collect directory 265 266 @return: Boolean true/false depending on whether the indicator exists. 267 @raise ValueError: If a path cannot be encoded properly. 268 """ 269 collectIndicator = encodePath(collectIndicator) 270 if collectIndicator is None: 271 return os.path.exists(os.path.join(self.collectDir, DEF_COLLECT_INDICATOR)) 272 else: 273 return os.path.exists(os.path.join(self.collectDir, collectIndicator))
    274
    275 - def writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None):
    276 """ 277 Writes the stage indicator in the peer's staging directory. 278 279 When the master has completed collecting its backup files, it will write 280 an empty indicator file into the peer's collect directory. The presence 281 of this file implies that the staging process is complete. 282 283 If you need to, you can override the name of the stage indicator file by 284 passing in a different name. 285 286 @note: If you have user/group as strings, call the L{util.getUidGid} 287 function to get the associated uid/gid as an ownership tuple. 288 289 @param stageIndicator: Name of the indicator file to write 290 @type stageIndicator: String representing name of a file in the collect directory 291 292 @param ownership: Owner and group that the indicator file should have 293 @type ownership: Tuple of numeric ids C{(uid, gid)} 294 295 @param permissions: Permissions that the indicator file should have 296 @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). 297 298 @raise ValueError: If collect directory is not a directory or does not exist 299 @raise ValueError: If a path cannot be encoded properly. 300 @raise IOError: If there is an IO error creating the file. 301 @raise OSError: If there is an OS error creating or changing permissions on the file 302 """ 303 stageIndicator = encodePath(stageIndicator) 304 if not os.path.exists(self.collectDir) or not os.path.isdir(self.collectDir): 305 logger.debug("Collect directory [%s] is not a directory or does not exist on disk.", self.collectDir) 306 raise ValueError("Collect directory is not a directory or does not exist on disk.") 307 if stageIndicator is None: 308 fileName = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) 309 else: 310 fileName = os.path.join(self.collectDir, stageIndicator) 311 LocalPeer._copyLocalFile(None, fileName, ownership, permissions) # None for sourceFile results in an empty target
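Both indicator methods rely on the same simple protocol: completion is signalled by dropping an empty marker file into the collect directory, and checked by testing for its existence. A minimal sketch of that protocol follows; the indicator name used in it is a placeholder, since the real defaults come from the module-level DEF_COLLECT_INDICATOR and DEF_STAGE_INDICATOR constants defined elsewhere in this file.

```python
import os

def write_indicator(directory, name):
    """Drop an empty marker file, as writeStageIndicator does."""
    open(os.path.join(directory, name), "w").close()

def check_indicator(directory, name):
    """True only if the marker exists; a missing directory just yields False."""
    return os.path.exists(os.path.join(directory, name))
```

Note the deliberately "stupid" semantics: a nonexistent directory is indistinguishable from a missing indicator, exactly as documented for checkCollectIndicator.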
    312 313 314 ################## 315 # Private methods 316 ################## 317 318 @staticmethod
    319 - def _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None):
    320 """ 321 Copies files from the source directory to the target directory. 322 323 This function is not recursive. Only the files in the directory will be 324 copied. Ownership and permissions will be left at their default values 325 if new values are not specified. The source and target directories are 326 allowed to be soft links to a directory, but a soft link within the 327 source directory will result in an exception. 328 329 @note: If you have user/group as strings, call the L{util.getUidGid} 330 function to get the associated uid/gid as an ownership tuple. 331 332 @param sourceDir: Source directory 333 @type sourceDir: String representing a directory on disk 334 335 @param targetDir: Target directory 336 @type targetDir: String representing a directory on disk 337 338 @param ownership: Owner and group that the copied files should have 339 @type ownership: Tuple of numeric ids C{(uid, gid)} 340 341 @param permissions: Permissions that the copied files should have 342 @type permissions: UNIX permissions mode, specified in octal (e.g. C{0640}). 343 344 @return: Number of files copied from the source directory to the target directory. 345 346 @raise ValueError: If source or target is not a directory or does not exist. 347 @raise ValueError: If a path cannot be encoded properly. 348 @raise IOError: If there is an IO error copying the files. 349 @raise OSError: If there is an OS error copying or changing permissions on a file 350 """ 351 filesCopied = 0 352 sourceDir = encodePath(sourceDir) 353 targetDir = encodePath(targetDir) 354 for fileName in os.listdir(sourceDir): 355 sourceFile = os.path.join(sourceDir, fileName) 356 targetFile = os.path.join(targetDir, fileName) 357 LocalPeer._copyLocalFile(sourceFile, targetFile, ownership, permissions) 358 filesCopied += 1 359 return filesCopied
    360 361 @staticmethod
    362 - def _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True):
    363 """ 364 Copies a source file to a target file. 365 366 If the source file is C{None} then the target file will be created or 367 overwritten as an empty file. If the target file is C{None}, this method 368 is a no-op. Attempting to copy a soft link or a directory will result in 369 an exception. 370 371 @note: If you have user/group as strings, call the L{util.getUidGid} 372 function to get the associated uid/gid as an ownership tuple. 373 374 @note: If C{overwrite} is false, we will not overwrite a target file that 375 exists when this method is invoked; instead, we'll raise an exception. 376 377 @param sourceFile: Source file to copy 378 @type sourceFile: String representing a file on disk, as an absolute path 379 380 @param targetFile: Target file to create 381 @type targetFile: String representing a file on disk, as an absolute path 382 383 @param ownership: Owner and group that the copied file should have 384 @type ownership: Tuple of numeric ids C{(uid, gid)} 385 386 @param permissions: Permissions that the copied file should have 387 @type permissions: UNIX permissions mode, specified in octal (e.g. C{0640}). 388 389 @param overwrite: Indicates whether it's OK to overwrite the target file. 390 @type overwrite: Boolean true/false. 391 392 @raise ValueError: If the passed-in source file is not a regular file. 393 @raise ValueError: If a path cannot be encoded properly. 394 @raise IOError: If the target file already exists and overwrite is false. 395 @raise IOError: If there is an IO error copying the file. 396 @raise OSError: If there is an OS error copying or changing permissions on a file 397 """ 398 targetFile = encodePath(targetFile) 399 sourceFile = encodePath(sourceFile) 400 if targetFile is None: 401 return 402 if not overwrite: 403 if os.path.exists(targetFile): 404 raise IOError("Target file [%s] already exists."
% targetFile) 405 if sourceFile is None: 406 open(targetFile, "w").write("") 407 else: 408 if os.path.isfile(sourceFile) and not os.path.islink(sourceFile): 409 shutil.copy(sourceFile, targetFile) 410 else: 411 logger.debug("Source [%s] is not a regular file.", sourceFile) 412 raise ValueError("Source is not a regular file.") 413 if ownership is not None: 414 os.chown(targetFile, ownership[0], ownership[1]) 415 if permissions is not None: 416 os.chmod(targetFile, permissions)
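A stdlib-only sketch of the single-file semantics above: a None source yields an empty target, a None target is a no-op, the overwrite guard raises IOError, and permissions are applied after the copy. This illustrates the documented behavior rather than reproducing the real _copyLocalFile, which also handles ownership changes and path encoding.

```python
import os
import shutil

def copy_file(source, target, permissions=None, overwrite=True):
    """Sketch of _copyLocalFile's semantics using only the standard library."""
    if target is None:
        return                      # no-op, matching the documented behavior
    if not overwrite and os.path.exists(target):
        raise IOError("Target file [%s] already exists." % target)
    if source is None:
        open(target, "w").close()   # create/overwrite an empty file
    elif os.path.isfile(source) and not os.path.islink(source):
        shutil.copy(source, target)
    else:
        raise ValueError("Source is not a regular file.")
    if permissions is not None:
        os.chmod(target, permissions)
```

The empty-target case is what writeStageIndicator relies on when it passes None for the source file.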
    417
    418 419 ######################################################################## 420 # RemotePeer class definition 421 ######################################################################## 422 423 -class RemotePeer(object):
    424 425 ###################### 426 # Class documentation 427 ###################### 428 429 """ 430 Backup peer representing a remote peer in a backup pool. 431 432 This is a class representing a remote (networked) peer in a backup pool. 433 Remote peers are backed up using an rcp-compatible copy command. A remote 434 peer has associated with it a name (which must be a valid hostname), a 435 collect directory, a working directory and a copy method (an rcp-compatible 436 command). 437 438 You can also set an optional local user value. This username will be used 439 as the local user for any remote copies that are required. It can only be 440 used if the root user is executing the backup. The root user will C{su} to 441 the local user and execute the remote copies as that user. 442 443 The copy method is associated with the peer and not with the actual request 444 to copy, because we can envision that each remote host might have a 445 different connect method. 446 447 The public methods other than the constructor are part of a "backup peer" 448 interface shared with the C{LocalPeer} class. 449 450 @sort: __init__, stagePeer, checkCollectIndicator, writeStageIndicator, 451 executeRemoteCommand, executeManagedAction, _getDirContents, 452 _copyRemoteDir, _copyRemoteFile, _pushLocalFile, name, collectDir, 453 remoteUser, rcpCommand, rshCommand, cbackCommand 454 """ 455 456 ############## 457 # Constructor 458 ############## 459
    460 - def __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, 461 rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, 462 ignoreFailureMode=None):
    463 """ 464 Initializes a remote backup peer. 465 466 @note: If provided, each command will eventually be parsed into a list of 467 strings suitable for passing to C{util.executeCommand} in order to avoid 468 security holes related to shell interpolation. This parsing will be 469 done by the L{util.splitCommandLine} function. See the documentation for 470 that function for some important notes about its limitations. 471 472 @param name: Name of the backup peer 473 @type name: String, must be a valid DNS hostname 474 475 @param collectDir: Path to the peer's collect directory 476 @type collectDir: String representing an absolute path on the remote peer 477 478 @param workingDir: Working directory that can be used to create temporary files, etc. 479 @type workingDir: String representing an absolute path on the current host. 480 481 @param remoteUser: Name of the Cedar Backup user on the remote peer 482 @type remoteUser: String representing a username, valid via remote shell to the peer 483 484 @param localUser: Name of the Cedar Backup user on the current host 485 @type localUser: String representing a username, valid on the current host 486 487 @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer 488 @type rcpCommand: String representing a system command including required arguments 489 490 @param rshCommand: An rsh-compatible command to use for remote shells to the peer 491 @type rshCommand: String representing a system command including required arguments 492 493 @param cbackCommand: A cback-compatible command to use for executing managed actions 494 @type cbackCommand: String representing a system command including required arguments 495 496 @param ignoreFailureMode: Ignore failure mode for this peer 497 @type ignoreFailureMode: One of VALID_FAILURE_MODES 498 499 @raise ValueError: If collect directory is not an absolute path 500 """ 501 self._name = None 502 self._collectDir = None 503 self._workingDir = None 504
self._remoteUser = None 505 self._localUser = None 506 self._rcpCommand = None 507 self._rcpCommandList = None 508 self._rshCommand = None 509 self._rshCommandList = None 510 self._cbackCommand = None 511 self._ignoreFailureMode = None 512 self.name = name 513 self.collectDir = collectDir 514 self.workingDir = workingDir 515 self.remoteUser = remoteUser 516 self.localUser = localUser 517 self.rcpCommand = rcpCommand 518 self.rshCommand = rshCommand 519 self.cbackCommand = cbackCommand 520 self.ignoreFailureMode = ignoreFailureMode
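The note above describes parsing each configured command string into an argument list so it can be executed without shell interpolation. Cedar Backup uses its own util.splitCommandLine helper for this; the standard library's shlex.split illustrates the same idea. The scp command line below is only an example value, not a project default.

```python
import shlex

# A configured rcp command is kept as one string, but parsed into an
# argv-style list so it can be executed without involving a shell.
rcp_command = "/usr/bin/scp -B -q -o ConnectTimeout=10"
rcp_command_list = shlex.split(rcp_command)
# rcp_command_list is ["/usr/bin/scp", "-B", "-q", "-o", "ConnectTimeout=10"]
```

Executing the list form directly (rather than a single string through a shell) is what closes the interpolation hole the note warns about.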
    521 522 523 ############# 524 # Properties 525 ############# 526
    527 - def _setName(self, value):
    528 """ 529 Property target used to set the peer name. 530 The value must be a non-empty string and cannot be C{None}. 531 @raise ValueError: If the value is an empty string or C{None}. 532 """ 533 if value is None or len(value) < 1: 534 raise ValueError("Peer name must be a non-empty string.") 535 self._name = value
    536
    537 - def _getName(self):
    538 """ 539 Property target used to get the peer name. 540 """ 541 return self._name
    542
    543 - def _setCollectDir(self, value):
    544 """ 545 Property target used to set the collect directory. 546 The value must be an absolute path and cannot be C{None}. 547 It does not have to exist on disk at the time of assignment. 548 @raise ValueError: If the value is C{None} or is not an absolute path. 549 @raise ValueError: If the value cannot be encoded properly. 550 """ 551 if value is not None: 552 if not os.path.isabs(value): 553 raise ValueError("Collect directory must be an absolute path.") 554 self._collectDir = encodePath(value)
    555
    556 - def _getCollectDir(self):
    557 """ 558 Property target used to get the collect directory. 559 """ 560 return self._collectDir
    561
    562 - def _setWorkingDir(self, value):
    563 """ 564 Property target used to set the working directory. 565 The value must be an absolute path and cannot be C{None}. 566 @raise ValueError: If the value is C{None} or is not an absolute path. 567 @raise ValueError: If the value cannot be encoded properly. 568 """ 569 if value is not None: 570 if not os.path.isabs(value): 571 raise ValueError("Working directory must be an absolute path.") 572 self._workingDir = encodePath(value)
    573
    574 - def _getWorkingDir(self):
    575 """ 576 Property target used to get the working directory. 577 """ 578 return self._workingDir
    579
    580 - def _setRemoteUser(self, value):
    581 """ 582 Property target used to set the remote user. 583 The value must be a non-empty string and cannot be C{None}. 584 @raise ValueError: If the value is an empty string or C{None}. 585 """ 586 if value is None or len(value) < 1: 587 raise ValueError("Peer remote user must be a non-empty string.") 588 self._remoteUser = value
    589
    590 - def _getRemoteUser(self):
    591 """ 592 Property target used to get the remote user. 593 """ 594 return self._remoteUser
    595
    596 - def _setLocalUser(self, value):
    597 """ 598 Property target used to set the local user. 599 The value must be a non-empty string if it is not C{None}. 600 @raise ValueError: If the value is an empty string. 601 """ 602 if value is not None: 603 if len(value) < 1: 604 raise ValueError("Peer local user must be a non-empty string.") 605 self._localUser = value
    606
    607 - def _getLocalUser(self):
    608 """ 609 Property target used to get the local user. 610 """ 611 return self._localUser
    612
    613 - def _setRcpCommand(self, value):
    614 """ 615 Property target to set the rcp command. 616 617 The value must be a non-empty string or C{None}. Its value is stored in 618 the two forms: "raw" as provided by the client, and "parsed" into a list 619 suitable for being passed to L{util.executeCommand} via 620 L{util.splitCommandLine}. 621 622 However, all the caller will ever see via the property is the actual 623 value they set (which includes seeing C{None}, even if we translate that 624 internally to C{DEF_RCP_COMMAND}). Internally, we should always use 625 C{self._rcpCommandList} if we want the actual command list. 626 627 @raise ValueError: If the value is an empty string. 628 """ 629 if value is None: 630 self._rcpCommand = None 631 self._rcpCommandList = DEF_RCP_COMMAND 632 else: 633 if len(value) >= 1: 634 self._rcpCommand = value 635 self._rcpCommandList = splitCommandLine(self._rcpCommand) 636 else: 637 raise ValueError("The rcp command must be a non-empty string.")
    638
    639 - def _getRcpCommand(self):
    640 """ 641 Property target used to get the rcp command. 642 """ 643 return self._rcpCommand
    644
    645 - def _setRshCommand(self, value):
    646 """ 647 Property target to set the rsh command. 648 649 The value must be a non-empty string or C{None}. Its value is stored in 650 the two forms: "raw" as provided by the client, and "parsed" into a list 651 suitable for being passed to L{util.executeCommand} via 652 L{util.splitCommandLine}. 653 654 However, all the caller will ever see via the property is the actual 655 value they set (which includes seeing C{None}, even if we translate that 656 internally to C{DEF_RSH_COMMAND}). Internally, we should always use 657 C{self._rshCommandList} if we want the actual command list. 658 659 @raise ValueError: If the value is an empty string. 660 """ 661 if value is None: 662 self._rshCommand = None 663 self._rshCommandList = DEF_RSH_COMMAND 664 else: 665 if len(value) >= 1: 666 self._rshCommand = value 667 self._rshCommandList = splitCommandLine(self._rshCommand) 668 else: 669 raise ValueError("The rsh command must be a non-empty string.")
    670
    671 - def _getRshCommand(self):
    672 """ 673 Property target used to get the rsh command. 674 """ 675 return self._rshCommand
    676
    677 - def _setCbackCommand(self, value):
    678 """ 679 Property target to set the cback command. 680 681 The value must be a non-empty string or C{None}. Unlike the other 682 command, this value is only stored in the "raw" form provided by the 683 client. 684 685 @raise ValueError: If the value is an empty string. 686 """ 687 if value is None: 688 self._cbackCommand = None 689 else: 690 if len(value) >= 1: 691 self._cbackCommand = value 692 else: 693 raise ValueError("The cback command must be a non-empty string.")
    694
    695 - def _getCbackCommand(self):
    696 """ 697 Property target used to get the cback command. 698 """ 699 return self._cbackCommand
    700
    701 - def _setIgnoreFailureMode(self, value):
    702 """ 703 Property target used to set the ignoreFailure mode. 704 If not C{None}, the mode must be one of the values in L{VALID_FAILURE_MODES}. 705 @raise ValueError: If the value is not valid. 706 """ 707 if value is not None: 708 if value not in VALID_FAILURE_MODES: 709 raise ValueError("Ignore failure mode must be one of %s." % VALID_FAILURE_MODES) 710 self._ignoreFailureMode = value
    711
    712 - def _getIgnoreFailureMode(self):
    713 """ 714 Property target used to get the ignoreFailure mode. 715 """ 716 return self._ignoreFailureMode
    717 718 name = property(_getName, _setName, None, "Name of the peer (a valid DNS hostname).") 719 collectDir = property(_getCollectDir, _setCollectDir, None, "Path to the peer's collect directory (an absolute path on the remote peer).") 720 workingDir = property(_getWorkingDir, _setWorkingDir, None, "Path to the peer's working directory (an absolute local path).") 721 remoteUser = property(_getRemoteUser, _setRemoteUser, None, "Name of the Cedar Backup user on the remote peer.") 722 localUser = property(_getLocalUser, _setLocalUser, None, "Name of the Cedar Backup user on the current host.") 723 rcpCommand = property(_getRcpCommand, _setRcpCommand, None, "An rcp-compatible copy command to use for copying files.") 724 rshCommand = property(_getRshCommand, _setRshCommand, None, "An rsh-compatible command to use for remote shells to the peer.") 725 cbackCommand = property(_getCbackCommand, _setCbackCommand, None, "A cback-compatible command to use for executing managed actions.") 726 ignoreFailureMode = property(_getIgnoreFailureMode, _setIgnoreFailureMode, None, "Ignore failure mode for peer.") 727 728 729 ################# 730 # Public methods 731 ################# 732
    733 - def stagePeer(self, targetDir, ownership=None, permissions=None):
    734 """ 735 Stages data from the peer into the indicated local target directory. 736 737 The target directory must already exist before this method is called. If 738 passed in, ownership and permissions will be applied to the files that 739 are copied. 740 741 @note: The returned count of copied files might be inaccurate if some of 742 the copied files already existed in the staging directory prior to the 743 copy taking place. We don't clear the staging directory first, because 744 some extension might also be using it. 745 746 @note: If you have user/group as strings, call the L{util.getUidGid} function 747 to get the associated uid/gid as an ownership tuple. 748 749 @note: Unlike the local peer version of this method, an I/O error might 750 or might not be raised if the directory is empty. Since we're using a 751 remote copy method, we just don't have the fine-grained control over our 752 exceptions that's available when we can look directly at the filesystem, 753 and we can't control whether the remote copy method thinks an empty 754 directory is an error. 755 756 @param targetDir: Target directory to write data into 757 @type targetDir: String representing a directory on disk 758 759 @param ownership: Owner and group that the staged files should have 760 @type ownership: Tuple of numeric ids C{(uid, gid)} 761 762 @param permissions: Permissions that the staged files should have 763 @type permissions: UNIX permissions mode, specified in octal (i.e. C{0640}). 764 765 @return: Number of files copied from the source directory to the target directory. 766 767 @raise ValueError: If target directory is not a directory, does not exist or is not absolute. 768 @raise ValueError: If a path cannot be encoded properly. 769 @raise IOError: If there were no files to stage (i.e. the directory was empty) 770 @raise IOError: If there is an IO error copying a file. 
 771 @raise OSError: If there is an OS error copying or changing permissions on a file 772 """ 773 targetDir = encodePath(targetDir) 774 if not os.path.isabs(targetDir): 775 logger.debug("Target directory [%s] not an absolute path.", targetDir) 776 raise ValueError("Target directory must be an absolute path.") 777 if not os.path.exists(targetDir) or not os.path.isdir(targetDir): 778 logger.debug("Target directory [%s] is not a directory or does not exist on disk.", targetDir) 779 raise ValueError("Target directory is not a directory or does not exist on disk.") 780 count = RemotePeer._copyRemoteDir(self.remoteUser, self.localUser, self.name, 781 self._rcpCommand, self._rcpCommandList, 782 self.collectDir, targetDir, 783 ownership, permissions) 784 if count == 0: 785 raise IOError("Did not copy any files from remote peer.") 786 return count
    787
    788 - def checkCollectIndicator(self, collectIndicator=None):
    789 """ 790 Checks the collect indicator in the peer's staging directory. 791 792 When a peer has completed collecting its backup files, it will write an 793 empty indicator file into its collect directory. This method checks to 794 see whether that indicator has been written. If the remote copy command 795 fails, we return C{False} as if the file weren't there. 796 797 If you need to, you can override the name of the collect indicator file 798 by passing in a different name. 799 800 @note: Apparently, we can't count on all rcp-compatible implementations 801 to return sensible errors for some error conditions. As an example, the 802 C{scp} command in Debian 'woody' returns a zero (normal) status even when 803 it can't find a host or if the login or path is invalid. Because of 804 this, the implementation of this method is rather convoluted. 805 806 @param collectIndicator: Name of the collect indicator file to check 807 @type collectIndicator: String representing name of a file in the collect directory 808 809 @return: Boolean true/false depending on whether the indicator exists. 810 @raise ValueError: If a path cannot be encoded properly. 811 """ 812 try: 813 if collectIndicator is None: 814 sourceFile = os.path.join(self.collectDir, DEF_COLLECT_INDICATOR) 815 targetFile = os.path.join(self.workingDir, DEF_COLLECT_INDICATOR) 816 else: 817 collectIndicator = encodePath(collectIndicator) 818 sourceFile = os.path.join(self.collectDir, collectIndicator) 819 targetFile = os.path.join(self.workingDir, collectIndicator) 820 logger.debug("Fetch remote [%s] into [%s].", sourceFile, targetFile) 821 if os.path.exists(targetFile): 822 try: 823 os.remove(targetFile) 824 except: 825 raise Exception("Error: stale collect indicator [%s] could not be removed!"
 % targetFile) 826 try: 827 RemotePeer._copyRemoteFile(self.remoteUser, self.localUser, self.name, 828 self._rcpCommand, self._rcpCommandList, 829 sourceFile, targetFile, 830 overwrite=False) 831 if os.path.exists(targetFile): 832 return True 833 else: 834 return False 835 except Exception, e: 836 logger.info("Failed looking for collect indicator: %s", e) 837 return False 838 finally: 839 if os.path.exists(targetFile): 840 try: 841 os.remove(targetFile) 842 except: pass
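Because the rcp command's exit status can't be trusted, the method above decides whether the remote indicator exists by checking whether the fetch actually produced a local file, and always cleans that file up afterwards. A simplified sketch of that pattern follows; the `fetch` callable is a stand-in for the remote copy invocation, not part of the real API.

```python
import os

def indicator_fetched(fetch, target_file):
    """Infer remote-indicator existence from the side effect of a fetch.

    'fetch' is any callable that tries to copy the remote indicator down
    to target_file; its return value is deliberately not trusted, since
    some rcp implementations report success even on failure.
    """
    try:
        try:
            fetch()
        except Exception:
            return False            # copy command failed outright
        return os.path.exists(target_file)
    finally:
        # Always remove the fetched copy so the next check starts clean.
        if os.path.exists(target_file):
            try:
                os.remove(target_file)
            except OSError:
                pass
```

The try/finally cleanup mirrors the real method: the local copy of the indicator is only a probe, never a durable artifact.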
    843
    844 - def writeStageIndicator(self, stageIndicator=None):
    845 """ 846 Writes the stage indicator in the peer's staging directory. 847 848 When the master has completed collecting its backup files, it will write 849 an empty indicator file into the peer's collect directory. The presence 850 of this file implies that the staging process is complete. 851 852 If you need to, you can override the name of the stage indicator file by 853 passing in a different name. 854 855 @note: If you have user/group as strings, call the L{util.getUidGid} function 856 to get the associated uid/gid as an ownership tuple. 857 858 @param stageIndicator: Name of the indicator file to write 859 @type stageIndicator: String representing name of a file in the collect directory 860 861 @raise ValueError: If a path cannot be encoded properly. 862 @raise IOError: If there is an IO error creating the file. 863 @raise OSError: If there is an OS error creating or changing permissions on the file 864 """ 865 stageIndicator = encodePath(stageIndicator) 866 if stageIndicator is None: 867 sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) 868 targetFile = os.path.join(self.collectDir, DEF_STAGE_INDICATOR) 869 else: 870 sourceFile = os.path.join(self.workingDir, DEF_STAGE_INDICATOR) 871 targetFile = os.path.join(self.collectDir, stageIndicator) 872 try: 873 if not os.path.exists(sourceFile): 874 open(sourceFile, "w").write("") 875 RemotePeer._pushLocalFile(self.remoteUser, self.localUser, self.name, 876 self._rcpCommand, self._rcpCommandList, 877 sourceFile, targetFile) 878 finally: 879 if os.path.exists(sourceFile): 880 try: 881 os.remove(sourceFile) 882 except: pass
    883
    884 - def executeRemoteCommand(self, command):
    885 """ 886 Executes a command on the peer via remote shell. 887 888 @param command: Command to execute 889 @type command: String command-line suitable for use with rsh. 890 891 @raise IOError: If there is an error executing the command on the remote peer. 892 """ 893 RemotePeer._executeRemoteCommand(self.remoteUser, self.localUser, 894 self.name, self._rshCommand, 895 self._rshCommandList, command)
    896
    897 - def executeManagedAction(self, action, fullBackup):
    898 """ 899 Executes a managed action on this peer. 900 901 @param action: Name of the action to execute. 902 @param fullBackup: Whether a full backup should be executed. 903 904 @raise IOError: If there is an error executing the action on the remote peer. 905 """ 906 try: 907 command = RemotePeer._buildCbackCommand(self.cbackCommand, action, fullBackup) 908 self.executeRemoteCommand(command) 909 except IOError, e: 910 logger.info(e) 911 raise IOError("Failed to execute action [%s] on managed client [%s]." % (action, self.name))
    912 913 914 ################## 915 # Private methods 916 ################## 917 918 @staticmethod
    919 - def _getDirContents(path):
    920 """ 921 Returns the contents of a directory in terms of a Set. 922 923 The directory's contents are read as a L{FilesystemList} containing only 924 files, and then the list is converted into a set object for later use. 925 926 @param path: Directory path to get contents for 927 @type path: String representing a path on disk 928 929 @return: Set of files in the directory 930 @raise ValueError: If path is not a directory or does not exist. 931 """ 932 contents = FilesystemList() 933 contents.excludeDirs = True 934 contents.excludeLinks = True 935 contents.addDirContents(path) 936 try: 937 return set(contents) 938 except: 939 import sets 940 return sets.Set(contents)
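The _copyRemoteDir method (below) uses this set to count copied files: it snapshots the target directory before and after running the rcp command and takes the set difference. The same arithmetic is shown here in simplified form, using plain os.listdir instead of FilesystemList.

```python
import os

def file_set(path):
    """Snapshot the regular files in a directory as a set of names."""
    return set(n for n in os.listdir(path)
               if os.path.isfile(os.path.join(path, n)))

def copied_names(target_dir, run_copy):
    """Run a copy action and report which file names it added to target_dir."""
    before = file_set(target_dir)
    run_copy()                    # e.g. execute the rcp command here
    return file_set(target_dir) - before
```

As the docstrings note, this is only approximate: files added concurrently by another process would be counted too, but it avoids parsing rcp output or touching pre-existing files.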
    941  
    942     @staticmethod
    943     def _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList,
    944                        sourceDir, targetDir, ownership=None, permissions=None):
    945        """
    946        Copies files from the source directory to the target directory.
    947  
    948        This function is not recursive.  Only the files in the directory will be
    949        copied.  Ownership and permissions will be left at their default values
    950        if new values are not specified.  Behavior when copying soft links from
    951        the collect directory is dependent on the behavior of the specified rcp
    952        command.
    953  
    954        @note: The returned count of copied files might be inaccurate if some of
    955        the copied files already existed in the staging directory prior to the
    956        copy taking place.  We don't clear the staging directory first, because
    957        some extension might also be using it.
    958  
    959        @note: If you have user/group as strings, call the L{util.getUidGid} function
    960        to get the associated uid/gid as an ownership tuple.
    961  
    962        @note: We don't have a good way of knowing exactly what files we copied
    963        down from the remote peer, unless we want to parse the output of the rcp
    964        command (ugh).  We could change permissions on everything in the target
    965        directory, but that's kind of ugly too.  Instead, we use Python's set
    966        functionality to figure out what files were added while we executed the
    967        rcp command.  This isn't perfect - for instance, it's not correct if
    968        someone else is messing with the directory at the same time we're doing
    969        the remote copy - but it's about as good as we're going to get.
    970  
    971        @note: Apparently, we can't count on all rcp-compatible implementations
    972        to return sensible errors for some error conditions.  As an example, the
    973        C{scp} command in Debian 'woody' returns a zero (normal) status even
    974        when it can't find a host or if the login or path is invalid.  We try
    975        to work around this by raising C{IOError} if we don't copy any files from
    976        the remote host.
    977  
    978        @param remoteUser: Name of the Cedar Backup user on the remote peer
    979        @type remoteUser: String representing a username, valid via the copy command
    980  
    981        @param localUser: Name of the Cedar Backup user on the current host
    982        @type localUser: String representing a username, valid on the current host
    983  
    984        @param remoteHost: Hostname of the remote peer
    985        @type remoteHost: String representing a hostname, accessible via the copy command
    986  
    987        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
    988        @type rcpCommand: String representing a system command including required arguments
    989  
    990        @param rcpCommandList: An rcp-compatible copy command to use for copying files
    991        @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}
    992  
    993        @param sourceDir: Source directory
    994        @type sourceDir: String representing a directory on disk
    995  
    996        @param targetDir: Target directory
    997        @type targetDir: String representing a directory on disk
    998  
    999        @param ownership: Owner and group that the copied files should have
   1000        @type ownership: Tuple of numeric ids C{(uid, gid)}
   1001  
   1002        @param permissions: Permissions that the staged files should have
   1003        @type permissions: UNIX permissions mode, specified in octal (e.g. C{0640}).
   1004  
   1005        @return: Number of files copied from the source directory to the target directory.
   1006  
   1007        @raise ValueError: If source or target is not a directory or does not exist.
   1008        @raise IOError: If there is an IO error copying the files.
   1009        """
   1010        beforeSet = RemotePeer._getDirContents(targetDir)
   1011        if localUser is not None:
   1012           try:
   1013              if not isRunningAsRoot():
   1014                 raise IOError("Only root can remote copy as another user.")
   1015           except AttributeError: pass
   1016           actualCommand = "%s %s@%s:%s/* %s" % (rcpCommand, remoteUser, remoteHost, sourceDir, targetDir)
   1017           command = resolveCommand(SU_COMMAND)
   1018           result = executeCommand(command, [localUser, "-c", actualCommand])[0]
   1019           if result != 0:
   1020              raise IOError("Error (%d) copying files from remote host as local user [%s]." % (result, localUser))
   1021        else:
   1022           copySource = "%s@%s:%s/*" % (remoteUser, remoteHost, sourceDir)
   1023           command = resolveCommand(rcpCommandList)
   1024           result = executeCommand(command, [copySource, targetDir])[0]
   1025           if result != 0:
   1026              raise IOError("Error (%d) copying files from remote host." % result)
   1027        afterSet = RemotePeer._getDirContents(targetDir)
   1028        if len(afterSet) == 0:
   1029           raise IOError("Did not copy any files from remote peer.")
   1030        differenceSet = afterSet.difference(beforeSet)  # files we added as part of copy
   1031        if len(differenceSet) == 0:
   1032           raise IOError("Apparently did not copy any new files from remote peer.")
   1033        for targetFile in differenceSet:
   1034           if ownership is not None:
   1035              os.chown(targetFile, ownership[0], ownership[1])
   1036           if permissions is not None:
   1037              os.chmod(targetFile, permissions)
   1038        return len(differenceSet)
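The before/after set arithmetic described in the notes can be seen in miniature below. The file names are made up, and the real method builds both sets with _getDirContents around the rcp invocation:

```python
# Snapshot the target directory before and after the copy command runs,
# and treat the set difference as the files the copy created.
beforeSet = {"/stage/old.tar"}
afterSet = {"/stage/old.tar", "/stage/new1.tar", "/stage/new2.tar"}

differenceSet = afterSet.difference(beforeSet)   # files added by the copy

print(sorted(differenceSet))
print(len(differenceSet))                        # the count the method returns
```

As the docstring concedes, this is only an approximation: a concurrent writer to the target directory would be counted as copied files too.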
   1039  
   1040     @staticmethod
   1041     def _copyRemoteFile(remoteUser, localUser, remoteHost,
   1042                         rcpCommand, rcpCommandList,
   1043                         sourceFile, targetFile, ownership=None,
   1044                         permissions=None, overwrite=True):
   1045        """
   1046        Copies a remote source file to a target file.
   1047  
   1048        @note: Internally, we have to go through and escape any spaces in the
   1049        source path with double-backslash, otherwise things get screwed up.  It
   1050        doesn't seem to be required in the target path.  I hope this is portable
   1051        to various different rcp methods, but I guess it might not be (all I have
   1052        to test with is OpenSSH).
   1053  
   1054        @note: If you have user/group as strings, call the L{util.getUidGid} function
   1055        to get the associated uid/gid as an ownership tuple.
   1056  
   1057        @note: Unless the C{overwrite} flag is set, we will not overwrite a target
   1058        file that exists when this method is invoked.  If the target already
   1059        exists, we'll raise an exception.
   1060  
   1061        @note: Apparently, we can't count on all rcp-compatible implementations
   1062        to return sensible errors for some error conditions.  As an example, the
   1063        C{scp} command in Debian 'woody' returns a zero (normal) status even when
   1064        it can't find a host or if the login or path is invalid.  We try to work
   1065        around this by raising C{IOError} if the target file does not exist when
   1066        we're done.
   1067  
   1068        @param remoteUser: Name of the Cedar Backup user on the remote peer
   1069        @type remoteUser: String representing a username, valid via the copy command
   1070  
   1071        @param localUser: Name of the Cedar Backup user on the current host
   1072        @type localUser: String representing a username, valid on the current host
   1073  
   1074        @param remoteHost: Hostname of the remote peer
   1075        @type remoteHost: String representing a hostname, accessible via the copy command
   1076  
   1077        @param rcpCommand: An rcp-compatible copy command to use for copying files from the peer
   1078        @type rcpCommand: String representing a system command including required arguments
   1079  
   1080        @param rcpCommandList: An rcp-compatible copy command to use for copying files
   1081        @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}
   1082  
   1083        @param sourceFile: Source file to copy
   1084        @type sourceFile: String representing a file on disk, as an absolute path
   1085  
   1086        @param targetFile: Target file to create
   1087        @type targetFile: String representing a file on disk, as an absolute path
   1088  
   1089        @param ownership: Owner and group that the copied file should have
   1090        @type ownership: Tuple of numeric ids C{(uid, gid)}
   1091  
   1092        @param permissions: Permissions that the staged files should have
   1093        @type permissions: UNIX permissions mode, specified in octal (e.g. C{0640}).
   1094  
   1095        @param overwrite: Indicates whether it's OK to overwrite the target file.
   1096        @type overwrite: Boolean true/false.
   1097  
   1098        @raise IOError: If the target file already exists.
   1099        @raise IOError: If there is an IO error copying the file
   1100        @raise OSError: If there is an OS error changing permissions on the file
   1101        """
   1102        if not overwrite:
   1103           if os.path.exists(targetFile):
   1104              raise IOError("Target file [%s] already exists." % targetFile)
   1105        if localUser is not None:
   1106           try:
   1107              if not isRunningAsRoot():
   1108                 raise IOError("Only root can remote copy as another user.")
   1109           except AttributeError: pass
   1110           actualCommand = "%s %s@%s:%s %s" % (rcpCommand, remoteUser, remoteHost, sourceFile.replace(" ", "\\ "), targetFile)
   1111           command = resolveCommand(SU_COMMAND)
   1112           result = executeCommand(command, [localUser, "-c", actualCommand])[0]
   1113           if result != 0:
   1114              raise IOError("Error (%d) copying [%s] from remote host as local user [%s]." % (result, sourceFile, localUser))
   1115        else:
   1116           copySource = "%s@%s:%s" % (remoteUser, remoteHost, sourceFile.replace(" ", "\\ "))
   1117           command = resolveCommand(rcpCommandList)
   1118           result = executeCommand(command, [copySource, targetFile])[0]
   1119           if result != 0:
   1120              raise IOError("Error (%d) copying [%s] from remote host." % (result, sourceFile))
   1121        if not os.path.exists(targetFile):
   1122           raise IOError("Apparently unable to copy file from remote host.")
   1123        if ownership is not None:
   1124           os.chown(targetFile, ownership[0], ownership[1])
   1125        if permissions is not None:
   1126           os.chmod(targetFile, permissions)
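The space-escaping idiom from the first note is worth seeing on its own. This sketch uses illustrative user, host, and path names:

```python
# Escape spaces in a remote scp-style source path with a backslash so the
# remote shell does not split the path into separate words.
def escapeSpaces(path):
    return path.replace(" ", "\\ ")

copySource = "%s@%s:%s" % ("backup", "peer.example.com",
                           escapeSpaces("/collect/some file.tar"))
print(copySource)
```

The target path on the local side is passed to the copy command as a single argument, which is why (per the note) it does not need the same treatment.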
   1126  
   1127     @staticmethod
   1128     def _pushLocalFile(remoteUser, localUser, remoteHost,
   1129                        rcpCommand, rcpCommandList,
   1130                        sourceFile, targetFile, overwrite=True):
   1131        """
   1132        Copies a local source file to a remote host.
   1133  
   1134        @note: Unless the C{overwrite} flag is set, we will not overwrite a target
   1135        file that exists when this method is invoked.  If the target already
   1136        exists, we'll raise an exception.
   1137  
   1138        @note: Internally, we have to go through and escape any spaces in the
   1139        source and target paths with double-backslash, otherwise things get
   1140        screwed up.  I hope this is portable to various different rcp methods,
   1141        but I guess it might not be (all I have to test with is OpenSSH).
   1142  
   1143        @note: If you have user/group as strings, call the L{util.getUidGid} function
   1144        to get the associated uid/gid as an ownership tuple.
   1145  
   1146        @param remoteUser: Name of the Cedar Backup user on the remote peer
   1147        @type remoteUser: String representing a username, valid via the copy command
   1148  
   1149        @param localUser: Name of the Cedar Backup user on the current host
   1150        @type localUser: String representing a username, valid on the current host
   1151  
   1152        @param remoteHost: Hostname of the remote peer
   1153        @type remoteHost: String representing a hostname, accessible via the copy command
   1154  
   1155        @param rcpCommand: An rcp-compatible copy command to use for copying files to the peer
   1156        @type rcpCommand: String representing a system command including required arguments
   1157  
   1158        @param rcpCommandList: An rcp-compatible copy command to use for copying files
   1159        @type rcpCommandList: Command as a list to be passed to L{util.executeCommand}
   1160  
   1161        @param sourceFile: Source file to copy
   1162        @type sourceFile: String representing a file on disk, as an absolute path
   1163  
   1164        @param targetFile: Target file to create
   1165        @type targetFile: String representing a file on disk, as an absolute path
   1166  
   1167        @param overwrite: Indicates whether it's OK to overwrite the target file.
   1168        @type overwrite: Boolean true/false.
   1169  
   1170        @raise IOError: If there is an IO error copying the file
   1171        @raise OSError: If there is an OS error changing permissions on the file
   1172        """
   1173        if not overwrite:
   1174           if os.path.exists(targetFile):
   1175              raise IOError("Target file [%s] already exists." % targetFile)
   1176        if localUser is not None:
   1177           try:
   1178              if not isRunningAsRoot():
   1179                 raise IOError("Only root can remote copy as another user.")
   1180           except AttributeError: pass
   1181           actualCommand = '%s "%s" "%s@%s:%s"' % (rcpCommand, sourceFile, remoteUser, remoteHost, targetFile)
   1182           command = resolveCommand(SU_COMMAND)
   1183           result = executeCommand(command, [localUser, "-c", actualCommand])[0]
   1184           if result != 0:
   1185              raise IOError("Error (%d) copying [%s] to remote host as local user [%s]." % (result, sourceFile, localUser))
   1186        else:
   1187           copyTarget = "%s@%s:%s" % (remoteUser, remoteHost, targetFile.replace(" ", "\\ "))
   1188           command = resolveCommand(rcpCommandList)
   1189           result = executeCommand(command, [sourceFile.replace(" ", "\\ "), copyTarget])[0]
   1190           if result != 0:
   1191              raise IOError("Error (%d) copying [%s] to remote host." % (result, sourceFile))
   1191  
   1192     @staticmethod
   1193     def _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand):
   1194        """
   1195        Executes a command on the peer via remote shell.
   1196  
   1197        @param remoteUser: Name of the Cedar Backup user on the remote peer
   1198        @type remoteUser: String representing a username, valid on the remote host
   1199  
   1200        @param localUser: Name of the Cedar Backup user on the current host
   1201        @type localUser: String representing a username, valid on the current host
   1202  
   1203        @param remoteHost: Hostname of the remote peer
   1204        @type remoteHost: String representing a hostname, accessible via the remote shell command
   1205  
   1206        @param rshCommand: An rsh-compatible command to use for remote shells to the peer
   1207        @type rshCommand: String representing a system command including required arguments
   1208  
   1209        @param rshCommandList: An rsh-compatible command to use for remote shells to the peer
   1210        @type rshCommandList: Command as a list to be passed to L{util.executeCommand}
   1211  
   1212        @param remoteCommand: The command to be executed on the remote host
   1213        @type remoteCommand: String command-line, with no special shell characters ($, <, etc.)
   1214  
   1215        @raise IOError: If there is an error executing the remote command
   1216        """
   1217        actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand)
   1218        if localUser is not None:
   1219           try:
   1220              if not isRunningAsRoot():
   1221                 raise IOError("Only root can remote shell as another user.")
   1222           except AttributeError: pass
   1223           command = resolveCommand(SU_COMMAND)
   1224           result = executeCommand(command, [localUser, "-c", actualCommand])[0]
   1225           if result != 0:
   1226              raise IOError("Command failed [su -c %s \"%s\"]" % (localUser, actualCommand))
   1227        else:
   1228           command = resolveCommand(rshCommandList)
   1229           result = executeCommand(command, ["%s@%s" % (remoteUser, remoteHost), "%s" % remoteCommand])[0]
   1230           if result != 0:
   1231              raise IOError("Command failed [%s]" % (actualCommand))
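To make the two execution paths concrete, here is a sketch of how the command strings above are assembled when a local user is specified; every name in it is illustrative:

```python
# Assemble the remote shell command line the way the method above does.
rshCommand = "/usr/bin/ssh"
remoteUser, remoteHost = "backup", "peer.example.com"
remoteCommand = "cback --full collect"

# The whole rsh invocation becomes one string...
actualCommand = "%s %s@%s '%s'" % (rshCommand, remoteUser, remoteHost, remoteCommand)

# ...which is then handed to su as its "-c" argument, so the remote shell
# runs as the configured local user ("backuplocal" here is made up).
suArgs = ["backuplocal", "-c", actualCommand]

print(actualCommand)
```

Without a local user, the resolved rsh command list is executed directly with "user@host" and the remote command as its two arguments.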
   1232  
   1233     @staticmethod
   1234     def _buildCbackCommand(cbackCommand, action, fullBackup):
   1235        """
   1236        Builds a Cedar Backup command line for the named action.
   1237  
   1238        @note: If the cback command is None, then DEF_CBACK_COMMAND is used.
   1239  
   1240        @param cbackCommand: cback command to execute, including required options
   1241        @param action: Name of the action to execute.
   1242        @param fullBackup: Whether a full backup should be executed.
   1243  
   1244        @return: String suitable for passing to L{_executeRemoteCommand} as remoteCommand.
   1245        @raise ValueError: If action is None.
   1246        """
   1247        if action is None:
   1248           raise ValueError("Action cannot be None.")
   1249        if cbackCommand is None:
   1250           cbackCommand = DEF_CBACK_COMMAND
   1251        if fullBackup:
   1252           return "%s --full %s" % (cbackCommand, action)
   1253        else:
   1254           return "%s %s" % (cbackCommand, action)
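Since _buildCbackCommand is pure string assembly, it can be restated standalone to show its outputs. The default command value below is an assumption for illustration, not a quote of the real DEF_CBACK_COMMAND:

```python
DEF_CBACK_COMMAND = "cback"   # illustrative default; the real value is in the peer module

def buildCbackCommand(cbackCommand, action, fullBackup):
    # Fall back to the default command, and prepend --full when a full
    # backup was requested, mirroring the method above.
    if action is None:
        raise ValueError("Action cannot be None.")
    if cbackCommand is None:
        cbackCommand = DEF_CBACK_COMMAND
    if fullBackup:
        return "%s --full %s" % (cbackCommand, action)
    return "%s %s" % (cbackCommand, action)

print(buildCbackCommand(None, "collect", True))
print(buildCbackCommand("/usr/bin/cback", "store", False))
```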
    1255

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.split.LocalConfig-class.html
    Package CedarBackup2 :: Package extend :: Module split :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit split-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <split> configuration section as the next child of a parent.
    source code
     
    _setSplit(self, value)
    Property target used to set the split configuration value.
    source code
     
    _getSplit(self)
    Property target used to get the split configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseSplit(parent)
Parses a split configuration section.
    source code
Properties
      split
    Split configuration in terms of a SplitConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    Split configuration must be filled in. Within that, both the size limit and split size must be filled in.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

Adds a &lt;split&gt; configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      sizeLimit      //cb_config/split/size_limit
      splitSize      //cb_config/split/split_size
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
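As a sketch of what the two fields look like in a generated document, the standard-library snippet below builds a minimal DOM with the same element names; the quantity values are illustrative:

```python
from xml.dom.minidom import getDOMImplementation

# Build a tiny document demonstrating the <split> section layout described
# above: size_limit and split_size under cb_config/split.
impl = getDOMImplementation()
doc = impl.createDocument(None, "cb_config", None)
split = doc.createElement("split")
doc.documentElement.appendChild(split)
for name, value in [("size_limit", "2.5 GB"), ("split_size", "650 MB")]:
    node = doc.createElement(name)
    node.appendChild(doc.createTextNode(value))
    split.appendChild(node)

xml = doc.documentElement.toxml()
print(xml)
```

The real addConfig works against the xmlDom and parentNode it is handed rather than creating a document of its own.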

    _setSplit(self, value)

    source code 

    Property target used to set the split configuration value. If not None, the value must be a SplitConfig object.

    Raises:
    • ValueError - If the value is not a SplitConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the split configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseSplit(parent)
    Static Method

    source code 

Parses a split configuration section.

    We read the following individual fields:

      sizeLimit      //cb_config/split/size_limit
      splitSize      //cb_config/split/split_size
    
    Parameters:
    • parent - Parent node to search beneath.
    Returns:
SplitConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

Property Details

    split

    Split configuration in terms of a SplitConfig object.

    Get Method:
    _getSplit(self) - Property target used to get the split configuration value.
    Set Method:
    _setSplit(self, value) - Property target used to set the split configuration value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers-module.html
    Package CedarBackup2 :: Package writers

    Package writers

    source code

    Cedar Backup writers.

This package consolidates all of the modules that implement "image writer" functionality, including utilities and specific writer implementations.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Submodules

Variables
      __package__ = None
CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.cdwriter.MediaDefinition-class.html
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class MediaDefinition

    Class MediaDefinition

    source code

    object --+
             |
            MediaDefinition
    

    Class encapsulating information about CD media definitions.

    The following media types are accepted:

    • MEDIA_CDR_74: 74-minute CD-R media (650 MB capacity)
    • MEDIA_CDRW_74: 74-minute CD-RW media (650 MB capacity)
    • MEDIA_CDR_80: 80-minute CD-R media (700 MB capacity)
    • MEDIA_CDRW_80: 80-minute CD-RW media (700 MB capacity)

    Note that all of the capacities associated with a media definition are in terms of ISO sectors (util.ISO_SECTOR_SIZE).
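Assuming the conventional ISO-9660 sector size of 2048 bytes (the real constant is util.ISO_SECTOR_SIZE; treat the value here as an assumption rather than a quote), the nominal capacities convert to sectors like this:

```python
ISO_SECTOR_SIZE = 2048   # bytes per ISO-9660 sector (assumed conventional value)

def mbToSectors(mb):
    # Convert a nominal media capacity in MB to ISO sectors, the unit
    # the media definitions above use.
    return (mb * 1024 * 1024) // ISO_SECTOR_SIZE

print(mbToSectors(650))   # 74-minute media
print(mbToSectors(700))   # 80-minute media
```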

Instance Methods
     
    __init__(self, mediaType)
    Creates a media definition for the indicated media type.
    source code
     
    _setValues(self, mediaType)
    Sets values based on media type.
    source code
     
    _getMediaType(self)
    Property target used to get the media type value.
    source code
     
    _getRewritable(self)
    Property target used to get the rewritable flag value.
    source code
     
    _getInitialLeadIn(self)
    Property target used to get the initial lead-in value.
    source code
     
    _getLeadIn(self)
    Property target used to get the lead-in value.
    source code
     
    _getCapacity(self)
    Property target used to get the capacity value.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties
      mediaType
    Configured media type.
      rewritable
    Boolean indicating whether the media is rewritable.
      initialLeadIn
    Initial lead-in required for first image written to media.
      leadIn
    Lead-in required on successive images written to media.
      capacity
    Total capacity of the media before any required lead-in.

    Inherited from object: __class__

Method Details

    __init__(self, mediaType)
    (Constructor)

    source code 

    Creates a media definition for the indicated media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.
    Overrides: object.__init__

    _setValues(self, mediaType)

    source code 

    Sets values based on media type.

    Parameters:
    • mediaType - Type of the media, as discussed above.
    Raises:
    • ValueError - If the media type is unknown or unsupported.

Property Details

    mediaType

    Configured media type.

    Get Method:
    _getMediaType(self) - Property target used to get the media type value.

    rewritable

    Boolean indicating whether the media is rewritable.

    Get Method:
    _getRewritable(self) - Property target used to get the rewritable flag value.

    initialLeadIn

    Initial lead-in required for first image written to media.

    Get Method:
    _getInitialLeadIn(self) - Property target used to get the initial lead-in value.

    leadIn

    Lead-in required on successive images written to media.

    Get Method:
    _getLeadIn(self) - Property target used to get the lead-in value.

    capacity

    Total capacity of the media before any required lead-in.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.store-pysrc.html
    Package CedarBackup2 :: Package actions :: Module store

    Source Code for Module CedarBackup2.actions.store

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Implements the standard 'store' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'store' action. 
     40  @sort: executeStore, writeImage, writeStoreIndicator, consistencyCheck 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  @author: Dmitry Rutsky <rutsky@inbox.ru> 
     43  """ 
     44   
     45   
     46  ######################################################################## 
     47  # Imported modules 
     48  ######################################################################## 
     49   
     50  # System modules 
     51  import sys 
     52  import os 
     53  import logging 
     54  import datetime 
     55  import tempfile 
     56   
     57  # Cedar Backup modules 
     58  from CedarBackup2.filesystem import compareContents 
     59  from CedarBackup2.util import isStartOfWeek 
     60  from CedarBackup2.util import mount, unmount, displayBytes 
     61  from CedarBackup2.actions.util import createWriter, checkMediaState, buildMediaLabel, writeIndicatorFile 
     62  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR, STORE_INDICATOR 
     63   
     64   
     65  ######################################################################## 
     66  # Module-wide constants and variables 
     67  ######################################################################## 
     68   
     69  logger = logging.getLogger("CedarBackup2.log.actions.store") 
     70   
     71   
     72  ######################################################################## 
     73  # Public functions 
     74  ######################################################################## 
     75   
     76  ########################## 
     77  # executeStore() function 
     78  ########################## 
     79   
    
     80  def executeStore(configPath, options, config):
     81     """
     82     Executes the store backup action.
     83  
     84     @note: The rebuild action and the store action are very similar.  The
     85     main difference is that while store only stores a single day's staging
     86     directory, the rebuild action operates on multiple staging directories.
     87  
     88     @note: When the store action is complete, we will write a store indicator to
     89     the daily staging directory we used, so it's obvious that the store action
     90     has completed.
     91  
     92     @param configPath: Path to configuration file on disk.
     93     @type configPath: String representing a path on disk.
     94  
     95     @param options: Program command-line options.
     96     @type options: Options object.
     97  
     98     @param config: Program configuration.
     99     @type config: Config object.
    100  
    101     @raise ValueError: Under many generic error conditions
    102     @raise IOError: If there are problems reading or writing files.
    103     """
    104     logger.debug("Executing the 'store' action.")
    105     if sys.platform == "darwin":
    106        logger.warn("Warning: the store action is not fully supported on Mac OS X.")
    107        logger.warn("See the Cedar Backup software manual for further information.")
    108     if config.options is None or config.store is None:
    109        raise ValueError("Store configuration is not properly filled in.")
    110     if config.store.checkMedia:
    111        checkMediaState(config.store)  # raises exception if media is not initialized
    112     rebuildMedia = options.full
    113     logger.debug("Rebuild media flag [%s]", rebuildMedia)
    114     todayIsStart = isStartOfWeek(config.options.startingDay)
    115     stagingDirs = _findCorrectDailyDir(options, config)
    116     writeImageBlankSafe(config, rebuildMedia, todayIsStart, config.store.blankBehavior, stagingDirs)
    117     if config.store.checkData:
    118        if sys.platform == "darwin":
    119           logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.")
    120           logger.warn("See the Cedar Backup software manual for further information.")
    121        else:
    122           logger.debug("Running consistency check of media.")
    123           consistencyCheck(config, stagingDirs)
    124     writeStoreIndicator(config, stagingDirs)
    125     logger.info("Executed the 'store' action successfully.")

########################
# writeImage() function
########################
def writeImage(config, newDisc, stagingDirs):
   """
   Builds and writes an ISO image containing the indicated stage directories.

   The generated image will contain each of the staging directories listed in
   C{stagingDirs}.  The directories will be placed into the image at the root by
   date, so staging directory C{/opt/stage/2005/02/10} will be placed into the
   disc at C{/2005/02/10}.

   @note: This function is implemented in terms of L{writeImageBlankSafe}.  The
   C{newDisc} flag is passed in for both C{rebuildMedia} and C{todayIsStart}.

   @param config: Config object.
   @param newDisc: Indicates whether the disc should be re-initialized
   @param stagingDirs: Dictionary mapping directory path to date suffix.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there is a problem writing the image to disc.
   """
   writeImageBlankSafe(config, newDisc, newDisc, None, stagingDirs)

#################################
# writeImageBlankSafe() function
#################################
def writeImageBlankSafe(config, rebuildMedia, todayIsStart, blankBehavior, stagingDirs):
   """
   Builds and writes an ISO image containing the indicated stage directories.

   The generated image will contain each of the staging directories listed in
   C{stagingDirs}.  The directories will be placed into the image at the root by
   date, so staging directory C{/opt/stage/2005/02/10} will be placed into the
   disc at C{/2005/02/10}.  The media will always be written with a media
   label specific to Cedar Backup.

   This function is similar to L{writeImage}, but tries to implement a smarter
   blanking strategy.

   First, the media is always blanked if the C{rebuildMedia} flag is true.
   Then, if C{rebuildMedia} is false, blanking behavior and C{todayIsStart}
   come into effect::

      If no blanking behavior is specified, and it is the start of the week,
      the disc will be blanked

      If blanking behavior is specified, and either the blank mode is "daily"
      or the blank mode is "weekly" and it is the start of the week, then
      the disc will be blanked if it looks like the weekly backup will not
      fit onto the media.

      Otherwise, the disc will not be blanked

   How do we decide whether the weekly backup will fit onto the media?  That is
   what the blanking factor is used for.  The following formula is used::

      will backup fit? = (bytes available / (1 + bytes required)) <= blankFactor

   The blanking factor will vary from setup to setup, and will probably
   require some experimentation to get it right.

   @param config: Config object.
   @param rebuildMedia: Indicates whether media should be rebuilt
   @param todayIsStart: Indicates whether today is the starting day of the week
   @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior
   @param stagingDirs: Dictionary mapping directory path to date suffix.

   @raise ValueError: Under many generic error conditions
   @raise IOError: If there is a problem writing the image to disc.
   """
   mediaLabel = buildMediaLabel()
   writer = createWriter(config)
   writer.initializeImage(True, config.options.workingDir, mediaLabel)  # default value for newDisc
   for stageDir in stagingDirs.keys():
      logger.debug("Adding stage directory [%s].", stageDir)
      dateSuffix = stagingDirs[stageDir]
      writer.addImageEntry(stageDir, dateSuffix)
   newDisc = _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior)
   writer.setImageNewDisc(newDisc)
   writer.writeImage()

def _getNewDisc(writer, rebuildMedia, todayIsStart, blankBehavior):
   """
   Gets a value for the newDisc flag based on blanking factor rules.

   The blanking factor rules are described above by L{writeImageBlankSafe}.

   @param writer: Previously configured image writer containing image entries
   @param rebuildMedia: Indicates whether media should be rebuilt
   @param todayIsStart: Indicates whether today is the starting day of the week
   @param blankBehavior: Blank behavior from configuration, or C{None} to use default behavior

   @return: newDisc flag to be set on writer.
   """
   newDisc = False
   if rebuildMedia:
      newDisc = True
      logger.debug("Setting new disc flag based on rebuildMedia flag.")
   else:
      if blankBehavior is None:
         logger.debug("Default media blanking behavior is in effect.")
         if todayIsStart:
            newDisc = True
            logger.debug("Setting new disc flag based on todayIsStart.")
      else:
         # note: validation says we can assume that behavior is fully filled in if it exists at all
         logger.debug("Optimized media blanking behavior is in effect based on configuration.")
         if blankBehavior.blankMode == "daily" or (blankBehavior.blankMode == "weekly" and todayIsStart):
            logger.debug("New disc flag will be set based on blank factor calculation.")
            blankFactor = float(blankBehavior.blankFactor)
            logger.debug("Configured blanking factor: %.2f", blankFactor)
            available = writer.retrieveCapacity().bytesAvailable
            logger.debug("Bytes available: %s", displayBytes(available))
            required = writer.getEstimatedImageSize()
            logger.debug("Bytes required: %s", displayBytes(required))
            ratio = available / (1.0 + required)
            logger.debug("Calculated ratio: %.2f", ratio)
            newDisc = (ratio <= blankFactor)
            logger.debug("%.2f <= %.2f ? %s", ratio, blankFactor, newDisc)
         else:
            logger.debug("No blank factor calculation is required based on configuration.")
   logger.debug("New disc flag [%s].", newDisc)
   return newDisc

#################################
# writeStoreIndicator() function
#################################
def writeStoreIndicator(config, stagingDirs):
   """
   Writes a store indicator file into staging directories.

   The store indicator is written into each of the staging directories when
   either a store or rebuild action has written the staging directory to disc.

   @param config: Config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.
   """
   for stagingDir in stagingDirs.keys():
      writeIndicatorFile(stagingDir, STORE_INDICATOR,
                         config.options.backupUser,
                         config.options.backupGroup)

##############################
# consistencyCheck() function
##############################
def consistencyCheck(config, stagingDirs):
   """
   Runs a consistency check against media in the backup device.

   It seems that sometimes, it's possible to create a corrupted multisession
   disc (i.e. one that cannot be read) although no errors were encountered
   while writing the disc.  This consistency check makes sure that the data
   read from disc matches the data that was used to create the disc.

   The function mounts the device at a temporary mount point in the working
   directory, and then compares the indicated staging directories in the
   staging directory and on the media.  The comparison is done via
   functionality in C{filesystem.py}.

   If no exceptions are thrown, there were no problems with the consistency
   check.  A positive confirmation of "no problems" is also written to the log
   with C{info} priority.

   @warning: The implementation of this function is very UNIX-specific.

   @param config: Config object.
   @param stagingDirs: Dictionary mapping directory path to date suffix.

   @raise ValueError: If the two directories are not equivalent.
   @raise IOError: If there is a problem working with the media.
   """
   logger.debug("Running consistency check.")
   mountPoint = tempfile.mkdtemp(dir=config.options.workingDir)
   try:
      mount(config.store.devicePath, mountPoint, "iso9660")
      for stagingDir in stagingDirs.keys():
         discDir = os.path.join(mountPoint, stagingDirs[stagingDir])
         logger.debug("Checking [%s] vs. [%s].", stagingDir, discDir)
         compareContents(stagingDir, discDir, verbose=True)
         logger.info("Consistency check completed for [%s].  No problems found.", stagingDir)
   finally:
      unmount(mountPoint, True, 5, 1)  # try 5 times, and remove mount point when done

########################################################################
# Private utility functions
########################################################################

#########################
# _findCorrectDailyDir()
#########################
def _findCorrectDailyDir(options, config):
   """
   Finds the correct daily staging directory to be written to disk.

   In Cedar Backup v1.0, we assumed that the correct staging directory matched
   the current date.  However, that has problems.  In particular, it breaks
   down if collect is on one side of midnite and stage is on the other, or if
   certain processes span midnite.

   For v2.0, I'm trying to be smarter.  I'll first check the current day.  If
   that directory is found, it's good enough.  If it's not found, I'll look for
   a valid directory from the day before or day after I{which has not yet been
   staged, according to the stage indicator file}.  The first one I find, I'll
   use.  If I use a directory other than for the current day I{and}
   C{config.store.warnMidnite} is set, a warning will be put in the log.

   There is one exception to this rule.  If the C{options.full} flag is set,
   then the special "span midnite" logic will be disabled and any existing
   store indicator will be ignored.  I did this because I think that most users
   who run C{cback --full store} twice in a row expect the command to generate
   two identical discs.  With the other rule in place, running that command
   twice in a row could result in an error ("no unstored directory exists") or
   could even cause a completely unexpected directory to be written to disc (if
   some previous day's contents had not yet been written).

   @note: This code is probably longer and more verbose than it needs to be,
   but at least it's straightforward.

   @param options: Options object.
   @param config: Config object.

   @return: Correct staging dir, as a dict mapping directory to date suffix.
   @raise IOError: If the staging directory cannot be found.
   """
   oneDay = datetime.timedelta(days=1)
   today = datetime.date.today()
   yesterday = today - oneDay
   tomorrow = today + oneDay
   todayDate = today.strftime(DIR_TIME_FORMAT)
   yesterdayDate = yesterday.strftime(DIR_TIME_FORMAT)
   tomorrowDate = tomorrow.strftime(DIR_TIME_FORMAT)
   todayPath = os.path.join(config.stage.targetDir, todayDate)
   yesterdayPath = os.path.join(config.stage.targetDir, yesterdayDate)
   tomorrowPath = os.path.join(config.stage.targetDir, tomorrowDate)
   todayStageInd = os.path.join(todayPath, STAGE_INDICATOR)
   yesterdayStageInd = os.path.join(yesterdayPath, STAGE_INDICATOR)
   tomorrowStageInd = os.path.join(tomorrowPath, STAGE_INDICATOR)
   todayStoreInd = os.path.join(todayPath, STORE_INDICATOR)
   yesterdayStoreInd = os.path.join(yesterdayPath, STORE_INDICATOR)
   tomorrowStoreInd = os.path.join(tomorrowPath, STORE_INDICATOR)
   if options.full:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd):
         logger.info("Store process will use current day's stage directory [%s]", todayPath)
         return { todayPath:todayDate }
      raise IOError("Unable to find staging directory to store (only tried today due to full option).")
   else:
      if os.path.isdir(todayPath) and os.path.exists(todayStageInd) and not os.path.exists(todayStoreInd):
         logger.info("Store process will use current day's stage directory [%s]", todayPath)
         return { todayPath:todayDate }
      elif os.path.isdir(yesterdayPath) and os.path.exists(yesterdayStageInd) and not os.path.exists(yesterdayStoreInd):
         logger.info("Store process will use previous day's stage directory [%s]", yesterdayPath)
         if config.store.warnMidnite:
            logger.warn("Warning: store process crossed midnite boundary to find data.")
         return { yesterdayPath:yesterdayDate }
      elif os.path.isdir(tomorrowPath) and os.path.exists(tomorrowStageInd) and not os.path.exists(tomorrowStoreInd):
         logger.info("Store process will use next day's stage directory [%s]", tomorrowPath)
         if config.store.warnMidnite:
            logger.warn("Warning: store process crossed midnite boundary to find data.")
         return { tomorrowPath:tomorrowDate }
      raise IOError("Unable to find unused staging directory to store (tried today, yesterday, tomorrow).")
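The blank factor calculation in _getNewDisc can be exercised in isolation. This sketch uses made-up capacity numbers; in the real action they come from the writer's retrieveCapacity() and getEstimatedImageSize() calls.

```python
def needs_blank(bytes_available, bytes_required, blank_factor):
    """Return True if the disc should be blanked before writing."""
    ratio = bytes_available / (1.0 + bytes_required)
    return ratio <= blank_factor

# A disc with 1 GB available and a 700 MB image, with a blanking factor of 1.5:
available = 1 * 1024 * 1024 * 1024
required = 700 * 1024 * 1024
print(needs_blank(available, required, 1.5))  # ratio is about 1.46, so True
```

Lowering the blanking factor makes blanking less likely; raising it makes blanking more likely, which is why the docstring suggests experimenting per setup.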

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.dvdwriter-module.html

CedarBackup2.writers.dvdwriter

    Module dvdwriter


    Provides functionality related to DVD writer devices.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Dmitry Rutsky <rutsky@inbox.ru>
Classes
      MediaDefinition
    Class encapsulating information about DVD media definitions.
      DvdWriter
    Class representing a device that knows how to write some kinds of DVD media.
      MediaCapacity
    Class encapsulating information about DVD media capacity.
      _ImageProperties
    Simple value object to hold image properties for DvdWriter.
Variables
      MEDIA_DVDPLUSR = 1
    Constant representing DVD+R media.
      MEDIA_DVDPLUSRW = 2
    Constant representing DVD+RW media.
      logger = logging.getLogger("CedarBackup2.log.writers.dvdwriter")
      GROWISOFS_COMMAND = ['growisofs']
      EJECT_COMMAND = ['eject']
      __package__ = 'CedarBackup2.writers'
CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2-module.html

CedarBackup2

    Module CedarBackup2


    Variables


CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.mbox-module.html

mbox

    Module mbox


    Classes

    LocalConfig
    MboxConfig
    MboxDir
    MboxFile

    Functions

    executeAction

    Variables

    GREPMAIL_COMMAND
    REVISION_PATH_EXTENSION
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mbox-module.html

CedarBackup2.extend.mbox

    Module mbox


    Provides an extension to back up mbox email files.

    Backing up email

Email folders (often stored as mbox flatfiles) are not well-suited to being backed up with an incremental backup like the one offered by Cedar Backup. This is because mbox files often change on a daily basis, forcing the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large folders. (Note that the alternative maildir format does not share this problem, since it typically uses one file per message.)

    One solution to this problem is to design a smarter incremental backup process, which backs up baseline content on the first day of the week, and then backs up only new messages added to that folder on every other day of the week. This way, the backup for any single day is only as large as the messages placed into the folder on that day. The backup isn't as "perfect" as the incremental backup process, because it doesn't preserve information about messages deleted from the backed-up folder. However, it should be much more space-efficient, and in a recovery situation, it seems better to restore too much data rather than too little.

    What is this extension?

    This is a Cedar Backup extension used to back up mbox email files via the Cedar Backup command line. Individual mbox files or directories containing mbox files can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental. It implements the "smart" incremental backup process discussed above, using functionality provided by the grepmail utility.

    This extension requires a new configuration section <mbox> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    The mbox action is conceptually similar to the standard collect action, except that mbox directories are not collected recursively. This implies some configuration changes (i.e. there's no need for global exclusions or an ignore file). If you back up a directory, all of the mbox files in that directory are backed up into a single tar file using the indicated compression method.
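The "smart incremental" idea described above relies on the grepmail utility in the real extension. Purely as an illustration (this is not the extension's actual implementation), the core selection rule, keep only messages newer than the last recorded revision date, can be sketched with the Python standard library:

```python
import mailbox
from email.utils import parsedate_to_datetime

def new_messages(mbox_path, last_revision):
    """Yield messages added to the mbox since last_revision (None means all)."""
    for msg in mailbox.mbox(mbox_path):
        if last_revision is None:
            yield msg  # full backup: take everything
            continue
        try:
            sent = parsedate_to_datetime(msg["Date"])
        except (TypeError, ValueError):
            continue  # unparseable Date header: skipped in this sketch
        if sent is not None and sent > last_revision:
            yield msg
```

Passing None for last_revision corresponds to the first day of the week (or a full backup), where the baseline content of the whole folder is captured.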


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
  MboxFile
Class representing mbox file configuration.
  MboxDir
Class representing mbox directory configuration.
      MboxConfig
    Class representing mbox configuration.
      LocalConfig
    Class representing this extension's configuration document.
Functions

executeAction(configPath, options, config)
   Executes the mbox backup action.
_getCollectMode(local, item)
   Gets the collect mode that should be used for an mbox file or directory.
_getCompressMode(local, item)
   Gets the compress mode that should be used for an mbox file or directory.
_getRevisionPath(config, item)
   Gets the path to the revision file associated with a repository.
_loadLastRevision(config, item, fullBackup, collectMode)
   Loads the last revision date for this item from disk and returns it.
_writeNewRevision(config, item, newRevision)
   Writes new revision information to disk.
_getExclusions(mboxDir)
   Gets exclusions (file and patterns) associated with an mbox directory.
_getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None)
   Gets the backup file path (including correct extension) associated with an mbox path.
_getTarfilePath(config, mboxPath, compressMode, newRevision)
   Gets the tarfile backup file path (including correct extension) associated with an mbox path.
_getOutputFile(backupPath, compressMode)
   Opens the output file used for saving backup information.
_backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None)
   Backs up an individual mbox file.
_backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns)
   Backs up a directory containing mbox files.
Variables
      logger = logging.getLogger("CedarBackup2.log.extend.mbox")
      GREPMAIL_COMMAND = ['grepmail']
      REVISION_PATH_EXTENSION = 'mboxlast'
      __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)


    Executes the mbox backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _getCollectMode(local, item)


    Gets the collect mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section.

    Parameters:
    • local - LocalConfig object.
    • item - Mbox file or directory
    Returns:
    Collect mode to use.

    _getCompressMode(local, item)


    Gets the compress mode that should be used for an mbox file or directory. Use file- or directory-specific value if possible, otherwise take from mbox section.

    Parameters:
    • local - LocalConfig object.
    • item - Mbox file or directory
    Returns:
    Compress mode to use.

    _getRevisionPath(config, item)


    Gets the path to the revision file associated with a repository.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    Returns:
    Absolute path to the revision file associated with the repository.

    _loadLastRevision(config, item, fullBackup, collectMode)


    Loads the last revision date for this item from disk and returns it.

    If this is a full backup, or if the revision file cannot be loaded for some reason, then None is returned. This indicates that there is no previous revision, so the entire mail file or directory should be backed up.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    • fullBackup - Indicates whether this is a full backup
    • collectMode - Indicates the collect mode for this item
    Returns:
    Revision date as a datetime.datetime object or None.

    Note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write.

    _writeNewRevision(config, item, newRevision)


    Writes new revision information to disk.

    If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Cedar Backup configuration.
    • item - Mbox file or directory
    • newRevision - Revision date as a datetime.datetime object.

    Note: We write the actual revision object to disk via pickle, so we don't deal with the datetime precision or format at all. Whatever's in the object is what we write.
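The pickle-based revision handling described in these two functions can be sketched in a few lines. The function names below are illustrative, not the extension's own, but the behavior matches the documentation: the datetime object is written as-is, a load failure simply means "no previous revision", and a full backup ignores any stored revision.

```python
import pickle

def write_revision(path, revision):
    """Persist the revision date via pickle; failures are logged, not raised."""
    try:
        with open(path, "wb") as f:
            pickle.dump(revision, f)
    except (IOError, OSError):
        pass  # the real extension logs the condition and continues

def load_revision(path, full_backup=False):
    """Return the last revision date, or None to force a complete backup."""
    if full_backup:
        return None
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (IOError, OSError, pickle.PickleError):
        return None
```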

    _getExclusions(mboxDir)


    Gets exclusions (file and patterns) associated with an mbox directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the mbox directory's relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the mbox directory's list of patterns.

    Parameters:
    • mboxDir - Mbox directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

    _getBackupPath(config, mboxPath, compressMode, newRevision, targetDir=None)


    Gets the backup file path (including correct extension) associated with an mbox path.

We assume that if the target directory is passed in, we're backing up a directory. Under these circumstances, we'll just use the basename of the individual path as the output file.

    Parameters:
    • config - Cedar Backup configuration.
    • mboxPath - Path to the indicated mbox file or directory
    • compressMode - Compress mode to use for this mbox path
    • newRevision - Revision this backup path represents
    • targetDir - Target directory in which the path should exist
    Returns:
    Absolute path to the backup file associated with the repository.

    Note: The backup path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object.

    _getTarfilePath(config, mboxPath, compressMode, newRevision)


    Gets the tarfile backup file path (including correct extension) associated with an mbox path.

    Along with the path, the tar archive mode is returned in a form that can be used with BackupFileList.generateTarfile.

    Parameters:
    • config - Cedar Backup configuration.
    • mboxPath - Path to the indicated mbox file or directory
    • compressMode - Compress mode to use for this mbox path
    • newRevision - Revision this backup path represents
    Returns:
    Tuple of (absolute path to tarfile, tar archive mode)

    Note: The tarfile path only contains the current date in YYYYMMDD format, but that's OK because the index information (stored elsewhere) is the actual date object.

    _getOutputFile(backupPath, compressMode)


    Opens the output file used for saving backup information.

    If the compress mode is "gzip", we'll open a GzipFile, and if the compress mode is "bzip2", we'll open a BZ2File. Otherwise, we'll just return an object from the normal open() method.

    Parameters:
    • backupPath - Path to file to open.
• compressMode - Compress mode of file ("none", "gzip", "bzip2").
    Returns:
    Output file object.
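The dispatch this function performs is simple enough to sketch directly; this is an illustrative stand-in rather than the module's actual code, but it follows the rule described above.

```python
import bz2
import gzip

def get_output_file(backup_path, compress_mode):
    """Open the backup output file with the requested compression."""
    if compress_mode == "gzip":
        return gzip.GzipFile(backup_path, "wb")
    elif compress_mode == "bzip2":
        return bz2.BZ2File(backup_path, "wb")
    return open(backup_path, "wb")
```

The returned object supports the usual write()/close() file interface regardless of compression mode, so callers don't need to care which branch was taken.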

    _backupMboxFile(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, targetDir=None)


    Backs up an individual mbox file.

    Parameters:
    • config - Cedar Backup configuration.
    • absolutePath - Path to mbox file to back up.
    • fullBackup - Indicates whether this should be a full backup.
    • collectMode - Indicates the collect mode for this item
• compressMode - Compress mode of file ("none", "gzip", "bzip2")
    • lastRevision - Date of last backup as datetime.datetime
    • newRevision - Date of new (current) backup as datetime.datetime
    • targetDir - Target directory to write the backed-up file into
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem backing up the mbox file.

    _backupMboxDir(config, absolutePath, fullBackup, collectMode, compressMode, lastRevision, newRevision, excludePaths, excludePatterns)


    Backs up a directory containing mbox files.

    Parameters:
    • config - Cedar Backup configuration.
    • absolutePath - Path to mbox directory to back up.
    • fullBackup - Indicates whether this should be a full backup.
    • collectMode - Indicates the collect mode for this item
• compressMode - Compress mode of file ("none", "gzip", "bzip2")
    • lastRevision - Date of last backup as datetime.datetime
    • newRevision - Date of new (current) backup as datetime.datetime
    • excludePaths - List of absolute paths to exclude.
    • excludePatterns - List of patterns to exclude.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem backing up the mbox file.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.PurgeDir-class.html

CedarBackup2.config.PurgeDir

    Class PurgeDir


    object --+
             |
            PurgeDir
    

    Class representing a Cedar Backup purge directory.

    The following restrictions exist on data in this class:

    • The absolute path must be an absolute path
    • The retain days value must be an integer >= 0.
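The validation rules above can be illustrated with a simplified, hypothetical stand-in class. The real class implements this with explicit property-target methods (_setAbsolutePath, _setRetainDays, and so on); the sketch below uses decorator syntax but enforces the same two restrictions.

```python
import os

class PurgeDirSketch(object):
    """Simplified stand-in for PurgeDir, validating its two fields."""

    def __init__(self, absolutePath=None, retainDays=None):
        self.absolutePath = absolutePath  # assignment goes through the setters
        self.retainDays = retainDays

    @property
    def absolutePath(self):
        return self._absolutePath

    @absolutePath.setter
    def absolutePath(self, value):
        if value is not None and not os.path.isabs(value):
            raise ValueError("Absolute path must be an absolute path.")
        self._absolutePath = value

    @property
    def retainDays(self):
        return self._retainDays

    @retainDays.setter
    def retainDays(self, value):
        if value is not None and (not isinstance(value, int) or value < 0):
            raise ValueError("Retain days must be an integer >= 0.")
        self._retainDays = value
```

Constructing PurgeDirSketch("stage", 7) or PurgeDirSketch("/tmp", -1) raises ValueError, mirroring the constructor behavior documented below.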
Instance Methods

__init__(self, absolutePath=None, retainDays=None)
   Constructor for the PurgeDir class.
__repr__(self)
   Official string representation for class instance.
__str__(self)
   Informal string representation for class instance.
__cmp__(self, other)
   Definition of equals operator for this class.
_setAbsolutePath(self, value)
   Property target used to set the absolute path.
_getAbsolutePath(self)
   Property target used to get the absolute path.
_setRetainDays(self, value)
   Property target used to set the retain days value.
_getRetainDays(self)
   Property target used to get the retain days value.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      absolutePath
    Absolute path of directory to purge.
      retainDays
    Number of days content within directory should be retained.

    Inherited from object: __class__

Method Details

    __init__(self, absolutePath=None, retainDays=None)
    (Constructor)


    Constructor for the PurgeDir class.

    Parameters:
    • absolutePath - Absolute path of the directory to be purged.
    • retainDays - Number of days content within directory should be retained.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAbsolutePath(self, value)


    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRetainDays(self, value)


    Property target used to set the retain days value. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    absolutePath

    Absolute path of directory to purge.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

    retainDays

    Number of days content within directory should be retained.

    Get Method:
_getRetainDays(self) - Property target used to get the retain days value.
    Set Method:
    _setRetainDays(self, value) - Property target used to set the retain days value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2-pysrc.html

    Source Code for Package CedarBackup2

# -*- coding: iso-8859-1 -*-
# vim: set ft=python ts=3 sw=3 expandtab:
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#              C E D A R
#          S O L U T I O N S       "Software done right."
#           S O F T W A R E
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
# Author   : Kenneth J. Pronovici <pronovic@ieee.org>
# Language : Python 2 (>= 2.7)
# Project  : Cedar Backup, release 2
# Purpose  : Provides package initialization
#
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #

########################################################################
# Module documentation
########################################################################

"""
Implements local and remote backups to CD or DVD media.

Cedar Backup is a software package designed to manage system backups for a pool
of local and remote machines.  Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories.  It can also be easily extended to support other kinds of data
sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with
the expectation that the disc will be changed or overwritten at the beginning
of each week.  If your hardware is new enough, Cedar Backup can write
multisession discs, allowing you to add incremental data to a disc on a daily
basis.

Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python programming language.

@author: Kenneth J. Pronovici <pronovic@ieee.org>
"""


########################################################################
# Package initialization
########################################################################

# Using 'from CedarBackup2 import *' will just import the modules listed
# in the __all__ variable.

__all__ = [ 'actions', 'cli', 'config', 'extend', 'filesystem', 'knapsack',
    53              'peer', 'release', 'tools', 'util', 'writers', ] 
    54   
    

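The comment at the top of the package initialization section can be demonstrated directly: `from CedarBackup2 import *` imports only the names listed in `__all__`. The sketch below shows the same mechanism with a throwaway module rather than CedarBackup2 itself (the module name and contents are invented for illustration):

```python
import os
import sys
import tempfile

# Build a tiny module on disk whose __all__ hides one of its two names.
moduleDir = tempfile.mkdtemp()
with open(os.path.join(moduleDir, "mypkg_demo.py"), "w") as f:
   f.write("__all__ = ['visible']\nvisible = 1\nhidden = 2\n")
sys.path.insert(0, moduleDir)

# A wildcard import honors __all__: 'visible' is bound, 'hidden' is not.
namespace = {}
exec("from mypkg_demo import *", namespace)
assert "visible" in namespace
assert "hidden" not in namespace
```

Without `__all__`, a wildcard import would instead pull in every top-level name that does not start with an underscore.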
CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.mysql.LocalConfig-class.html: CedarBackup2.extend.mysql.LocalConfig
    Package CedarBackup2 :: Package extend :: Module mysql :: Class LocalConfig

    Class LocalConfig

    source code

    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit MySQL-specific configuration values. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, xmlData=None, xmlPath=None, validate=True)
    Initializes a configuration object.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    validate(self)
    Validates configuration represented by the object.
    source code
     
    addConfig(self, xmlDom, parentNode)
    Adds a <mysql> configuration section as the next child of a parent.
    source code
     
    _setMysql(self, value)
    Property target used to set the mysql configuration value.
    source code
     
    _getMysql(self)
    Property target used to get the mysql configuration value.
    source code
     
    _parseXmlData(self, xmlData)
    Internal method to parse an XML string into the object.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods
     
    _parseMysql(parentNode)
    Parses a mysql configuration section.
    source code
Properties
      mysql
    Mysql configuration in terms of a MysqlConfig object.

    Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)

    source code 

    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.
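The constructor contract described above (at most one of `xmlData`/`xmlPath`, empty configuration when neither is given, optional validation after parsing) can be sketched as a simplified stand-in; the helper names and the validation rule below are illustrative, not the real LocalConfig:

```python
from xml.dom.minidom import parseString

def initConfig(xmlData=None, xmlPath=None, validate=True):
   """Mimic the LocalConfig.__init__ argument contract.

   At most one XML source may be supplied; with no source the result is an
   empty (invalid) configuration; parsing may raise even when validate is
   False, because malformed XML fails at a lower level.
   """
   if xmlData is not None and xmlPath is not None:
      raise ValueError("Use either xmlData or xmlPath, but not both.")
   if xmlPath is not None:
      with open(xmlPath) as f:
         xmlData = f.read()
   if xmlData is None:
      return None                      # empty configuration
   dom = parseString(xmlData)          # raises on malformed XML regardless of validate
   if validate and dom.documentElement.tagName != "mysql":
      raise ValueError("Expected a <mysql> configuration document.")
   return dom
```

Usage mirrors the documented behavior: `initConfig(xmlData="<mysql>...</mysql>")` parses and validates, while `initConfig()` yields an empty configuration to be filled in later.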

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)

    source code 

    Validates configuration represented by the object.

    The compress mode must be filled in. Then, if the 'all' flag is set, no databases are allowed, and if the 'all' flag is not set, at least one database is required.

    Raises:
    • ValueError - If one of the validations fails.

    addConfig(self, xmlDom, parentNode)

    source code 

    Adds a <mysql> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      user           //cb_config/mysql/user
      password       //cb_config/mysql/password
      compressMode   //cb_config/mysql/compress_mode
      all            //cb_config/mysql/all
    

    We also add groups of the following items, one list element per item:

      database       //cb_config/mysql/database
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
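A sketch of the kind of document `addConfig` produces, built with `xml.dom.minidom`. The element names follow the documented paths above; the helper function itself and its argument list are illustrative, not the shipped implementation:

```python
from xml.dom.minidom import getDOMImplementation

def addMysqlSection(xmlDom, parentNode, user, password, compressMode, allFlag, databases):
   """Append a <mysql> section carrying the documented fields."""
   section = xmlDom.createElement("mysql")
   parentNode.appendChild(section)
   for tag, value in [("user", user), ("password", password),
                      ("compress_mode", compressMode),
                      ("all", "Y" if allFlag else "N")]:
      node = xmlDom.createElement(tag)
      node.appendChild(xmlDom.createTextNode(value))
      section.appendChild(node)
   for database in databases:           # one <database> element per item
      node = xmlDom.createElement("database")
      node.appendChild(xmlDom.createTextNode(database))
      section.appendChild(node)
   return section

impl = getDOMImplementation()
doc = impl.createDocument(None, "cb_config", None)
addMysqlSection(doc, doc.documentElement, "backup", "secret", "gzip", False, ["appdb"])
```

The resulting tree serializes to `<cb_config><mysql><user>backup</user>...<database>appdb</database></mysql></cb_config>`, matching the `//cb_config/mysql/...` paths listed above.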

    _setMysql(self, value)

    source code 

    Property target used to set the mysql configuration value. If not None, the value must be a MysqlConfig object.

    Raises:
    • ValueError - If the value is not a MysqlConfig

    _parseXmlData(self, xmlData)

    source code 

    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the mysql configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseMysql(parentNode)
    Static Method

    source code 

    Parses a mysql configuration section.

    We read the following fields:

      user           //cb_config/mysql/user
      password       //cb_config/mysql/password
      compressMode   //cb_config/mysql/compress_mode
      all            //cb_config/mysql/all
    

    We also read groups of the following items, one list element per item:

      databases      //cb_config/mysql/database
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    MysqlConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

Property Details

    mysql

    Mysql configuration in terms of a MysqlConfig object.

    Get Method:
    _getMysql(self) - Property target used to get the mysql configuration value.
    Set Method:
    _setMysql(self, value) - Property target used to set the mysql configuration value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.collect-pysrc.html: CedarBackup2.actions.collect
    Package CedarBackup2 :: Package actions :: Module collect

    Source Code for Module CedarBackup2.actions.collect

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2008,2011 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Implements the standard 'collect' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'collect' action. 
     40  @sort: executeCollect 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import logging 
     52  import pickle 
     53   
     54  # Cedar Backup modules 
     55  from CedarBackup2.filesystem import BackupFileList, FilesystemList 
     56  from CedarBackup2.util import isStartOfWeek, changeOwnership, displayBytes, buildNormalizedPath 
     57  from CedarBackup2.actions.constants import DIGEST_EXTENSION, COLLECT_INDICATOR 
     58  from CedarBackup2.actions.util import writeIndicatorFile 
     59   
     60   
     61  ######################################################################## 
     62  # Module-wide constants and variables 
     63  ######################################################################## 
     64   
     65  logger = logging.getLogger("CedarBackup2.log.actions.collect") 
     66   
     67   
     68  ######################################################################## 
     69  # Public functions 
     70  ######################################################################## 
     71   
     72  ############################ 
     73  # executeCollect() function 
     74  ############################ 
     75   
    
     76  def executeCollect(configPath, options, config):
     77     """
     78     Executes the collect backup action.
     79  
     80     @note: When the collect action is complete, we will write a collect
     81     indicator to the collect directory, so it's obvious that the collect action
     82     has completed.  The stage process uses this indicator to decide whether a
     83     peer is ready to be staged.
     84  
     85     @param configPath: Path to configuration file on disk.
     86     @type configPath: String representing a path on disk.
     87  
     88     @param options: Program command-line options.
     89     @type options: Options object.
     90  
     91     @param config: Program configuration.
     92     @type config: Config object.
     93  
     94     @raise ValueError: Under many generic error conditions
     95     @raise TarError: If there is a problem creating a tar file
     96     """
     97     logger.debug("Executing the 'collect' action.")
     98     if config.options is None or config.collect is None:
     99        raise ValueError("Collect configuration is not properly filled in.")
    100     if ((config.collect.collectFiles is None or len(config.collect.collectFiles) < 1) and
    101         (config.collect.collectDirs is None or len(config.collect.collectDirs) < 1)):
    102        raise ValueError("There must be at least one collect file or collect directory.")
    103     fullBackup = options.full
    104     logger.debug("Full backup flag is [%s]", fullBackup)
    105     todayIsStart = isStartOfWeek(config.options.startingDay)
    106     resetDigest = fullBackup or todayIsStart
    107     logger.debug("Reset digest flag is [%s]", resetDigest)
    108     if config.collect.collectFiles is not None:
    109        for collectFile in config.collect.collectFiles:
    110           logger.debug("Working with collect file [%s]", collectFile.absolutePath)
    111           collectMode = _getCollectMode(config, collectFile)
    112           archiveMode = _getArchiveMode(config, collectFile)
    113           digestPath = _getDigestPath(config, collectFile.absolutePath)
    114           tarfilePath = _getTarfilePath(config, collectFile.absolutePath, archiveMode)
    115           if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
    116              logger.debug("File meets criteria to be backed up today.")
    117              _collectFile(config, collectFile.absolutePath, tarfilePath,
    118                           collectMode, archiveMode, resetDigest, digestPath)
    119           else:
    120              logger.debug("File will not be backed up, per collect mode.")
    121           logger.info("Completed collecting file [%s]", collectFile.absolutePath)
    122     if config.collect.collectDirs is not None:
    123        for collectDir in config.collect.collectDirs:
    124           logger.debug("Working with collect directory [%s]", collectDir.absolutePath)
    125           collectMode = _getCollectMode(config, collectDir)
    126           archiveMode = _getArchiveMode(config, collectDir)
    127           ignoreFile = _getIgnoreFile(config, collectDir)
    128           linkDepth = _getLinkDepth(collectDir)
    129           dereference = _getDereference(collectDir)
    130           recursionLevel = _getRecursionLevel(collectDir)
    131           (excludePaths, excludePatterns) = _getExclusions(config, collectDir)
    132           if fullBackup or (collectMode in ['daily', 'incr', ]) or (collectMode == 'weekly' and todayIsStart):
    133              logger.debug("Directory meets criteria to be backed up today.")
    134              _collectDirectory(config, collectDir.absolutePath,
    135                                collectMode, archiveMode, ignoreFile, linkDepth, dereference,
    136                                resetDigest, excludePaths, excludePatterns, recursionLevel)
    137           else:
    138              logger.debug("Directory will not be backed up, per collect mode.")
    139           logger.info("Completed collecting directory [%s]", collectDir.absolutePath)
    140     writeIndicatorFile(config.collect.targetDir, COLLECT_INDICATOR,
    141                        config.options.backupUser, config.options.backupGroup)
    142     logger.info("Executed the 'collect' action successfully.")
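The mode test applied to each collect file and directory above (a full backup always collects; 'daily' and 'incr' collect every day; 'weekly' collects only on the configured starting day) can be isolated into a small predicate. The helper name below is illustrative, not part of the Cedar Backup API:

```python
def shouldCollect(collectMode, fullBackup, todayIsStart):
   """Mirror of the collect-mode test used inside executeCollect()."""
   return (fullBackup
           or collectMode in ('daily', 'incr')
           or (collectMode == 'weekly' and todayIsStart))
```

For example, a 'weekly' directory is skipped on any day other than the starting day of the week, unless `--full` forces a full backup.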
    143  
    144  
    145  ########################################################################
    146  # Private utility functions
    147  ########################################################################
    148  
    149  ##########################
    150  # _collectFile() function
    151  ##########################
    152  
    153  def _collectFile(config, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
    154     """
    155     Collects a configured collect file.
    156  
    157     The indicated collect file is collected into the indicated tarfile.
    158     For files that are collected incrementally, we'll use the indicated
    159     digest path and pay attention to the reset digest flag (basically, the reset
    160     digest flag ignores any existing digest, but a new digest is always
    161     rewritten).
    162  
    163     The caller must decide what the collect and archive modes are, since they
    164     can be on both the collect configuration and the collect file itself.
    165  
    166     @param config: Config object.
    167     @param absolutePath: Absolute path of file to collect.
    168     @param tarfilePath: Path to tarfile that should be created.
    169     @param collectMode: Collect mode to use.
    170     @param archiveMode: Archive mode to use.
    171     @param resetDigest: Reset digest flag.
    172     @param digestPath: Path to digest file on disk, if needed.
    173     """
    174     backupList = BackupFileList()
    175     backupList.addFile(absolutePath)
    176     _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    177  
    178  
    179  ###############################
    180  # _collectDirectory() function
    181  ###############################
    182  
    183  def _collectDirectory(config, absolutePath, collectMode, archiveMode,
    184                        ignoreFile, linkDepth, dereference, resetDigest,
    185                        excludePaths, excludePatterns, recursionLevel):
    186     """
    187     Collects a configured collect directory.
    188  
    189     The indicated collect directory is collected into the indicated tarfile.
    190     For directories that are collected incrementally, we'll use the indicated
    191     digest path and pay attention to the reset digest flag (basically, the reset
    192     digest flag ignores any existing digest, but a new digest is always
    193     rewritten).
    194  
    195     The caller must decide what the collect and archive modes are, since they
    196     can be on both the collect configuration and the collect directory itself.
    197  
    198     @param config: Config object.
    199     @param absolutePath: Absolute path of directory to collect.
    200     @param collectMode: Collect mode to use.
    201     @param archiveMode: Archive mode to use.
    202     @param ignoreFile: Ignore file to use.
    203     @param linkDepth: Link depth value to use.
    204     @param dereference: Dereference flag to use.
    205     @param resetDigest: Reset digest flag.
    206     @param excludePaths: List of absolute paths to exclude.
    207     @param excludePatterns: List of patterns to exclude.
    208     @param recursionLevel: Recursion level (zero for no recursion)
    209     """
    210     if recursionLevel == 0:
    211        # Collect the actual directory because we're at recursion level 0
    212        logger.info("Collecting directory [%s]", absolutePath)
    213        tarfilePath = _getTarfilePath(config, absolutePath, archiveMode)
    214        digestPath = _getDigestPath(config, absolutePath)
    215  
    216        backupList = BackupFileList()
    217        backupList.ignoreFile = ignoreFile
    218        backupList.excludePaths = excludePaths
    219        backupList.excludePatterns = excludePatterns
    220        backupList.addDirContents(absolutePath, linkDepth=linkDepth, dereference=dereference)
    221  
    222        _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath)
    223     else:
    224        # Find all of the immediate subdirectories
    225        subdirs = FilesystemList()
    226        subdirs.excludeFiles = True
    227        subdirs.excludeLinks = True
    228        subdirs.excludePaths = excludePaths
    229        subdirs.excludePatterns = excludePatterns
    230        subdirs.addDirContents(path=absolutePath, recursive=False, addSelf=False)
    231  
    232        # Back up the subdirectories separately
    233        for subdir in subdirs:
    234           _collectDirectory(config, subdir, collectMode, archiveMode,
    235                             ignoreFile, linkDepth, dereference, resetDigest,
    236                             excludePaths, excludePatterns, recursionLevel-1)
    237           excludePaths.append(subdir)  # this directory is already backed up, so exclude it
    238  
    239        # Back up everything that hasn't previously been backed up
    240        _collectDirectory(config, absolutePath, collectMode, archiveMode,
    241                          ignoreFile, linkDepth, dereference, resetDigest,
    242                          excludePaths, excludePatterns, 0)
    243  
    244  
    245  ############################
    246  # _executeBackup() function
    247  ############################
    248  
    249  def _executeBackup(config, backupList, absolutePath, tarfilePath, collectMode, archiveMode, resetDigest, digestPath):
    250     """
    251     Execute the backup process for the indicated backup list.
    252  
    253     This function exists mainly to consolidate functionality between the
    254     L{_collectFile} and L{_collectDirectory} functions.  Those functions build
    255     the backup list; this function causes the backup to execute properly and
    256     also manages usage of the digest file on disk as explained in their
    257     comments.
    258  
    259     For collect files, the digest file will always just contain the single file
    260     that is being backed up.  This might be a little wasteful in terms of the
    261     number of files that we keep around, but it's consistent and easy to understand.
    262  
    263     @param config: Config object.
    264     @param backupList: List to execute backup for
    265     @param absolutePath: Absolute path of directory or file to collect.
    266     @param tarfilePath: Path to tarfile that should be created.
    267     @param collectMode: Collect mode to use.
    268     @param archiveMode: Archive mode to use.
    269     @param resetDigest: Reset digest flag.
    270     @param digestPath: Path to digest file on disk, if needed.
    271     """
    272     if collectMode != 'incr':
    273        logger.debug("Collect mode is [%s]; no digest will be used.", collectMode)
    274        if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
    275           logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize()))
    276        else:
    277           logger.info("Backing up %d files in [%s] (%s).", len(backupList), absolutePath, displayBytes(backupList.totalSize()))
    278        if len(backupList) > 0:
    279           backupList.generateTarfile(tarfilePath, archiveMode, True)
    280           changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
    281     else:
    282        if resetDigest:
    283           logger.debug("Based on resetDigest flag, digest will be cleared.")
    284           oldDigest = {}
    285        else:
    286           logger.debug("Based on resetDigest flag, digest will be loaded from disk.")
    287           oldDigest = _loadDigest(digestPath)
    288        (removed, newDigest) = backupList.removeUnchanged(oldDigest, captureDigest=True)
    289        logger.debug("Removed %d unchanged files based on digest values.", removed)
    290        if len(backupList) == 1 and backupList[0] == absolutePath:  # special case for individual file
    291           logger.info("Backing up file [%s] (%s).", absolutePath, displayBytes(backupList.totalSize()))
    292        else:
    293           logger.info("Backing up %d files in [%s] (%s).", len(backupList), absolutePath, displayBytes(backupList.totalSize()))
    294        if len(backupList) > 0:
    295           backupList.generateTarfile(tarfilePath, archiveMode, True)
    296           changeOwnership(tarfilePath, config.options.backupUser, config.options.backupGroup)
    297        _writeDigest(config, newDigest, digestPath)
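The incremental branch above hinges on `removeUnchanged`: files whose digest matches the previous run are dropped from the backup list, and a fresh digest map is captured for next time. A standalone sketch of that selection logic (a hypothetical helper, not the real `BackupFileList` method; the digest algorithm here is an assumption):

```python
import hashlib

def removeUnchanged(paths, oldDigest):
   """Return (changed, removed, newDigest) for the given file paths.

   Files whose content digest matches oldDigest are considered unchanged
   and excluded; newDigest always reflects the current state of every file.
   """
   newDigest = {}
   changed = []
   for path in paths:
      with open(path, "rb") as f:
         digest = hashlib.sha1(f.read()).hexdigest()
      newDigest[path] = digest
      if oldDigest.get(path) != digest:
         changed.append(path)
   removed = len(paths) - len(changed)
   return changed, removed, newDigest
```

Run twice in a row with the captured digest, the second pass backs up nothing; touch one file and only that file is selected again.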
    298  
    299  
    300  #########################
    301  # _loadDigest() function
    302  #########################
    303  
    304  def _loadDigest(digestPath):
    305     """
    306     Loads the indicated digest path from disk into a dictionary.
    307  
    308     If we can't load the digest successfully (either because it doesn't exist or
    309     for some other reason), then an empty dictionary will be returned - but the
    310     condition will be logged.
    311  
    312     @param digestPath: Path to the digest file on disk.
    313  
    314     @return: Dictionary representing contents of digest path.
    315     """
    316     if not os.path.isfile(digestPath):
    317        digest = {}
    318        logger.debug("Digest [%s] does not exist on disk.", digestPath)
    319     else:
    320        try:
    321           digest = pickle.load(open(digestPath, "r"))
    322           logger.debug("Loaded digest [%s] from disk: %d entries.", digestPath, len(digest))
    323        except:
    324           digest = {}
    325           logger.error("Failed loading digest [%s] from disk.", digestPath)
    326     return digest
    327  
    328  
    329  ##########################
    330  # _writeDigest() function
    331  ##########################
    332  
    333  def _writeDigest(config, digest, digestPath):
    334     """
    335     Writes the digest dictionary to the indicated digest path on disk.
    336  
    337     If we can't write the digest successfully for any reason, we'll log the
    338     condition but won't throw an exception.
    339  
    340     @param config: Config object.
    341     @param digest: Digest dictionary to write to disk.
    342     @param digestPath: Path to the digest file on disk.
    343     """
    344     try:
    345        pickle.dump(digest, open(digestPath, "w"))
    346        changeOwnership(digestPath, config.options.backupUser, config.options.backupGroup)
    347        logger.debug("Wrote new digest [%s] to disk: %d entries.", digestPath, len(digest))
    348     except:
    349        logger.error("Failed to write digest [%s] to disk.", digestPath)
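The digest round-trip implemented by `_writeDigest()` and `_loadDigest()` is a plain pickle of a dictionary, with all failure modes collapsing to an empty map so a bad digest never aborts a backup. A minimal sketch (note that under Python 3 the pickle file must be opened in binary mode, whereas the Python 2 code above uses text mode):

```python
import os
import pickle

def writeDigest(digest, digestPath):
   with open(digestPath, "wb") as f:
      pickle.dump(digest, f)

def loadDigest(digestPath):
   if not os.path.isfile(digestPath):
      return {}      # a missing digest just means "back up everything"
   try:
      with open(digestPath, "rb") as f:
         return pickle.load(f)
   except Exception:
      return {}      # a corrupt digest is treated the same way
```

Returning `{}` on any failure is the design choice that makes the incremental mode self-healing: the worst case is one unnecessarily full collect.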
    350  
    351  
    352  ########################################################################
    353  # Private attribute "getter" functions
    354  ########################################################################
    355  
    356  #############################
    357  # _getCollectMode() function
    358  #############################
    359  
    360  def _getCollectMode(config, item):
    361     """
    362     Gets the collect mode that should be used for a collect directory or file.
    363     If possible, use the one on the file or directory, otherwise take from collect section.
    364     @param config: Config object.
    365     @param item: C{CollectFile} or C{CollectDir} object
    366     @return: Collect mode to use.
    367     """
    368     if item.collectMode is None:
    369        collectMode = config.collect.collectMode
    370     else:
    371        collectMode = item.collectMode
    372     logger.debug("Collect mode is [%s]", collectMode)
    373     return collectMode
    374  
    375  
    376  #############################
    377  # _getArchiveMode() function
    378  #############################
    379  
    380  def _getArchiveMode(config, item):
    381     """
    382     Gets the archive mode that should be used for a collect directory or file.
    383     If possible, use the one on the file or directory, otherwise take from collect section.
    384     @param config: Config object.
    385     @param item: C{CollectFile} or C{CollectDir} object
    386     @return: Archive mode to use.
    387     """
    388     if item.archiveMode is None:
    389        archiveMode = config.collect.archiveMode
    390     else:
    391        archiveMode = item.archiveMode
    392     logger.debug("Archive mode is [%s]", archiveMode)
    393     return archiveMode
    394  
    395  
    396  ############################
    397  # _getIgnoreFile() function
    398  ############################
    399  
    400  def _getIgnoreFile(config, item):
    401     """
    402     Gets the ignore file that should be used for a collect directory or file.
    403     If possible, use the one on the file or directory, otherwise take from collect section.
    404     @param config: Config object.
    405     @param item: C{CollectFile} or C{CollectDir} object
    406     @return: Ignore file to use.
    407     """
    408     if item.ignoreFile is None:
    409        ignoreFile = config.collect.ignoreFile
    410     else:
    411        ignoreFile = item.ignoreFile
    412     logger.debug("Ignore file is [%s]", ignoreFile)
    413     return ignoreFile
    414  
    415  
    416  ############################
    417  # _getLinkDepth() function
    418  ############################
    419  
    420  def _getLinkDepth(item):
    421     """
    422     Gets the link depth that should be used for a collect directory.
    423     If possible, use the one on the directory, otherwise set a value of 0 (zero).
    424     @param item: C{CollectDir} object
    425     @return: Link depth to use.
    426     """
    427     if item.linkDepth is None:
    428        linkDepth = 0
    429     else:
    430        linkDepth = item.linkDepth
    431     logger.debug("Link depth is [%d]", linkDepth)
    432     return linkDepth
    433  
    434  
    435  #############################
    436  # _getDereference() function
    437  #############################
    438  
    439  def _getDereference(item):
    440     """
    441     Gets the dereference flag that should be used for a collect directory.
    442     If possible, use the one on the directory, otherwise set a value of False.
    443     @param item: C{CollectDir} object
    444     @return: Dereference flag to use.
    445     """
    446     if item.dereference is None:
    447        dereference = False
    448     else:
    449        dereference = item.dereference
    450     logger.debug("Dereference flag is [%s]", dereference)
    451     return dereference
    452  
    453  
    454  ################################
    455  # _getRecursionLevel() function
    456  ################################
    457  
    458  def _getRecursionLevel(item):
    459     """
    460     Gets the recursion level that should be used for a collect directory.
    461     If possible, use the one on the directory, otherwise set a value of 0 (zero).
    462     @param item: C{CollectDir} object
    463     @return: Recursion level to use.
    464     """
    465     if item.recursionLevel is None:
    466        recursionLevel = 0
    467     else:
    468        recursionLevel = item.recursionLevel
    469     logger.debug("Recursion level is [%d]", recursionLevel)
    470     return recursionLevel
    471  
    472  
    473  ############################
    474  # _getDigestPath() function
    475  ############################
    476  
    477  def _getDigestPath(config, absolutePath):
    478     """
    479     Gets the digest path associated with a collect directory or file.
    480     @param config: Config object.
    481     @param absolutePath: Absolute path to generate digest for
    482     @return: Absolute path to the digest associated with the collect directory or file.
    483     """
    484     normalized = buildNormalizedPath(absolutePath)
    485     filename = "%s.%s" % (normalized, DIGEST_EXTENSION)
    486     digestPath = os.path.join(config.options.workingDir, filename)
    487     logger.debug("Digest path is [%s]", digestPath)
    488     return digestPath
    489  
    490  
    491  #############################
    492  # _getTarfilePath() function
    493  #############################
    494  
    495  def _getTarfilePath(config, absolutePath, archiveMode):
    496     """
    497     Gets the tarfile path (including correct extension) associated with a collect directory.
    498     @param config: Config object.
    499     @param absolutePath: Absolute path to generate tarfile for
    500     @param archiveMode: Archive mode to use for this tarfile.
    501     @return: Absolute path to the tarfile associated with the collect directory.
    502     """
    503     if archiveMode == 'tar':
    504        extension = "tar"
    505     elif archiveMode == 'targz':
    506        extension = "tar.gz"
    507     elif archiveMode == 'tarbz2':
    508        extension = "tar.bz2"
    509     normalized = buildNormalizedPath(absolutePath)
    510     filename = "%s.%s" % (normalized, extension)
    511     tarfilePath = os.path.join(config.collect.targetDir, filename)
    512     logger.debug("Tarfile path is [%s]", tarfilePath)
    513     return tarfilePath
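The if/elif chain in `_getTarfilePath()` maps archive mode to a filename extension; a dictionary expresses the same mapping and fails loudly for an unrecognized mode instead of leaving `extension` unbound. This is a hypothetical rewrite for illustration, not the shipped implementation:

```python
ARCHIVE_EXTENSIONS = {"tar": "tar", "targz": "tar.gz", "tarbz2": "tar.bz2"}

def tarfileExtension(archiveMode):
   """Map a configured archive mode to its tarfile extension."""
   try:
      return ARCHIVE_EXTENSIONS[archiveMode]
   except KeyError:
      raise ValueError("Unknown archive mode: %s" % archiveMode)
```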
    514 515 516 ############################ 517 # _getExclusions() function 518 ############################ 519
    520 -def _getExclusions(config, collectDir):
    521 """ 522 Gets exclusions (file and patterns) associated with a collect directory. 523 524 The returned files value is a list of absolute paths to be excluded from the 525 backup for a given directory. It is derived from the collect configuration 526 absolute exclude paths and the collect directory's absolute and relative 527 exclude paths. 528 529 The returned patterns value is a list of patterns to be excluded from the 530 backup for a given directory. It is derived from the list of patterns from 531 the collect configuration and from the collect directory itself. 532 533 @param config: Config object. 534 @param collectDir: Collect directory object. 535 536 @return: Tuple (files, patterns) indicating what to exclude. 537 """ 538 paths = [] 539 if config.collect.absoluteExcludePaths is not None: 540 paths.extend(config.collect.absoluteExcludePaths) 541 if collectDir.absoluteExcludePaths is not None: 542 paths.extend(collectDir.absoluteExcludePaths) 543 if collectDir.relativeExcludePaths is not None: 544 for relativePath in collectDir.relativeExcludePaths: 545 paths.append(os.path.join(collectDir.absolutePath, relativePath)) 546 patterns = [] 547 if config.collect.excludePatterns is not None: 548 patterns.extend(config.collect.excludePatterns) 549 if collectDir.excludePatterns is not None: 550 patterns.extend(collectDir.excludePatterns) 551 logger.debug("Exclude paths: %s", paths) 552 logger.debug("Exclude patterns: %s", patterns) 553 return(paths, patterns)

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion-module.html
CedarBackup2.extend.subversion
    Package CedarBackup2 :: Package extend :: Module subversion

    Module subversion

    source code

    Provides an extension to back up Subversion repositories.

This is a Cedar Backup extension used to back up Subversion repositories via the Cedar Backup command line. Each Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action: weekly, daily, incremental.

    This extension requires a new configuration section <subversion> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). Although the repository type can be specified in configuration, that information is just kept around for reference. It doesn't affect the backup. Both kinds of repositories are backed up in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do that, then use the normal collect action. This is probably simpler, although it carries its own advantages and disadvantages (plus you will have to be careful to exclude the working directories Subversion uses when building an update to commit). Check the Subversion documentation for more information.
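The incremental dump described above can be sketched as follows. This is an illustration only: buildDumpCommand() is a hypothetical helper, while the extension itself builds on the SVNADMIN_COMMAND variable and drives the command through its own executeCommand() wrapper.

```python
# Hypothetical helper showing the shape of the svnadmin invocation the
# extension performs for an incremental backup of either repository type.
def buildDumpCommand(repositoryPath, startRevision, endRevision):
    """Build an 'svnadmin dump' argument list for an incremental backup."""
    return ["svnadmin", "dump", "--quiet", "--incremental",
            "-r", "%d:%d" % (startRevision, endRevision), repositoryPath]
```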


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      RepositoryDir
    Class representing Subversion repository directory.
      Repository
Class representing generic Subversion repository configuration.
      SubversionConfig
    Class representing Subversion configuration.
      LocalConfig
    Class representing this extension's configuration document.
      BDBRepository
    Class representing Subversion BDB (Berkeley Database) repository configuration.
      FSFSRepository
    Class representing Subversion FSFS repository configuration.
Functions
     
    executeAction(configPath, options, config)
    Executes the Subversion backup action.
    source code
     
    _getCollectMode(local, repository)
    Gets the collect mode that should be used for a repository.
    source code
     
    _getCompressMode(local, repository)
    Gets the compress mode that should be used for a repository.
    source code
     
    _getRevisionPath(config, repository)
    Gets the path to the revision file associated with a repository.
    source code
     
    _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision)
    Gets the backup file path (including correct extension) associated with a repository.
    source code
     
    _getRepositoryPaths(repositoryDir)
    Gets a list of child repository paths within a repository directory.
    source code
     
    _getExclusions(repositoryDir)
Gets exclusions (file and patterns) associated with a repository directory.
    source code
     
    _backupRepository(config, local, todayIsStart, fullBackup, repository)
    Backs up an individual Subversion repository.
    source code
     
    _getOutputFile(backupPath, compressMode)
    Opens the output file used for saving the Subversion dump.
    source code
     
    _loadLastRevision(revisionPath)
    Loads the indicated revision file from disk into an integer.
    source code
     
    _writeLastRevision(config, revisionPath, endRevision)
    Writes the end revision to the indicated revision file on disk.
    source code
     
    backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion repository.
    source code
     
    getYoungestRevision(repositoryPath)
    Gets the youngest (newest) revision in a Subversion repository using svnlook.
    source code
     
    backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion BDB repository.
    source code
     
    backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)
    Backs up an individual Subversion FSFS repository.
    source code
Variables
      logger = logging.getLogger("CedarBackup2.log.extend.subversion")
      SVNLOOK_COMMAND = ['svnlook']
      SVNADMIN_COMMAND = ['svnadmin']
      REVISION_PATH_EXTENSION = 'svnlast'
      __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the Subversion backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _getCollectMode(local, repository)

    source code 

    Gets the collect mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section.

Parameters:
• local - LocalConfig object.
• repository - Repository object.
    Returns:
    Collect mode to use.

    _getCompressMode(local, repository)

    source code 

    Gets the compress mode that should be used for a repository. Use repository's if possible, otherwise take from subversion section.

    Parameters:
    • local - LocalConfig object.
    • repository - Repository object.
    Returns:
    Compress mode to use.

    _getRevisionPath(config, repository)

    source code 

    Gets the path to the revision file associated with a repository.

    Parameters:
    • config - Config object.
    • repository - Repository object.
    Returns:
    Absolute path to the revision file associated with the repository.

    _getBackupPath(config, repositoryPath, compressMode, startRevision, endRevision)

    source code 

    Gets the backup file path (including correct extension) associated with a repository.

    Parameters:
    • config - Config object.
    • repositoryPath - Path to the indicated repository
    • compressMode - Compress mode to use for this repository.
    • startRevision - Starting repository revision.
    • endRevision - Ending repository revision.
    Returns:
    Absolute path to the backup file associated with the repository.

    _getRepositoryPaths(repositoryDir)

    source code 

    Gets a list of child repository paths within a repository directory.

    Parameters:
• repositoryDir - RepositoryDir object.

    _getExclusions(repositoryDir)

    source code 

Gets exclusions (file and patterns) associated with a repository directory.

    The returned files value is a list of absolute paths to be excluded from the backup for a given directory. It is derived from the repository directory's relative exclude paths.

    The returned patterns value is a list of patterns to be excluded from the backup for a given directory. It is derived from the repository directory's list of patterns.

    Parameters:
    • repositoryDir - Repository directory object.
    Returns:
    Tuple (files, patterns) indicating what to exclude.

    _backupRepository(config, local, todayIsStart, fullBackup, repository)

    source code 

    Backs up an individual Subversion repository.

    This internal method wraps the public methods and adds some functionality to work better with the extended action itself.

    Parameters:
    • config - Cedar Backup configuration.
    • local - Local configuration
    • todayIsStart - Indicates whether today is start of week
    • fullBackup - Full backup flag
    • repository - Repository to operate on
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the Subversion dump.

    _getOutputFile(backupPath, compressMode)

    source code 

    Opens the output file used for saving the Subversion dump.

    If the compress mode is "gzip", we'll open a GzipFile, and if the compress mode is "bzip2", we'll open a BZ2File. Otherwise, we'll just return an object from the normal open() method.

    Parameters:
    • backupPath - Path to file to open.
• compressMode - Compress mode of file ("none", "gzip", "bzip2").
    Returns:
    Output file object.
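The open-by-compress-mode behavior described above can be sketched with the standard library's gzip and bz2 modules. This is a simplified standalone version; the real implementation is CedarBackup2.extend.subversion._getOutputFile().

```python
import bz2
import gzip

def getOutputFile(backupPath, compressMode):
    """Open the backup output file, compressed according to compressMode.

    Sketch of the documented behavior: GzipFile for "gzip", BZ2File for
    "bzip2", and a plain open() for anything else.
    """
    if compressMode == "gzip":
        return gzip.GzipFile(backupPath, "wb")
    elif compressMode == "bzip2":
        return bz2.BZ2File(backupPath, "wb")
    else:
        return open(backupPath, "wb")
```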

    _loadLastRevision(revisionPath)

    source code 

    Loads the indicated revision file from disk into an integer.

    If we can't load the revision file successfully (either because it doesn't exist or for some other reason), then a revision of -1 will be returned - but the condition will be logged. This way, we err on the side of backing up too much, because anyone using this will presumably be adding 1 to the revision, so they don't duplicate any backups.

    Parameters:
    • revisionPath - Path to the revision file on disk.
    Returns:
    Integer representing last backed-up revision, -1 on error or if none can be read.
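The "-1 on any failure, but log it" fallback described above can be sketched as follows. This is an illustration assuming a plain-text revision file; the on-disk format used by the real _loadLastRevision() may differ.

```python
import logging

logger = logging.getLogger("CedarBackup2.log.extend.subversion")

def loadLastRevision(revisionPath):
    """Read the last backed-up revision from disk, or -1 on any failure.

    Returning -1 errs on the side of backing up too much: callers add 1
    to this value, so a fresh start begins at revision 0.
    """
    try:
        with open(revisionPath) as f:
            return int(f.read().strip())
    except (IOError, OSError, ValueError):
        logger.info("Revision file [%s] cannot be read; assuming -1.", revisionPath)
        return -1
```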

    _writeLastRevision(config, revisionPath, endRevision)

    source code 

    Writes the end revision to the indicated revision file on disk.

    If we can't write the revision file successfully for any reason, we'll log the condition but won't throw an exception.

    Parameters:
    • config - Config object.
    • revisionPath - Path to the revision file on disk.
    • endRevision - Last revision backed up on this run.

    backupRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)

    source code 

    Backs up an individual Subversion repository.

    The starting and ending revision values control an incremental backup. If the starting revision is not passed in, then revision zero (the start of the repository) is assumed. If the ending revision is not passed in, then the youngest revision in the database will be used as the endpoint.

The backup data will be written into the passed-in backup file. Normally, this would be an object as returned from open, but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Parameters:
    • repositoryPath (String path representing Subversion repository on disk.) - Path to Subversion repository to back up
    • backupFile (Python file object as from open() or file().) - Python file object to use for writing backup.
    • startRevision (Integer value >= 0.) - Starting repository revision to back up (for incremental backups)
    • endRevision (Integer value >= 0.) - Ending repository revision to back up (for incremental backups)
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the Subversion dump.
    Notes:
    • This function should either be run as root or as the owner of the Subversion repository.
    • It is apparently not a good idea to interrupt this function. Sometimes, this leaves the repository in a "wedged" state, which requires recovery using svnadmin recover.
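Putting the start/end semantics above together with the revision file, an incremental run picks its revision range roughly as in the sketch below. incrementalRange() is a hypothetical helper; the real bookkeeping lives in _backupRepository().

```python
def incrementalRange(lastRevision, youngest):
    """Return the (start, end) revision range for the next incremental dump.

    lastRevision comes from the revision file (-1 when none exists), and
    youngest from getYoungestRevision().  Returns None when there is
    nothing new to back up.
    """
    start = lastRevision + 1       # -1 on a fresh start yields revision 0
    if start > youngest:
        return None
    return (start, youngest)
```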

    getYoungestRevision(repositoryPath)

    source code 

    Gets the youngest (newest) revision in a Subversion repository using svnlook.

    Parameters:
    • repositoryPath (String path representing Subversion repository on disk.) - Path to Subversion repository to look in.
    Returns:
    Youngest revision as an integer.
    Raises:
    • ValueError - If there is a problem parsing the svnlook output.
    • IOError - If there is a problem executing the svnlook command.

    Note: This function should either be run as root or as the owner of the Subversion repository.
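The parsing step that can raise ValueError above amounts to converting the `svnlook youngest` output to an integer, roughly as sketched below (parseYoungestOutput is a hypothetical helper; the real function also runs the SVNLOOK_COMMAND itself).

```python
def parseYoungestOutput(output):
    """Parse 'svnlook youngest' output into an integer revision."""
    try:
        return int(output.strip())
    except ValueError:
        raise ValueError("Unable to parse revision from svnlook output: %r" % output)
```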

    backupBDBRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)

    source code 

    Backs up an individual Subversion BDB repository. This function is deprecated. Use backupRepository instead.

    backupFSFSRepository(repositoryPath, backupFile, startRevision=None, endRevision=None)

    source code 

    Backs up an individual Subversion FSFS repository. This function is deprecated. Use backupRepository instead.


CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.CommandOverride-class.html
CedarBackup2.config.CommandOverride
    Package CedarBackup2 :: Module config :: Class CommandOverride

    Class CommandOverride

    source code

    object --+
             |
            CommandOverride
    

    Class representing a piece of Cedar Backup command override configuration.

    The following restrictions exist on data in this class:

    • The absolute path must be absolute

    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods
     
    __init__(self, command=None, absolutePath=None)
    Constructor for the CommandOverride class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setCommand(self, value)
    Property target used to set the command.
    source code
     
    _getCommand(self)
    Property target used to get the command.
    source code
     
    _setAbsolutePath(self, value)
    Property target used to set the absolute path.
    source code
     
    _getAbsolutePath(self)
    Property target used to get the absolute path.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      command
    Name of command to be overridden.
      absolutePath
Absolute path of the overridden command.

    Inherited from object: __class__

Method Details

    __init__(self, command=None, absolutePath=None)
    (Constructor)

    source code 

    Constructor for the CommandOverride class.

    Parameters:
    • command - Name of command to be overridden.
• absolutePath - Absolute path of the overridden command.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCommand(self, value)

    source code 

    Property target used to set the command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setAbsolutePath(self, value)

    source code 

    Property target used to set the absolute path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.
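The validation rule above (None is allowed; anything else must be an absolute path, which need not exist on disk) can be sketched as a standalone function. The encoding check is omitted here for brevity.

```python
import os

def setAbsolutePath(value):
    """Validate an absolute-path property value per the rules above."""
    if value is not None and not os.path.isabs(value):
        raise ValueError("Not an absolute path: %r" % value)
    return value
```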

    Property Details [hide private]

    command

    Name of command to be overridden.

    Get Method:
    _getCommand(self) - Property target used to get the command.
    Set Method:
    _setCommand(self, value) - Property target used to set the command.

    absolutePath

Absolute path of the overridden command.

    Get Method:
    _getAbsolutePath(self) - Property target used to get the absolute path.
    Set Method:
    _setAbsolutePath(self, value) - Property target used to set the absolute path.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.ActionHook-class.html
CedarBackup2.config.ActionHook
    Package CedarBackup2 :: Module config :: Class ActionHook

    Class ActionHook

    source code

    object --+
             |
            ActionHook
    
    Known Subclasses:

    Class representing a hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string matching ACTION_NAME_REGEX
    • The shell command must be a non-empty string.

    The internal before and after instance variables are always set to False in this parent class.

Instance Methods
     
    __init__(self, action=None, command=None)
    Constructor for the ActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setAction(self, value)
    Property target used to set the action name.
    source code
     
    _getAction(self)
    Property target used to get the action name.
    source code
     
    _setCommand(self, value)
    Property target used to set the command.
    source code
     
    _getCommand(self)
    Property target used to get the command.
    source code
     
    _getBefore(self)
    Property target used to get the before flag.
    source code
     
    _getAfter(self)
    Property target used to get the after flag.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      action
    Action this hook is associated with.
      command
    Shell command to execute.
      before
    Indicates whether command should be executed before action.
      after
    Indicates whether command should be executed after action.

    Inherited from object: __class__

Method Details

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the ActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setAction(self, value)

    source code 

    Property target used to set the action name. The value must be a non-empty string if it is not None. It must also consist only of lower-case letters and digits.

    Raises:
    • ValueError - If the value is an empty string.
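The action-name rule above (non-empty, lower-case letters and digits only) can be sketched with a regular expression. ACTION_NAME_REGEX itself is defined in CedarBackup2.config; the pattern below merely restates the documented rule for illustration.

```python
import re

_ACTION_RE = re.compile(r"^[a-z0-9]+$")

def setAction(value):
    """Validate an action-name property value per the rules above."""
    if value is not None and not _ACTION_RE.match(value):
        raise ValueError("Action name must be lower-case letters and digits: %r" % value)
    return value
```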

    _setCommand(self, value)

    source code 

    Property target used to set the command. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    Property Details [hide private]

    action

    Action this hook is associated with.

    Get Method:
    _getAction(self) - Property target used to get the action name.
    Set Method:
    _setAction(self, value) - Property target used to set the action name.

    command

    Shell command to execute.

    Get Method:
    _getCommand(self) - Property target used to get the command.
    Set Method:
    _setCommand(self, value) - Property target used to set the command.

    before

    Indicates whether command should be executed before action.

    Get Method:
    _getBefore(self) - Property target used to get the before flag.

    after

    Indicates whether command should be executed after action.

    Get Method:
    _getAfter(self) - Property target used to get the after flag.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.peer.RemotePeer-class.html
CedarBackup2.peer.RemotePeer
    Package CedarBackup2 :: Module peer :: Class RemotePeer

    Class RemotePeer

    source code

    object --+
             |
            RemotePeer
    

    Backup peer representing a remote peer in a backup pool.

    This is a class representing a remote (networked) peer in a backup pool. Remote peers are backed up using an rcp-compatible copy command. A remote peer has associated with it a name (which must be a valid hostname), a collect directory, a working directory and a copy method (an rcp-compatible command).

    You can also set an optional local user value. This username will be used as the local user for any remote copies that are required. It can only be used if the root user is executing the backup. The root user will su to the local user and execute the remote copies as that user.

    The copy method is associated with the peer and not with the actual request to copy, because we can envision that each remote host might have a different connect method.

    The public methods other than the constructor are part of a "backup peer" interface shared with the LocalPeer class.

Instance Methods
     
    __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None)
    Initializes a remote backup peer.
    source code
     
    stagePeer(self, targetDir, ownership=None, permissions=None)
    Stages data from the peer into the indicated local target directory.
    source code
     
    checkCollectIndicator(self, collectIndicator=None)
    Checks the collect indicator in the peer's staging directory.
    source code
     
    writeStageIndicator(self, stageIndicator=None)
    Writes the stage indicator in the peer's staging directory.
    source code
     
    executeRemoteCommand(self, command)
    Executes a command on the peer via remote shell.
    source code
     
    executeManagedAction(self, action, fullBackup)
    Executes a managed action on this peer.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setWorkingDir(self, value)
    Property target used to set the working directory.
    source code
     
    _getWorkingDir(self)
    Property target used to get the working directory.
    source code
     
    _setRemoteUser(self, value)
    Property target used to set the remote user.
    source code
     
    _getRemoteUser(self)
    Property target used to get the remote user.
    source code
     
    _setLocalUser(self, value)
    Property target used to set the local user.
    source code
     
    _getLocalUser(self)
    Property target used to get the local user.
    source code
     
    _setRcpCommand(self, value)
    Property target to set the rcp command.
    source code
     
    _getRcpCommand(self)
    Property target used to get the rcp command.
    source code
     
    _setRshCommand(self, value)
    Property target to set the rsh command.
    source code
     
    _getRshCommand(self)
    Property target used to get the rsh command.
    source code
     
    _setCbackCommand(self, value)
    Property target to set the cback command.
    source code
     
    _getCbackCommand(self)
    Property target used to get the cback command.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Static Methods
     
    _getDirContents(path)
    Returns the contents of a directory in terms of a Set.
    source code
     
    _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None)
    Copies files from the source directory to the target directory.
    source code
     
    _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True)
    Copies a remote source file to a target file.
    source code
     
    _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True)
    Copies a local source file to a remote host.
    source code
     
    _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand)
    Executes a command on the peer via remote shell.
    source code
     
    _buildCbackCommand(cbackCommand, action, fullBackup)
    Builds a Cedar Backup command line for the named action.
    source code
Properties
      name
    Name of the peer (a valid DNS hostname).
      collectDir
    Path to the peer's collect directory (an absolute local path).
      remoteUser
    Name of the Cedar Backup user on the remote peer.
      rcpCommand
    An rcp-compatible copy command to use for copying files.
      rshCommand
    An rsh-compatible command to use for remote shells to the peer.
      cbackCommand
A cback-compatible command to use for executing managed actions.
      workingDir
    Path to the peer's working directory (an absolute local path).
      localUser
    Name of the Cedar Backup user on the current host.
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name=None, collectDir=None, workingDir=None, remoteUser=None, rcpCommand=None, localUser=None, rshCommand=None, cbackCommand=None, ignoreFailureMode=None)
    (Constructor)

    source code 

    Initializes a remote backup peer.

    Parameters:
    • name (String, must be a valid DNS hostname) - Name of the backup peer
    • collectDir (String representing an absolute path on the remote peer) - Path to the peer's collect directory
    • workingDir (String representing an absolute path on the current host.) - Working directory that can be used to create temporary files, etc.
    • remoteUser (String representing a username, valid via remote shell to the peer) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
• rshCommand (String representing a system command including required arguments) - An rsh-compatible command to use for remote shells to the peer
• cbackCommand (String representing a system command including required arguments) - A cback-compatible command to use for executing managed actions
    • ignoreFailureMode (One of VALID_FAILURE_MODES) - Ignore failure mode for this peer
    Raises:
    • ValueError - If collect directory is not an absolute path
    Overrides: object.__init__

    Note: If provided, each command will eventually be parsed into a list of strings suitable for passing to util.executeCommand in order to avoid security holes related to shell interpolation. This parsing will be done by the util.splitCommandLine function. See the documentation for that function for some important notes about its limitations.
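The parsing the note describes (turning a configured command string into an argument list so nothing is ever handed to a shell) is done by util.splitCommandLine; the standard library's shlex.split() illustrates the same idea.

```python
import shlex

# A configured rcp command string becomes a list of arguments that can
# be passed to the OS directly, avoiding shell interpolation entirely.
command = shlex.split('/usr/bin/scp -B -q -o ConnectTimeout=10')
# command is now ['/usr/bin/scp', '-B', '-q', '-o', 'ConnectTimeout=10']
```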

    stagePeer(self, targetDir, ownership=None, permissions=None)

    source code 

    Stages data from the peer into the indicated local target directory.

    The target directory must already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied.

    Parameters:
    • targetDir (String representing a directory on disk) - Target directory to write data into
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the staged files should have
• permissions (UNIX permissions mode, specified in octal (e.g. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If target directory is not a directory, does not exist or is not absolute.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there were no files to stage (i.e. the directory was empty)
    • IOError - If there is an IO error copying a file.
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • Unlike the local peer version of this method, an I/O error might or might not be raised if the directory is empty. Since we're using a remote copy method, we just don't have the fine-grained control over our exceptions that's available when we can look directly at the filesystem, and we can't control whether the remote copy method thinks an empty directory is an error.

    checkCollectIndicator(self, collectIndicator=None)

    source code 

    Checks the collect indicator in the peer's staging directory.

    When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. If the remote copy command fails, we return False as if the file weren't there.

    If you need to, you can override the name of the collect indicator file by passing in a different name.

    Parameters:
    • collectIndicator (String representing name of a file in the collect directory) - Name of the collect indicator file to check
    Returns:
    Boolean true/false depending on whether the indicator exists.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    Note: Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. Because of this, the implementation of this method is rather convoluted.

    writeStageIndicator(self, stageIndicator=None)

    source code 

    Writes the stage indicator in the peer's staging directory.

    When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete.

    If you need to, you can override the name of the stage indicator file by passing in a different name.

    Parameters:
    • stageIndicator (String representing name of a file in the collect directory) - Name of the indicator file to write
    Raises:
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error creating the file.
    • OSError - If there is an OS error creating or changing permissions on the file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.


    executeRemoteCommand(self, command)

    source code 

    Executes a command on the peer via remote shell.

    Parameters:
    • command (String command-line suitable for use with rsh.) - Command to execute
    Raises:
    • IOError - If there is an error executing the command on the remote peer.

    executeManagedAction(self, action, fullBackup)

    source code 

    Executes a managed action on this peer.

    Parameters:
    • action - Name of the action to execute.
    • fullBackup - Whether a full backup should be executed.
    Raises:
    • IOError - If there is an error executing the action on the remote peer.

    _getDirContents(path)
    Static Method

    source code 

    Returns the contents of a directory in terms of a Set.

    The directory's contents are read as a FilesystemList containing only files, and then the list is converted into a set object for later use.

    Parameters:
    • path (String representing a path on disk) - Directory path to get contents for
    Returns:
    Set of files in the directory
    Raises:
    • ValueError - If path is not a directory or does not exist.

    _copyRemoteDir(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceDir, targetDir, ownership=None, permissions=None)
    Static Method

    source code 

    Copies files from the source directory to the target directory.

    This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. Behavior when copying soft links from the collect directory is dependent on the behavior of the specified rcp command.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceDir (String representing a directory on disk) - Source directory
    • targetDir (String representing a directory on disk) - Target directory
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied files should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640)) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If source or target is not a directory or does not exist.
    • IOError - If there is an IO error copying the files.
    Notes:
    • The returned count of copied files might be inaccurate if some of the copied files already existed in the staging directory prior to the copy taking place. We don't clear the staging directory first, because some extension might also be using it.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We don't have a good way of knowing exactly what files we copied down from the remote peer, unless we want to parse the output of the rcp command (ugh). We could change permissions on everything in the target directory, but that's kind of ugly too. Instead, we use Python's set functionality to figure out what files were added while we executed the rcp command. This isn't perfect - for instance, it's not correct if someone else is messing with the directory at the same time we're doing the remote copy - but it's about as good as we're going to get.
    • Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by issuing IOError if we don't copy any files from the remote host.
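The third note describes a set-difference technique for figuring out which files were copied. A minimal sketch of that idea (the function name is illustrative; the real code builds its snapshots with a FilesystemList rather than os.listdir):

```python
import os

def newFilesAfter(directory, before):
   """Return the set of files added to a directory since a snapshot was taken.

   Snapshot the target directory before running the rcp command, snapshot it
   again afterwards, and treat the difference as "files we copied".  As the
   notes say, this is approximate if another process writes to the directory
   concurrently.
   """
   after = set(os.listdir(directory))
   return after - before
```

A caller would take `before = set(os.listdir(targetDir))`, run the copy, then raise IOError if the returned set is empty, mirroring the workaround described in the last note.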

    _copyRemoteFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, ownership=None, permissions=None, overwrite=True)
    Static Method

    source code 

    Copies a remote source file to a target file.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied file should have
    • permissions (UNIX permissions mode, specified in octal (e.g. 0640)) - Permissions that the staged files should have
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • IOError - If the target file already exists.
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error changing permissions on the file
    Notes:
    • Internally, we have to go through and escape any spaces in the source path with double-backslash, otherwise things get screwed up. It doesn't seem to be required in the target path. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.
    • Apparently, we can't count on all rcp-compatible implementations to return sensible errors for some error conditions. As an example, the scp command in Debian 'woody' returns a zero (normal) status even when it can't find a host or if the login or path is invalid. We try to work around this by raising IOError if the target file does not exist when we're done.
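The space-escaping note can be made concrete. This is a minimal sketch, assuming OpenSSH-style behavior where a backslash-escaped space survives the remote shell's word splitting; the function name is illustrative:

```python
def escapeSourcePath(path):
   """Escape spaces in a remote source path for an scp-style command line.

   The remote side of an scp transfer passes the path through a shell, so an
   unescaped space would split the path into two arguments.  Whether this
   works for rcp tools other than OpenSSH is an open question, per the note.
   """
   return path.replace(" ", "\\ ")
```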

    _pushLocalFile(remoteUser, localUser, remoteHost, rcpCommand, rcpCommandList, sourceFile, targetFile, overwrite=True)
    Static Method

    source code 

    Copies a local source file to a remote host.

    Parameters:
    • remoteUser (String representing a username, valid via the copy command) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rcpCommand (String representing a system command including required arguments) - An rcp-compatible copy command to use for copying files from the peer
    • rcpCommandList (Command as a list to be passed to util.executeCommand) - An rcp-compatible copy command to use for copying files
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error changing permissions on the file
    Notes:
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.
    • Internally, we have to go through and escape any spaces in the source and target paths with double-backslash, otherwise things get screwed up. I hope this is portable to various different rcp methods, but I guess it might not be (all I have to test with is OpenSSH).
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.
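All of the _setX/_getX pairs documented here follow the same property-target idiom. A minimal sketch of how the name property could be wired up, using illustrative names rather than the real class:

```python
class PeerSketch(object):
   """Sketch of the property-target idiom: validation happens on assignment."""

   def __init__(self, name):
      self._name = None
      self.name = name   # routed through _setName(), so validation applies

   def _setName(self, value):
      """Property target used to set the peer name."""
      if value is None or value == "":
         raise ValueError("Peer name must be a non-empty string.")
      self._name = value

   def _getName(self):
      """Property target used to get the peer name."""
      return self._name

   name = property(_getName, _setName, None, "Name of the peer.")
```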

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path and cannot be None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setWorkingDir(self, value)

    source code 

    Property target used to set the working directory. The value must be an absolute path and cannot be None.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setRemoteUser(self, value)

    source code 

    Property target used to set the remote user. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setLocalUser(self, value)

    source code 

    Property target used to set the local user. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.

    _setRcpCommand(self, value)

    source code 

    Property target to set the rcp command.

    The value must be a non-empty string or None. Its value is stored in two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to util.executeCommand via util.splitCommandLine.

    However, all the caller will ever see via the property is the actual value they set (which includes seeing None, even if we translate that internally to DEF_RCP_COMMAND). Internally, we should always use self._rcpCommandList if we want the actual command list.

    Raises:
    • ValueError - If the value is an empty string.

    _setRshCommand(self, value)

    source code 

    Property target to set the rsh command.

    The value must be a non-empty string or None. Its value is stored in two forms: "raw" as provided by the client, and "parsed" into a list suitable for being passed to util.executeCommand via util.splitCommandLine.

    However, all the caller will ever see via the property is the actual value they set (which includes seeing None, even if we translate that internally to DEF_RSH_COMMAND). Internally, we should always use self._rshCommandList if we want the actual command list.

    Raises:
    • ValueError - If the value is an empty string.

    _setCbackCommand(self, value)

    source code 

    Property target to set the cback command.

    The value must be a non-empty string or None. Unlike the other commands, this value is stored only in the "raw" form provided by the client.

    Raises:
    • ValueError - If the value is an empty string.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _executeRemoteCommand(remoteUser, localUser, remoteHost, rshCommand, rshCommandList, remoteCommand)
    Static Method

    source code 

    Executes a command on the peer via remote shell.

    Parameters:
    • remoteUser (String representing a username, valid on the remote host) - Name of the Cedar Backup user on the remote peer
    • localUser (String representing a username, valid on the current host) - Name of the Cedar Backup user on the current host
    • remoteHost (String representing a hostname, accessible via the copy command) - Hostname of the remote peer
    • rshCommand (String representing a system command including required arguments) - An rsh-compatible command to use for remote shells to the peer
    • rshCommandList (Command as a list to be passed to util.executeCommand) - An rsh-compatible command to use for remote shells to the peer
    • remoteCommand (String command-line, with no special shell characters ($, <, etc.)) - The command to be executed on the remote host
    Raises:
    • IOError - If there is an error executing the remote command

    _buildCbackCommand(cbackCommand, action, fullBackup)
    Static Method

    source code 

    Builds a Cedar Backup command line for the named action.

    Parameters:
    • cbackCommand - cback command to execute, including required options
    • action - Name of the action to execute.
    • fullBackup - Whether a full backup should be executed.
    Returns:
    String suitable for passing to _executeRemoteCommand as remoteCommand.
    Raises:
    • ValueError - If action is None.

    Note: If the cback command is None, then DEF_CBACK_COMMAND is used.
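Given the note about the default command, the construction can be sketched as follows. The DEF_CBACK_COMMAND value and the exact placement of the --full flag are assumptions here, not taken from the source:

```python
def buildCbackCommand(cbackCommand, action, fullBackup):
   """Sketch of building a cback command line for a managed action."""
   if action is None:
      raise ValueError("Action cannot be None.")
   if cbackCommand is None:
      cbackCommand = "/usr/bin/cback"   # stand-in for DEF_CBACK_COMMAND
   if fullBackup:
      return "%s --full %s" % (cbackCommand, action)
   return "%s %s" % (cbackCommand, action)
```

The returned string is then suitable for handing to the remote-shell helper as a single command line.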


    Property Details

    name

    Name of the peer (a valid DNS hostname).

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Path to the peer's collect directory (an absolute local path).

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    remoteUser

    Name of the Cedar Backup user on the remote peer.

    Get Method:
    _getRemoteUser(self) - Property target used to get the remote user.
    Set Method:
    _setRemoteUser(self, value) - Property target used to set the remote user.

    rcpCommand

    An rcp-compatible copy command to use for copying files.

    Get Method:
    _getRcpCommand(self) - Property target used to get the rcp command.
    Set Method:
    _setRcpCommand(self, value) - Property target to set the rcp command.

    rshCommand

    An rsh-compatible command to use for remote shells to the peer.

    Get Method:
    _getRshCommand(self) - Property target used to get the rsh command.
    Set Method:
    _setRshCommand(self, value) - Property target to set the rsh command.

    cbackCommand

    A cback-compatible command to use for executing managed actions.

    Get Method:
    _getCbackCommand(self) - Property target used to get the cback command.
    Set Method:
    _setCbackCommand(self, value) - Property target to set the cback command.

    workingDir

    Path to the peer's working directory (an absolute local path).

    Get Method:
    _getWorkingDir(self) - Property target used to get the working directory.
    Set Method:
    _setWorkingDir(self, value) - Property target used to set the working directory.

    localUser

    Name of the Cedar Backup user on the current host.

    Get Method:
    _getLocalUser(self) - Property target used to get the local user.
    Set Method:
    _setLocalUser(self, value) - Property target used to set the local user.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

    Package CedarBackup2 :: Module util :: Class RegexList

    Class RegexList

    source code

    object --+        
             |        
          list --+    
                 |    
     UnorderedList --+
                     |
                    RegexList
    

    Class representing a list of valid regular expression strings.

    This is an unordered list.

    We override the append, insert and extend methods to ensure that any item added to the list is a valid regular expression.

    Instance Methods

    append(self, item)
    Overrides the standard append method.

    insert(self, index, item)
    Overrides the standard insert method.

    extend(self, seq)
    Overrides the standard extend method.

    Inherited from UnorderedList: __eq__, __ge__, __gt__, __le__, __lt__, __ne__

    Inherited from list: __add__, __contains__, __delitem__, __delslice__, __getattribute__, __getitem__, __getslice__, __iadd__, __imul__, __init__, __iter__, __len__, __mul__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, count, index, pop, remove, reverse, sort

    Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

    Class Variables

    Inherited from list: __hash__

    Properties

    Inherited from object: __class__

    Method Details

    append(self, item)


    Overrides the standard append method.

    Raises:
    • ValueError - If item is not a valid regular expression.
    Overrides: list.append

    insert(self, index, item)


    Overrides the standard insert method.

    Raises:
    • ValueError - If item is not a valid regular expression.
    Overrides: list.insert

    extend(self, seq)


    Overrides the standard extend method.

    Raises:
    • ValueError - If any item is not a valid regular expression.
    Overrides: list.extend
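The validation these overrides provide can be sketched in a few lines. This is an illustrative stand-in, not the real class: only append() is shown, and the real RegexList also inherits unordered equality comparisons from UnorderedList:

```python
import re

class RegexListSketch(list):
   """List subclass that only accepts valid regular expression strings."""

   def append(self, item):
      try:
         re.compile(item)   # the validity test: compile must succeed
      except (re.error, TypeError):
         raise ValueError("Not a valid regular expression: %r" % item)
      super(RegexListSketch, self).append(item)
```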

    Package CedarBackup2 :: Package extend :: Module subversion :: Class SubversionConfig

    Class SubversionConfig


    object --+
             |
            SubversionConfig
    

    Class representing Subversion configuration.

    Subversion configuration is used for backing up Subversion repositories.

    The following restrictions exist on data in this class:

    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The repositories list must be a list of Repository objects.
    • The repositoryDirs list must be a list of RepositoryDir objects.

    For the two lists, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element has the correct type.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods

    __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None)
    Constructor for the SubversionConfig class.

    __repr__(self)
    Official string representation for class instance.

    __str__(self)
    Informal string representation for class instance.

    __cmp__(self, other)
    Definition of equals operator for this class.

    _setCollectMode(self, value)
    Property target used to set the collect mode.

    _getCollectMode(self)
    Property target used to get the collect mode.

    _setCompressMode(self, value)
    Property target used to set the compress mode.

    _getCompressMode(self)
    Property target used to get the compress mode.

    _setRepositories(self, value)
    Property target used to set the repositories list.

    _getRepositories(self)
    Property target used to get the repositories list.

    _setRepositoryDirs(self, value)
    Property target used to set the repositoryDirs list.

    _getRepositoryDirs(self)
    Property target used to get the repositoryDirs list.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      collectMode
    Default collect mode.
      compressMode
    Default compress mode.
      repositories
    List of Subversion repositories to back up.
      repositoryDirs
    List of Subversion parent directories to back up.

    Inherited from object: __class__

    Method Details

    __init__(self, collectMode=None, compressMode=None, repositories=None, repositoryDirs=None)
    (Constructor)


    Constructor for the SubversionConfig class.

    Parameters:
    • collectMode - Default collect mode.
    • compressMode - Default compress mode.
    • repositories - List of Subversion repositories to back up.
    • repositoryDirs - List of Subversion parent directories to back up.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
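The "unordered" comparison described above can be illustrated with a small helper. This is a sketch of the idea only, not the library's implementation, which handles it inside the list classes themselves:

```python
def unorderedEqual(listA, listB):
   """Compare two lists for equality while ignoring element order."""
   return sorted(listA) == sorted(listB)
```

Two SubversionConfig objects whose repositories lists contain the same elements in different orders would thus still compare as equal.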

    _setCollectMode(self, value)


    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)


    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setRepositories(self, value)


    Property target used to set the repositories list. Either the value must be None or each element must be a Repository.

    Raises:
    • ValueError - If any element is not a Repository.

    _setRepositoryDirs(self, value)


    Property target used to set the repositoryDirs list. Either the value must be None or each element must be a RepositoryDir.

    Raises:
    • ValueError - If any element is not a RepositoryDir.

    Property Details

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Default compress mode.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositories

    List of Subversion repositories to back up.

    Get Method:
    _getRepositories(self) - Property target used to get the repositories list.
    Set Method:
    _setRepositories(self, value) - Property target used to set the repositories list.

    repositoryDirs

    List of Subversion parent directories to back up.

    Get Method:
    _getRepositoryDirs(self) - Property target used to get the repositoryDirs list.
    Set Method:
    _setRepositoryDirs(self, value) - Property target used to set the repositoryDirs list.

    Package CedarBackup2 :: Package writers :: Module util

    Source Code for Module CedarBackup2.writers.util

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Provides utilities related to image writers. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides utilities related to image writers. 
     40  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     41  """ 
     42   
     43   
     44  ######################################################################## 
     45  # Imported modules 
     46  ######################################################################## 
     47   
     48  # System modules 
     49  import os 
     50  import re 
     51  import logging 
     52   
     53  # Cedar Backup modules 
     54  from CedarBackup2.util import resolveCommand, executeCommand 
     55  from CedarBackup2.util import convertSize, UNIT_BYTES, UNIT_SECTORS, encodePath 
     56   
     57   
     58  ######################################################################## 
     59  # Module-wide constants and variables 
     60  ######################################################################## 
     61   
     62  logger = logging.getLogger("CedarBackup2.log.writers.util") 
     63   
     64  MKISOFS_COMMAND      = [ "mkisofs", ] 
     65  VOLNAME_COMMAND      = [ "volname", ] 
    
    66 67 68 ######################################################################## 69 # Functions used to portably validate certain kinds of values 70 ######################################################################## 71 72 ############################ 73 # validateDevice() function 74 ############################ 75 76 -def validateDevice(device, unittest=False):
    77 """ 78 Validates a configured device. 79 The device must be an absolute path, must exist, and must be writable. 80 The unittest flag turns off validation of the device on disk. 81 @param device: Filesystem device path. 82 @param unittest: Indicates whether we're unit testing. 83 @return: Device as a string, for instance C{"/dev/cdrw"} 84 @raise ValueError: If the device value is invalid. 85 @raise ValueError: If some path cannot be encoded properly. 86 """ 87 if device is None: 88 raise ValueError("Device must be filled in.") 89 device = encodePath(device) 90 if not os.path.isabs(device): 91 raise ValueError("Backup device must be an absolute path.") 92 if not unittest and not os.path.exists(device): 93 raise ValueError("Backup device must exist on disk.") 94 if not unittest and not os.access(device, os.W_OK): 95 raise ValueError("Backup device is not writable by the current user.") 96 return device
    97
    98 99 ############################ 100 # validateScsiId() function 101 ############################ 102 103 -def validateScsiId(scsiId):
    104 """ 105 Validates a SCSI id string. 106 SCSI id must be a string in the form C{[<method>:]scsibus,target,lun}. 107 For Mac OS X (Darwin), we also accept the form C{IO.*Services[/N]}. 108 @note: For consistency, if C{None} is passed in, C{None} will be returned. 109 @param scsiId: SCSI id for the device. 110 @return: SCSI id as a string, for instance C{"ATA:1,0,0"} 111 @raise ValueError: If the SCSI id string is invalid. 112 """ 113 if scsiId is not None: 114 pattern = re.compile(r"^\s*(.*:)?\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*,\s*[0-9][0-9]*\s*$") 115 if not pattern.search(scsiId): 116 pattern = re.compile(r"^\s*IO.*Services(\/[0-9][0-9]*)?\s*$") 117 if not pattern.search(scsiId): 118 raise ValueError("SCSI id is not in a valid form.") 119 return scsiId
    120
    121 122 ################################ 123 # validateDriveSpeed() function 124 ################################ 125 126 -def validateDriveSpeed(driveSpeed):
    127 """ 128 Validates a drive speed value. 129 Drive speed must be an integer which is >= 1. 130 @note: For consistency, if C{None} is passed in, C{None} will be returned. 131 @param driveSpeed: Speed at which the drive writes. 132 @return: Drive speed as an integer 133 @raise ValueError: If the drive speed value is invalid. 134 """ 135 if driveSpeed is None: 136 return None 137 try: 138 intSpeed = int(driveSpeed) 139 except TypeError: 140 raise ValueError("Drive speed must be an integer >= 1.") 141 if intSpeed < 1: 142 raise ValueError("Drive speed must an integer >= 1.") 143 return intSpeed
    144
########################################################################
# General writer-related utility functions
########################################################################

############################
# readMediaLabel() function
############################

def readMediaLabel(devicePath):
   """
   Reads the media label (volume name) from the indicated device.
   The volume name is read using the C{volname} command.
   @param devicePath: Device path to read from
   @return: Media label as a string, or None if there is no name or it could not be read.
   """
   args = [ devicePath, ]
   command = resolveCommand(VOLNAME_COMMAND)
   (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
   if result != 0:
      return None
   if output is None or len(output) < 1:
      return None
   return output[0].rstrip()
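The call to `volname` itself cannot be run without media in a drive, but the result handling above is easy to isolate. A sketch of just that logic (the helper name is hypothetical), assuming `result` is the command exit status and `output` is a list of output lines as `executeCommand(..., returnOutput=True)` produces:

```python
def parse_volname_output(result, output):
    """Mirror of the result handling above: return the media label, or None."""
    if result != 0:
        return None  # volname failed, e.g. no media in the drive
    if output is None or len(output) < 1:
        return None  # command succeeded but printed no volume name
    return output[0].rstrip()  # first line, trailing newline stripped

print(parse_volname_output(0, ["BACKUP_MEDIA\n"]))  # a successful read
print(parse_volname_output(1, None))                # failure maps to None
```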
########################################################################
# IsoImage class definition
########################################################################

class IsoImage(object):

   ######################
   # Class documentation
   ######################

   """
   Represents an ISO filesystem image.

   Summary
   =======

   This object represents an ISO 9660 filesystem image. It is implemented
   in terms of the C{mkisofs} program, which has been ported to many
   operating systems and platforms. A "sensible subset" of the C{mkisofs}
   functionality is made available through the public interface, allowing
   callers to set a variety of basic options such as publisher id,
   application id, etc. as well as specify exactly which files and
   directories they want included in their image.

   By default, the image is created using the Rock Ridge protocol (using the
   C{-r} option to C{mkisofs}) because Rock Ridge discs are generally more
   useful on UN*X filesystems than standard ISO 9660 images. However,
   callers can fall back to the default C{mkisofs} functionality by setting
   the C{useRockRidge} instance variable to C{False}. Note, however, that
   this option is not well-tested.

   Where Files and Directories are Placed in the Image
   ===================================================

   Although this class is implemented in terms of the C{mkisofs} program,
   its standard "image contents" semantics are slightly different from the
   original C{mkisofs} semantics. The difference is that files and
   directories are added to the image with some additional information
   about their source directory kept intact.

   As an example, suppose you add the file C{/etc/profile} to your image and
   you do not configure a graft point. The file C{/profile} will be created
   in the image. The behavior for directories is similar. For instance,
   suppose that you add C{/etc/X11} to the image and do not configure a
   graft point. In this case, the directory C{/X11} will be created in the
   image, even if the original C{/etc/X11} directory is empty.
   I{This behavior differs from the standard C{mkisofs} behavior!}

   If a graft point is configured, it will be used to modify the point at
   which a file or directory is added into an image. Using the examples
   from above, let's assume you set a graft point of C{base} when adding
   C{/etc/profile} and C{/etc/X11} to your image. In this case, the file
   C{/base/profile} and the directory C{/base/X11} would be added to the
   image.

   I feel that this behavior is more consistent than the original C{mkisofs}
   behavior. However, to be fair, it is not quite as flexible, and some
   users might not like it. For this reason, the C{contentsOnly} parameter
   to the L{addEntry} method can be used to revert to the original behavior
   if desired.

   @sort: __init__, addEntry, getEstimatedSize, _getEstimatedSize, writeImage,
          _buildDirEntries, _buildGeneralArgs, _buildSizeArgs, _buildWriteArgs,
          device, boundaries, graftPoint, useRockRidge, applicationId,
          biblioFile, publisherId, preparerId, volumeId
   """

   ##############
   # Constructor
   ##############

   def __init__(self, device=None, boundaries=None, graftPoint=None):
    243 """ 244 Initializes an empty ISO image object. 245 246 Only the most commonly-used configuration items can be set using this 247 constructor. If you have a need to change the others, do so immediately 248 after creating your object. 249 250 The device and boundaries values are both required in order to write 251 multisession discs. If either is missing or C{None}, a multisession disc 252 will not be written. The boundaries tuple is in terms of ISO sectors, as 253 built by an image writer class and returned in a L{writer.MediaCapacity} 254 object. 255 256 @param device: Name of the device that the image will be written to 257 @type device: Either be a filesystem path or a SCSI address 258 259 @param boundaries: Session boundaries as required by C{mkisofs} 260 @type boundaries: Tuple C{(last_sess_start,next_sess_start)} as returned from C{cdrecord -msinfo}, or C{None} 261 262 @param graftPoint: Default graft point for this page. 263 @type graftPoint: String representing a graft point path (see L{addEntry}). 264 """ 265 self._device = None 266 self._boundaries = None 267 self._graftPoint = None 268 self._useRockRidge = True 269 self._applicationId = None 270 self._biblioFile = None 271 self._publisherId = None 272 self._preparerId = None 273 self._volumeId = None 274 self.entries = { } 275 self.device = device 276 self.boundaries = boundaries 277 self.graftPoint = graftPoint 278 self.useRockRidge = True 279 self.applicationId = None 280 self.biblioFile = None 281 self.publisherId = None 282 self.preparerId = None 283 self.volumeId = None 284 logger.debug("Created new ISO image object.")
    285 286 287 ############# 288 # Properties 289 ############# 290
   def _setDevice(self, value):
      """
      Property target used to set the device value.
      If not C{None}, the value can be either an absolute path or a SCSI id.
      @raise ValueError: If the value is not valid
      """
      try:
         if value is None:
            self._device = None
         else:
            if os.path.isabs(value):
               self._device = value
            else:
               self._device = validateScsiId(value)
      except ValueError:
         raise ValueError("Device must either be an absolute path or a valid SCSI id.")

   def _getDevice(self):
      """
      Property target used to get the device value.
      """
      return self._device

   def _setBoundaries(self, value):
      """
      Property target used to set the boundaries tuple.
      If not C{None}, the value must be a tuple of two integers.
      @raise ValueError: If the tuple values are not integers.
      @raise IndexError: If the tuple does not contain enough elements.
      """
      if value is None:
         self._boundaries = None
      else:
         self._boundaries = (int(value[0]), int(value[1]))

   def _getBoundaries(self):
      """
      Property target used to get the boundaries value.
      """
      return self._boundaries

   def _setGraftPoint(self, value):
      """
      Property target used to set the graft point.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The graft point must be a non-empty string.")
      self._graftPoint = value

   def _getGraftPoint(self):
      """
      Property target used to get the graft point.
      """
      return self._graftPoint

   def _setUseRockRidge(self, value):
      """
      Property target used to set the use RockRidge flag.
      No validations, but we normalize the value to C{True} or C{False}.
      """
      if value:
         self._useRockRidge = True
      else:
         self._useRockRidge = False

   def _getUseRockRidge(self):
      """
      Property target used to get the use RockRidge flag.
      """
      return self._useRockRidge

   def _setApplicationId(self, value):
      """
      Property target used to set the application id.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The application id must be a non-empty string.")
      self._applicationId = value

   def _getApplicationId(self):
      """
      Property target used to get the application id.
      """
      return self._applicationId

   def _setBiblioFile(self, value):
      """
      Property target used to set the biblio file.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The biblio file must be a non-empty string.")
      self._biblioFile = value

   def _getBiblioFile(self):
      """
      Property target used to get the biblio file.
      """
      return self._biblioFile

   def _setPublisherId(self, value):
      """
      Property target used to set the publisher id.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The publisher id must be a non-empty string.")
      self._publisherId = value

   def _getPublisherId(self):
      """
      Property target used to get the publisher id.
      """
      return self._publisherId

   def _setPreparerId(self, value):
      """
      Property target used to set the preparer id.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The preparer id must be a non-empty string.")
      self._preparerId = value

   def _getPreparerId(self):
      """
      Property target used to get the preparer id.
      """
      return self._preparerId

   def _setVolumeId(self, value):
      """
      Property target used to set the volume id.
      The value must be a non-empty string if it is not C{None}.
      @raise ValueError: If the value is an empty string.
      """
      if value is not None:
         if len(value) < 1:
            raise ValueError("The volume id must be a non-empty string.")
      self._volumeId = value

   def _getVolumeId(self):
      """
      Property target used to get the volume id.
      """
      return self._volumeId

   device = property(_getDevice, _setDevice, None, "Device that image will be written to (device path or SCSI id).")
   boundaries = property(_getBoundaries, _setBoundaries, None, "Session boundaries as required by C{mkisofs}.")
   graftPoint = property(_getGraftPoint, _setGraftPoint, None, "Default image-wide graft point (see L{addEntry} for details).")
   useRockRidge = property(_getUseRockRidge, _setUseRockRidge, None, "Indicates whether to use RockRidge (default is C{True}).")
   applicationId = property(_getApplicationId, _setApplicationId, None, "Optionally specifies the ISO header application id value.")
   biblioFile = property(_getBiblioFile, _setBiblioFile, None, "Optionally specifies the ISO bibliographic file name.")
   publisherId = property(_getPublisherId, _setPublisherId, None, "Optionally specifies the ISO header publisher id value.")
   preparerId = property(_getPreparerId, _setPreparerId, None, "Optionally specifies the ISO header preparer id value.")
   volumeId = property(_getVolumeId, _setVolumeId, None, "Optionally specifies the ISO header volume id value.")


   #########################
   # General public methods
   #########################
   def addEntry(self, path, graftPoint=None, override=False, contentsOnly=False):
      """
      Adds an individual file or directory into the ISO image.

      The path must exist and must be a file or a directory. By default, the
      entry will be placed into the image at the root directory, but this
      behavior can be overridden using the C{graftPoint} parameter or instance
      variable.

      You can use the C{contentsOnly} behavior to revert to the "original"
      C{mkisofs} behavior for adding directories, which is to add only the
      items within the directory, and not the directory itself.

      @note: Things get I{odd} if you try to add a directory to an image that
      will be written to a multisession disc, and the same directory already
      exists in an earlier session on that disc. Not all of the data gets
      written. You really wouldn't want to do this anyway, I guess.

      @note: An exception will be thrown if the path has already been added to
      the image, unless the C{override} parameter is set to C{True}.

      @note: The method's C{graftPoint} parameter overrides the object-wide
      instance variable. If neither the method parameter nor the object-wide
      value is set, the path will be written at the image root. The graft point
      behavior is determined by the value which is in effect I{at the time this
      method is called}, so you I{must} set the object-wide value before
      calling this method for the first time, or your image may not be
      consistent.

      @note: You I{cannot} use the local C{graftPoint} parameter to "turn off"
      an object-wide instance variable by setting it to C{None}. Python's
      default argument functionality buys us a lot, but it can't make this
      method psychic. :)

      @param path: File or directory to be added to the image
      @type path: String representing a path on disk

      @param graftPoint: Graft point to be used when adding this entry
      @type graftPoint: String representing a graft point path, as described above

      @param override: Override an existing entry with the same path.
      @type override: Boolean true/false

      @param contentsOnly: Add directory contents only (standard C{mkisofs} behavior).
      @type contentsOnly: Boolean true/false

      @raise ValueError: If path is not a file or directory, or does not exist.
      @raise ValueError: If the path has already been added, and override is not set.
      @raise ValueError: If a path cannot be encoded properly.
      """
      path = encodePath(path)
      if not override:
         if path in self.entries.keys():
            raise ValueError("Path has already been added to the image.")
      if os.path.islink(path):
         raise ValueError("Path must not be a link.")
      if os.path.isdir(path):
         if graftPoint is not None:
            if contentsOnly:
               self.entries[path] = graftPoint
            else:
               self.entries[path] = os.path.join(graftPoint, os.path.basename(path))
         elif self.graftPoint is not None:
            if contentsOnly:
               self.entries[path] = self.graftPoint
            else:
               self.entries[path] = os.path.join(self.graftPoint, os.path.basename(path))
         else:
            if contentsOnly:
               self.entries[path] = None
            else:
               self.entries[path] = os.path.basename(path)
      elif os.path.isfile(path):
         if graftPoint is not None:
            self.entries[path] = graftPoint
         elif self.graftPoint is not None:
            self.entries[path] = self.graftPoint
         else:
            self.entries[path] = None
      else:
         raise ValueError("Path must be a file or a directory.")
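The graft-point rules that `addEntry()` encodes can be summarized by a small pure function (the name and signature here are illustrative only); it returns the value that would be stored in `self.entries` for a given path:

```python
import os.path

def image_location(path, is_dir, graft_point=None, contents_only=False):
    """Sketch of where addEntry() above places an entry (the self.entries value)."""
    if is_dir and not contents_only:
        # directories normally keep their basename under the graft point
        basename = os.path.basename(path)
        if graft_point is not None:
            return os.path.join(graft_point, basename)
        return basename
    # files, and contentsOnly directories: the graft point as-is (possibly None)
    return graft_point

print(image_location("/etc/profile", is_dir=False))                 # None: /profile at image root
print(image_location("/etc/X11", is_dir=True))                      # X11: /X11 in the image
print(image_location("/etc/X11", is_dir=True, graft_point="base"))  # base/X11
```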
   def getEstimatedSize(self):
      """
      Returns the estimated size (in bytes) of the ISO image.

      This is implemented via the C{-print-size} option to C{mkisofs}, so it
      might take a bit of time to execute. However, the result is as accurate
      as we can get, since it takes into account all of the ISO overhead, the
      true cost of directories in the structure, etc.

      @return: Estimated size of the image, in bytes.

      @raise IOError: If there is a problem calling C{mkisofs}.
      @raise ValueError: If there are no filesystem entries in the image
      """
      if len(self.entries.keys()) == 0:
         raise ValueError("Image does not contain any entries.")
      return self._getEstimatedSize(self.entries)
   def _getEstimatedSize(self, entries):
      """
      Returns the estimated size (in bytes) for the passed-in entries dictionary.
      @return: Estimated size of the image, in bytes.
      @raise IOError: If there is a problem calling C{mkisofs}.
      """
      args = self._buildSizeArgs(entries)
      command = resolveCommand(MKISOFS_COMMAND)
      (result, output) = executeCommand(command, args, returnOutput=True, ignoreStderr=True)
      if result != 0:
         raise IOError("Error (%d) executing mkisofs command to estimate size." % result)
      if len(output) != 1:
         raise IOError("Unable to parse mkisofs output.")
      try:
         sectors = float(output[0])
         size = convertSize(sectors, UNIT_SECTORS, UNIT_BYTES)
         return size
      except ValueError:
         raise IOError("Unable to parse mkisofs output.")
   def writeImage(self, imagePath):
      """
      Writes this image to disk using the image path.

      @param imagePath: Path to write image out as
      @type imagePath: String representing a path on disk

      @raise IOError: If there is an error writing the image to disk.
      @raise ValueError: If there are no filesystem entries in the image
      @raise ValueError: If a path cannot be encoded properly.
      """
      imagePath = encodePath(imagePath)
      if len(self.entries.keys()) == 0:
         raise ValueError("Image does not contain any entries.")
      args = self._buildWriteArgs(self.entries, imagePath)
      command = resolveCommand(MKISOFS_COMMAND)
      (result, output) = executeCommand(command, args, returnOutput=False)
      if result != 0:
         raise IOError("Error (%d) executing mkisofs command to build image." % result)
   #########################################
   # Methods used to build mkisofs commands
   #########################################

   @staticmethod
   def _buildDirEntries(entries):
      """
      Uses an entries dictionary to build a list of directory locations for use
      by C{mkisofs}.

      We build a list of entries that can be passed to C{mkisofs}. Each entry
      is either raw (if no graft point was configured) or in graft-point form
      as described above (if a graft point was configured). The dictionary
      keys are the path names, and the values are the graft points, if any.

      @param entries: Dictionary of image entries (i.e. self.entries)

      @return: List of directory locations for use by C{mkisofs}
      """
      dirEntries = []
      for key in entries.keys():
         if entries[key] is None:
            dirEntries.append(key)
         else:
            dirEntries.append("%s/=%s" % (entries[key].strip("/"), key))
      return dirEntries
   def _buildGeneralArgs(self):
      """
      Builds a list of general arguments to be passed to a C{mkisofs} command.

      The various instance variables (C{applicationId}, etc.) are filled into
      the list of arguments if they are set.
      By default, we will build a RockRidge disc. If you decide to change
      this, think hard about whether you know what you're doing. This option
      is not well-tested.

      @return: List suitable for passing to L{util.executeCommand} as C{args}.
      """
      args = []
      if self.applicationId is not None:
         args.append("-A")
         args.append(self.applicationId)
      if self.biblioFile is not None:
         args.append("-biblio")
         args.append(self.biblioFile)
      if self.publisherId is not None:
         args.append("-publisher")
         args.append(self.publisherId)
      if self.preparerId is not None:
         args.append("-p")
         args.append(self.preparerId)
      if self.volumeId is not None:
         args.append("-V")
         args.append(self.volumeId)
      return args
   def _buildSizeArgs(self, entries):
      """
      Builds a list of arguments to be passed to a C{mkisofs} command.

      The various instance variables (C{applicationId}, etc.) are filled into
      the list of arguments if they are set. The command will be built to just
      return size output (a simple count of sectors via the C{-print-size}
      option), rather than an image file on disk.

      By default, we will build a RockRidge disc. If you decide to change
      this, think hard about whether you know what you're doing. This option
      is not well-tested.

      @param entries: Dictionary of image entries (i.e. self.entries)

      @return: List suitable for passing to L{util.executeCommand} as C{args}.
      """
      args = self._buildGeneralArgs()
      args.append("-print-size")
      args.append("-graft-points")
      if self.useRockRidge:
         args.append("-r")
      if self.device is not None and self.boundaries is not None:
         args.append("-C")
         args.append("%d,%d" % (self.boundaries[0], self.boundaries[1]))
         args.append("-M")
         args.append(self.device)
      args.extend(self._buildDirEntries(entries))
      return args
   def _buildWriteArgs(self, entries, imagePath):
      """
      Builds a list of arguments to be passed to a C{mkisofs} command.

      The various instance variables (C{applicationId}, etc.) are filled into
      the list of arguments if they are set. The command will be built to
      write an image to disk.

      By default, we will build a RockRidge disc. If you decide to change
      this, think hard about whether you know what you're doing. This option
      is not well-tested.

      @param entries: Dictionary of image entries (i.e. self.entries)

      @param imagePath: Path to write image out as
      @type imagePath: String representing a path on disk

      @return: List suitable for passing to L{util.executeCommand} as C{args}.
      """
      args = self._buildGeneralArgs()
      args.append("-graft-points")
      if self.useRockRidge:
         args.append("-r")
      args.append("-o")
      args.append(imagePath)
      if self.device is not None and self.boundaries is not None:
         args.append("-C")
         args.append("%d,%d" % (self.boundaries[0], self.boundaries[1]))
         args.append("-M")
         args.append(self.device)
      args.extend(self._buildDirEntries(entries))
      return args

CedarBackup2-2.26.5/doc/interface/CedarBackup2.image-module.html

CedarBackup2.image

    Module image


    Provides interface backwards compatibility.

    In Cedar Backup 2.10.0, a refactoring effort took place while adding code to support DVD hardware. All of the writer functionality was moved to the writers/ package. This mostly-empty file remains to preserve the Cedar Backup library interface.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Variables
      __package__ = 'CedarBackup2'
CedarBackup2-2.26.5/doc/interface/CedarBackup2.filesystem.SpanItem-class.html

CedarBackup2.filesystem.SpanItem

    Class SpanItem


    object --+
             |
            SpanItem
    

    Item returned by BackupFileList.generateSpan.

Instance Methods
     
    __init__(self, fileList, size, capacity, utilization)
    Create object.

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Properties

    Inherited from object: __class__

Method Details

    __init__(self, fileList, size, capacity, utilization)
    (Constructor)


    Create object.

    Parameters:
    • fileList - List of files
    • size - Size (in bytes) of files
    • utilization - Utilization, as a percentage (0-100)
    Overrides: object.__init__

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.initialize-pysrc.html

CedarBackup2.actions.initialize

    Source Code for Module CedarBackup2.actions.initialize

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Copyright (c) 2007,2010 Kenneth J. Pronovici. 
    12  # All rights reserved. 
    13  # 
    14  # This program is free software; you can redistribute it and/or 
    15  # modify it under the terms of the GNU General Public License, 
    16  # Version 2, as published by the Free Software Foundation. 
    17  # 
    18  # This program is distributed in the hope that it will be useful, 
    19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
    20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
    21  # 
    22  # Copies of the GNU General Public License are available from 
    23  # the Free Software Foundation website, http://www.gnu.org/. 
    24  # 
    25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    26  # 
    27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    28  # Language : Python 2 (>= 2.7) 
    29  # Project  : Cedar Backup, release 2 
    30  # Purpose  : Implements the standard 'initialize' action. 
    31  # 
    32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    33   
    34  ######################################################################## 
    35  # Module documentation 
    36  ######################################################################## 
    37   
    38  """ 
    39  Implements the standard 'initialize' action. 
    40  @sort: executeInitialize 
    41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    42  """ 
    43   
    44   
    45  ######################################################################## 
    46  # Imported modules 
    47  ######################################################################## 
    48   
    49  # System modules 
    50  import logging 
    51   
    52  # Cedar Backup modules 
    53  from CedarBackup2.actions.util import initializeMediaState 
    54   
    55   
    56  ######################################################################## 
    57  # Module-wide constants and variables 
    58  ######################################################################## 
    59   
    60  logger = logging.getLogger("CedarBackup2.log.actions.initialize") 
    61   
    62   
    63  ######################################################################## 
    64  # Public functions 
    65  ######################################################################## 
    66   
    67  ############################### 
    68  # executeInitialize() function 
    69  ############################### 
    70   
    
def executeInitialize(configPath, options, config):
   """
   Executes the initialize action.

   The initialize action initializes the media currently in the writer
   device so that Cedar Backup can recognize it later. This is an optional
   step; it's only required if checkMedia is set on the store configuration.

   @param configPath: Path to configuration file on disk.
   @type configPath: String representing a path on disk.

   @param options: Program command-line options.
   @type options: Options object.

   @param config: Program configuration.
   @type config: Config object.
   """
   logger.debug("Executing the 'initialize' action.")
   if config.options is None or config.store is None:
      raise ValueError("Store configuration is not properly filled in.")
   initializeMediaState(config)
   logger.info("Executed the 'initialize' action successfully.")

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.config-module.html

config

    Module config


    Classes

    ActionDependencies
    ActionHook
    BlankBehavior
    ByteQuantity
    CollectConfig
    CollectDir
    CollectFile
    CommandOverride
    Config
    ExtendedAction
    ExtensionsConfig
    LocalPeer
    OptionsConfig
    PeersConfig
    PostActionHook
    PreActionHook
    PurgeConfig
    PurgeDir
    ReferenceConfig
    RemotePeer
    StageConfig
    StoreConfig

    Functions

    addByteQuantityNode
    readByteQuantity

    Variables

    ACTION_NAME_REGEX
    DEFAULT_DEVICE_TYPE
    DEFAULT_MEDIA_TYPE
    REWRITABLE_MEDIA_TYPES
    VALID_ARCHIVE_MODES
    VALID_BLANK_MODES
    VALID_BYTE_UNITS
    VALID_CD_MEDIA_TYPES
    VALID_COLLECT_MODES
    VALID_COMPRESS_MODES
    VALID_DEVICE_TYPES
    VALID_DVD_MEDIA_TYPES
    VALID_FAILURE_MODES
    VALID_MEDIA_TYPES
    VALID_ORDER_MODES
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.xmlutil-module.html

xmlutil

    Module xmlutil


    Classes

    Serializer

    Functions

    addBooleanNode
    addContainerNode
    addIntegerNode
    addLongNode
    addStringNode
    createInputDom
    createOutputDom
    isElement
    readBoolean
    readChildren
    readFirstChild
    readFloat
    readInteger
    readLong
    readString
    readStringList
    serializeDom

    Variables

    FALSE_BOOLEAN_VALUES
    TRUE_BOOLEAN_VALUES
    VALID_BOOLEAN_VALUES
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.util-pysrc.html

CedarBackup2.actions.util

    Source Code for Module CedarBackup2.actions.util

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Implements action-related utilities 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements action-related utilities 
     40  @sort: findDailyDirs 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import os 
     51  import time 
     52  import tempfile 
     53  import logging 
     54   
     55  # Cedar Backup modules 
     56  from CedarBackup2.filesystem import FilesystemList 
     57  from CedarBackup2.util import changeOwnership 
     58  from CedarBackup2.util import deviceMounted 
     59  from CedarBackup2.writers.util import readMediaLabel 
     60  from CedarBackup2.writers.cdwriter import CdWriter 
     61  from CedarBackup2.writers.dvdwriter import DvdWriter 
     62  from CedarBackup2.writers.cdwriter import MEDIA_CDR_74, MEDIA_CDR_80, MEDIA_CDRW_74, MEDIA_CDRW_80 
     63  from CedarBackup2.writers.dvdwriter import MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW 
     64  from CedarBackup2.config import DEFAULT_MEDIA_TYPE, DEFAULT_DEVICE_TYPE, REWRITABLE_MEDIA_TYPES 
     65  from CedarBackup2.actions.constants import INDICATOR_PATTERN 
     66   
     67   
     68  ######################################################################## 
     69  # Module-wide constants and variables 
     70  ######################################################################## 
     71   
     72  logger = logging.getLogger("CedarBackup2.log.actions.util") 
     73  MEDIA_LABEL_PREFIX   = "CEDAR BACKUP" 
     74   
     75   
     76  ######################################################################## 
     77  # Public utility functions 
     78  ######################################################################## 
     79   
     80  ########################### 
     81  # findDailyDirs() function 
     82  ########################### 
     83   
    
  84  def findDailyDirs(stagingDir, indicatorFile): 
  85     """ 
  86     Returns a list of all daily staging directories that do not contain 
  87     the indicated indicator file. 
  88   
  89     @param stagingDir: Configured staging directory (config.targetDir) 
  90     @param indicatorFile: Name of the indicator file to look for 
  91     @return: List of absolute paths to daily staging directories. 
  92     """ 
  93     results = FilesystemList() 
  94     yearDirs = FilesystemList() 
  95     yearDirs.excludeFiles = True 
  96     yearDirs.excludeLinks = True 
  97     yearDirs.addDirContents(path=stagingDir, recursive=False, addSelf=False) 
  98     for yearDir in yearDirs: 
  99        monthDirs = FilesystemList() 
 100        monthDirs.excludeFiles = True 
 101        monthDirs.excludeLinks = True 
 102        monthDirs.addDirContents(path=yearDir, recursive=False, addSelf=False) 
 103        for monthDir in monthDirs: 
 104           dailyDirs = FilesystemList() 
 105           dailyDirs.excludeFiles = True 
 106           dailyDirs.excludeLinks = True 
 107           dailyDirs.addDirContents(path=monthDir, recursive=False, addSelf=False) 
 108           for dailyDir in dailyDirs: 
 109              if os.path.exists(os.path.join(dailyDir, indicatorFile)): 
 110                 logger.debug("Skipping directory [%s]; contains %s.", dailyDir, indicatorFile) 
 111              else: 
 112                 logger.debug("Adding [%s] to list of daily directories.", dailyDir) 
 113                 results.append(dailyDir)  # just put it in the list, no fancy operations 
 114     return results 
 115   
 116   
 117  ########################### 
 118  # createWriter() function 
 119  ########################### 
 120   
 121  def createWriter(config): 
 122     """ 
 123     Creates a writer object based on current configuration. 
 124   
 125     This function creates and returns a writer based on configuration.  This is 
 126     done to abstract action functionality from knowing what kind of writer is in 
 127     use.  Since all writers implement the same interface, there's no need for 
 128     actions to care which one they're working with. 
 129   
 130     Currently, the C{cdwriter} and C{dvdwriter} device types are allowed.  An 
 131     exception will be raised if any other device type is used. 
 132   
 133     This function also checks to make sure that the device isn't mounted before 
 134     creating a writer object for it.  Experience shows that sometimes if the 
 135     device is mounted, we have problems with the backup.  We may as well do the 
 136     check here first, before instantiating the writer. 
 137   
 138     @param config: Config object. 
 139   
 140     @return: Writer that can be used to write a directory to some media. 
 141   
 142     @raise ValueError: If there is a problem getting the writer. 
 143     @raise IOError: If there is a problem creating the writer object. 
 144     """ 
 145     devicePath = config.store.devicePath 
 146     deviceScsiId = config.store.deviceScsiId 
 147     driveSpeed = config.store.driveSpeed 
 148     noEject = config.store.noEject 
 149     refreshMediaDelay = config.store.refreshMediaDelay 
 150     ejectDelay = config.store.ejectDelay 
 151     deviceType = _getDeviceType(config) 
 152     mediaType = _getMediaType(config) 
 153     if deviceMounted(devicePath): 
 154        raise IOError("Device [%s] is currently mounted." % (devicePath)) 
 155     if deviceType == "cdwriter": 
 156        return CdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) 
 157     elif deviceType == "dvdwriter": 
 158        return DvdWriter(devicePath, deviceScsiId, driveSpeed, mediaType, noEject, refreshMediaDelay, ejectDelay) 
 159     else: 
 160        raise ValueError("Device type [%s] is invalid." % deviceType) 
 161   
 162   
 163  ################################ 
 164  # writeIndicatorFile() function 
 165  ################################ 
 166   
 167  def writeIndicatorFile(targetDir, indicatorFile, backupUser, backupGroup): 
 168     """ 
 169     Writes an indicator file into a target directory. 
 170     @param targetDir: Target directory in which to write indicator 
 171     @param indicatorFile: Name of the indicator file 
 172     @param backupUser: User that indicator file should be owned by 
 173     @param backupGroup: Group that indicator file should be owned by 
 174     @raise IOError: If there is a problem writing the indicator file 
 175     """ 
 176     filename = os.path.join(targetDir, indicatorFile) 
 177     logger.debug("Writing indicator file [%s].", filename) 
 178     try: 
 179        open(filename, "w").write("") 
 180        changeOwnership(filename, backupUser, backupGroup) 
 181     except Exception, e: 
 182        logger.error("Error writing [%s]: %s", filename, e) 
 183        raise e 
 184   
 185   
 186  ############################ 
 187  # getBackupFiles() function 
 188  ############################ 
 189   
 190  def getBackupFiles(targetDir): 
 191     """ 
 192     Gets a list of backup files in a target directory. 
 193   
 194     Files that match INDICATOR_PATTERN (i.e. C{"cback.store"}, C{"cback.stage"}, 
 195     etc.) are assumed to be indicator files and are ignored. 
 196   
 197     @param targetDir: Directory to look in 
 198   
 199     @return: List of backup files in the directory 
 200   
 201     @raise ValueError: If the target directory does not exist 
 202     """ 
 203     if not os.path.isdir(targetDir): 
 204        raise ValueError("Target directory [%s] is not a directory or does not exist." % targetDir) 
 205     fileList = FilesystemList() 
 206     fileList.excludeDirs = True 
 207     fileList.excludeLinks = True 
 208     fileList.excludeBasenamePatterns = INDICATOR_PATTERN 
 209     fileList.addDirContents(targetDir) 
 210     return fileList 
 211   
 212   
 213  #################### 
 214  # checkMediaState() 
 215  #################### 
 216   
 217  def checkMediaState(storeConfig): 
 218     """ 
 219     Checks state of the media in the backup device to confirm whether it has 
 220     been initialized for use with Cedar Backup. 
 221   
 222     We can tell whether the media has been initialized by looking at its media 
 223     label.  If the media label starts with MEDIA_LABEL_PREFIX, then it has been 
 224     initialized. 
 225   
 226     The check varies depending on whether the media is rewritable or not.  For 
 227     non-rewritable media, we also accept a C{None} media label, since this kind 
 228     of media cannot safely be initialized. 
 229   
 230     @param storeConfig: Store configuration 
 231   
 232     @raise ValueError: If media is not initialized. 
 233     """ 
 234     mediaLabel = readMediaLabel(storeConfig.devicePath) 
 235     if storeConfig.mediaType in REWRITABLE_MEDIA_TYPES: 
 236        if mediaLabel is None: 
 237           raise ValueError("Media has not been initialized: no media label available") 
 238        elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): 
 239           raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) 
 240     else: 
 241        if mediaLabel is None: 
 242           logger.info("Media has no media label; assuming OK since media is not rewritable.") 
 243        elif not mediaLabel.startswith(MEDIA_LABEL_PREFIX): 
 244           raise ValueError("Media has not been initialized: unrecognized media label [%s]" % mediaLabel) 
 245   
 246   
 247  ######################### 
 248  # initializeMediaState() 
 249  ######################### 
 250   
 251  def initializeMediaState(config): 
 252     """ 
 253     Initializes state of the media in the backup device so Cedar Backup can 
 254     recognize it. 
 255   
 256     This is done by writing a mostly-empty image (it contains a "Cedar Backup" 
 257     directory) to the media with a known media label. 
 258   
 259     @note: Only rewritable media (CD-RW, DVD+RW) can be initialized.  It 
 260     doesn't make any sense to initialize media that cannot be rewritten (CD-R, 
 261     DVD+R), since Cedar Backup would then not be able to use that media for a 
 262     backup. 
 263   
 264     @param config: Cedar Backup configuration 
 265   
 266     @raise ValueError: If media could not be initialized. 
 267     @raise ValueError: If the configured media type is not rewritable 
 268     """ 
 269     if not config.store.mediaType in REWRITABLE_MEDIA_TYPES: 
 270        raise ValueError("Only rewritable media types can be initialized.") 
 271     mediaLabel = buildMediaLabel() 
 272     writer = createWriter(config) 
 273     writer.refreshMedia() 
 274     writer.initializeImage(True, config.options.workingDir, mediaLabel)  # always create a new disc 
 275     tempdir = tempfile.mkdtemp(dir=config.options.workingDir) 
 276     try: 
 277        writer.addImageEntry(tempdir, "CedarBackup") 
 278        writer.writeImage() 
 279     finally: 
 280        if os.path.exists(tempdir): 
 281           try: 
 282              os.rmdir(tempdir) 
 283           except: pass 
 284   
 285   
 286  #################### 
 287  # buildMediaLabel() 
 288  #################### 
 289   
 290  def buildMediaLabel(): 
 291     """ 
 292     Builds a media label to be used on Cedar Backup media. 
 293     @return: Media label as a string. 
 294     """ 
 295     currentDate = time.strftime("%d-%b-%Y").upper() 
 296     return "%s %s" % (MEDIA_LABEL_PREFIX, currentDate) 
 297   
 298   
 299  ######################################################################## 
 300  # Private attribute "getter" functions 
 301  ######################################################################## 
 302   
 303  ############################ 
 304  # _getDeviceType() function 
 305  ############################ 
 306   
 307  def _getDeviceType(config): 
 308     """ 
 309     Gets the device type that should be used for storing. 
 310   
 311     Use the configured device type if not C{None}, otherwise use 
 312     L{config.DEFAULT_DEVICE_TYPE}. 
 313   
 314     @param config: Config object. 
 315     @return: Device type to be used. 
 316     """ 
 317     if config.store.deviceType is None: 
 318        deviceType = DEFAULT_DEVICE_TYPE 
 319     else: 
 320        deviceType = config.store.deviceType 
 321     logger.debug("Device type is [%s]", deviceType) 
 322     return deviceType 
 323   
 324   
 325  ########################### 
 326  # _getMediaType() function 
 327  ########################### 
 328   
 329  def _getMediaType(config): 
 330     """ 
 331     Gets the media type that should be used for storing. 
 332   
 333     Use the configured media type if not C{None}, otherwise use 
 334     C{DEFAULT_MEDIA_TYPE}. 
 335   
 336     Once we figure out what configuration value to use, we return a media type 
 337     value that is valid in one of the supported writers:: 
 338   
 339        MEDIA_CDR_74 
 340        MEDIA_CDRW_74 
 341        MEDIA_CDR_80 
 342        MEDIA_CDRW_80 
 343        MEDIA_DVDPLUSR 
 344        MEDIA_DVDPLUSRW 
 345   
 346     @param config: Config object. 
 347   
 348     @return: Media type to be used as a writer media type value. 
 349     @raise ValueError: If the media type is not valid. 
 350     """ 
 351     if config.store.mediaType is None: 
 352        mediaType = DEFAULT_MEDIA_TYPE 
 353     else: 
 354        mediaType = config.store.mediaType 
 355     if mediaType == "cdr-74": 
 356        logger.debug("Media type is MEDIA_CDR_74.") 
 357        return MEDIA_CDR_74 
 358     elif mediaType == "cdrw-74": 
 359        logger.debug("Media type is MEDIA_CDRW_74.") 
 360        return MEDIA_CDRW_74 
 361     elif mediaType == "cdr-80": 
 362        logger.debug("Media type is MEDIA_CDR_80.") 
 363        return MEDIA_CDR_80 
 364     elif mediaType == "cdrw-80": 
 365        logger.debug("Media type is MEDIA_CDRW_80.") 
 366        return MEDIA_CDRW_80 
 367     elif mediaType == "dvd+r": 
 368        logger.debug("Media type is MEDIA_DVDPLUSR.") 
 369        return MEDIA_DVDPLUSR 
 370     elif mediaType == "dvd+rw": 
 371        logger.debug("Media type is MEDIA_DVDPLUSRW.") 
 372        return MEDIA_DVDPLUSRW 
 373     else: 
 374        raise ValueError("Media type [%s] is not valid." % mediaType) 
    375
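The if/elif chain in _getMediaType() is equivalent to a dictionary lookup.  This sketch shows the idea; the integer values stand in for the real writer constants, which are opaque here, and lookupMediaType is a hypothetical standalone function:

```python
# Placeholder values standing in for the writer constants imported from
# CedarBackup2.writers.cdwriter and CedarBackup2.writers.dvdwriter.
MEDIA_CDR_74, MEDIA_CDRW_74, MEDIA_CDR_80 = 1, 2, 3
MEDIA_CDRW_80, MEDIA_DVDPLUSR, MEDIA_DVDPLUSRW = 4, 5, 6

# Configuration string -> writer media type value.
_MEDIA_TYPE_MAP = {
    "cdr-74": MEDIA_CDR_74,
    "cdrw-74": MEDIA_CDRW_74,
    "cdr-80": MEDIA_CDR_80,
    "cdrw-80": MEDIA_CDRW_80,
    "dvd+r": MEDIA_DVDPLUSR,
    "dvd+rw": MEDIA_DVDPLUSRW,
}

def lookupMediaType(mediaType):
    # Same contract as the chain above: unknown types raise ValueError.
    try:
        return _MEDIA_TYPE_MAP[mediaType]
    except KeyError:
        raise ValueError("Media type [%s] is not valid." % mediaType)
```

The chain form in the source has the advantage of logging which constant was chosen; the table form trades that for brevity.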

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.writers.util-module.html: util

    Module util


    Classes

    IsoImage

    Functions

    readMediaLabel
    validateDevice
    validateDriveSpeed
    validateScsiId

    Variables

    MKISOFS_COMMAND
    VOLNAME_COMMAND
    __package__
    logger

[hide private]

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.sysinfo-pysrc.html: CedarBackup2.extend.sysinfo
    Package CedarBackup2 :: Package extend :: Module sysinfo
    [hide private]
    [frames] | no frames]

    Source Code for Module CedarBackup2.extend.sysinfo

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2005,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Official Cedar Backup Extensions 
     30  # Purpose  : Provides an extension to save off important system recovery information. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Provides an extension to save off important system recovery information. 
     40   
     41  This is a simple Cedar Backup extension used to save off important system 
     42  recovery information.  It saves off three types of information: 
     43   
     44     - Currently-installed Debian packages via C{dpkg --get-selections} 
     45     - Disk partition information via C{fdisk -l} 
     46     - System-wide mounted filesystem contents, via C{ls -laR} 
     47   
     48  The saved-off information is placed into the collect directory and is 
     49  compressed using C{bzip2} to save space. 
     50   
     51  This extension relies on the options and collect configurations in the standard 
     52  Cedar Backup configuration file, but requires no new configuration of its own. 
     53  No public functions other than the action are exposed since all of this is 
     54  pretty simple. 
     55   
     56  @note: If the C{dpkg} or C{fdisk} commands cannot be found in their normal 
     57  locations or executed by the current user, those steps will be skipped and a 
     58  note will be logged at the INFO level. 
     59   
     60  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     61  """ 
     62   
     63  ######################################################################## 
     64  # Imported modules 
     65  ######################################################################## 
     66   
     67  # System modules 
     68  import os 
     69  import logging 
     70  from bz2 import BZ2File 
     71   
     72  # Cedar Backup modules 
     73  from CedarBackup2.util import resolveCommand, executeCommand, changeOwnership 
     74   
     75   
     76  ######################################################################## 
     77  # Module-wide constants and variables 
     78  ######################################################################## 
     79   
     80  logger = logging.getLogger("CedarBackup2.log.extend.sysinfo") 
     81   
     82  DPKG_PATH      = "/usr/bin/dpkg" 
     83  FDISK_PATH     = "/sbin/fdisk" 
     84   
     85  DPKG_COMMAND   = [ DPKG_PATH, "--get-selections", ] 
     86  FDISK_COMMAND  = [ FDISK_PATH, "-l", ] 
     87  LS_COMMAND     = [ "ls", "-laR", "/", ] 
     88   
     89   
     90  ######################################################################## 
     91  # Public functions 
     92  ######################################################################## 
     93   
     94  ########################### 
     95  # executeAction() function 
     96  ########################### 
     97   
    
  98  def executeAction(configPath, options, config): 
  99     """ 
 100     Executes the sysinfo backup action. 
 101   
 102     @param configPath: Path to configuration file on disk. 
 103     @type configPath: String representing a path on disk. 
 104   
 105     @param options: Program command-line options. 
 106     @type options: Options object. 
 107   
 108     @param config: Program configuration. 
 109     @type config: Config object. 
 110   
 111     @raise ValueError: Under many generic error conditions 
 112     @raise IOError: If the backup process fails for some reason. 
 113     """ 
 114     logger.debug("Executing sysinfo extended action.") 
 115     if config.options is None or config.collect is None: 
 116        raise ValueError("Cedar Backup configuration is not properly filled in.") 
 117     _dumpDebianPackages(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) 
 118     _dumpPartitionTable(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) 
 119     _dumpFilesystemContents(config.collect.targetDir, config.options.backupUser, config.options.backupGroup) 
 120     logger.info("Executed the sysinfo extended action successfully.") 
    121
 122  def _dumpDebianPackages(targetDir, backupUser, backupGroup, compress=True): 
 123     """ 
 124     Dumps a list of currently installed Debian packages via C{dpkg}. 
 125     @param targetDir: Directory to write output file into. 
 126     @param backupUser: User which should own the resulting file. 
 127     @param backupGroup: Group which should own the resulting file. 
 128     @param compress: Indicates whether to compress the output file. 
 129     @raise IOError: If the dump fails for some reason. 
 130     """ 
 131     if not os.path.exists(DPKG_PATH): 
 132        logger.info("Not executing Debian package dump since %s doesn't seem to exist.", DPKG_PATH) 
 133     elif not os.access(DPKG_PATH, os.X_OK): 
 134        logger.info("Not executing Debian package dump since %s cannot be executed.", DPKG_PATH) 
 135     else: 
 136        (outputFile, filename) = _getOutputFile(targetDir, "dpkg-selections", compress) 
 137        try: 
 138           command = resolveCommand(DPKG_COMMAND) 
 139           result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile)[0] 
 140           if result != 0: 
 141              raise IOError("Error [%d] executing Debian package dump." % result) 
 142        finally: 
 143           outputFile.close() 
 144        if not os.path.exists(filename): 
 145           raise IOError("File [%s] does not seem to exist after Debian package dump finished." % filename) 
 146        changeOwnership(filename, backupUser, backupGroup) 
    147
 148  def _dumpPartitionTable(targetDir, backupUser, backupGroup, compress=True): 
 149     """ 
 150     Dumps information about the partition table via C{fdisk}. 
 151     @param targetDir: Directory to write output file into. 
 152     @param backupUser: User which should own the resulting file. 
 153     @param backupGroup: Group which should own the resulting file. 
 154     @param compress: Indicates whether to compress the output file. 
 155     @raise IOError: If the dump fails for some reason. 
 156     """ 
 157     if not os.path.exists(FDISK_PATH): 
 158        logger.info("Not executing partition table dump since %s doesn't seem to exist.", FDISK_PATH) 
 159     elif not os.access(FDISK_PATH, os.X_OK): 
 160        logger.info("Not executing partition table dump since %s cannot be executed.", FDISK_PATH) 
 161     else: 
 162        (outputFile, filename) = _getOutputFile(targetDir, "fdisk-l", compress) 
 163        try: 
 164           command = resolveCommand(FDISK_COMMAND) 
 165           result = executeCommand(command, [], returnOutput=False, ignoreStderr=True, outputFile=outputFile)[0] 
 166           if result != 0: 
 167              raise IOError("Error [%d] executing partition table dump." % result) 
 168        finally: 
 169           outputFile.close() 
 170        if not os.path.exists(filename): 
 171           raise IOError("File [%s] does not seem to exist after partition table dump finished." % filename) 
 172        changeOwnership(filename, backupUser, backupGroup) 
    173
 174  def _dumpFilesystemContents(targetDir, backupUser, backupGroup, compress=True): 
 175     """ 
 176     Dumps complete listing of filesystem contents via C{ls -laR}. 
 177     @param targetDir: Directory to write output file into. 
 178     @param backupUser: User which should own the resulting file. 
 179     @param backupGroup: Group which should own the resulting file. 
 180     @param compress: Indicates whether to compress the output file. 
 181     @raise IOError: If the dump fails for some reason. 
 182     """ 
 183     (outputFile, filename) = _getOutputFile(targetDir, "ls-laR", compress) 
 184     try: 
 185        # Note: can't count on return status from 'ls', so we don't check it. 
 186        command = resolveCommand(LS_COMMAND) 
 187        executeCommand(command, [], returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=outputFile) 
 188     finally: 
 189        outputFile.close() 
 190     if not os.path.exists(filename): 
 191        raise IOError("File [%s] does not seem to exist after filesystem contents dump finished." % filename) 
 192     changeOwnership(filename, backupUser, backupGroup) 
    193
 194  def _getOutputFile(targetDir, name, compress=True): 
 195     """ 
 196     Opens the output file used for saving a dump to the filesystem. 
 197   
 198     The filename will be C{name.txt} (or C{name.txt.bz2} if C{compress} is 
 199     C{True}), written in the target directory. 
 200   
 201     @param targetDir: Target directory to write file in. 
 202     @param name: Name of the file to create. 
 203     @param compress: Indicates whether to write compressed output. 
 204   
 205     @return: Tuple of (Output file object, filename) 
 206     """ 
 207     filename = os.path.join(targetDir, "%s.txt" % name) 
 208     if compress: 
 209        filename = "%s.bz2" % filename 
 210     logger.debug("Dump file will be [%s].", filename) 
 211     if compress: 
 212        outputFile = BZ2File(filename, "w") 
 213     else: 
 214        outputFile = open(filename, "w") 
 215     return (outputFile, filename) 
    216

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions-pysrc.html: CedarBackup2.actions
    Package CedarBackup2 :: Package actions
    [hide private]
    [frames] | no frames]

    Source Code for Package CedarBackup2.actions

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 2 (>= 2.7) 
    13  # Project  : Official Cedar Backup Extensions 
    14  # Purpose  : Provides package initialization 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Cedar Backup actions. 
    24   
    25  This package contains code related to the official Cedar Backup actions (collect, 
    26  stage, store, purge, rebuild, and validate). 
    27   
    28  The action modules consist of mostly "glue" code that uses other lower-level 
    29  functionality to actually implement a backup.  There is one module for each 
    30  high-level backup action, plus a module that provides shared constants. 
    31   
    32  All of the public action functions implement the Cedar Backup Extension 
    33  Architecture Interface, i.e. the same interface that extensions implement. 
    34   
    35  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    36  """ 
    37   
    38   
    39  ######################################################################## 
    40  # Package initialization 
    41  ######################################################################## 
    42   
    43  # Using 'from CedarBackup2.actions import *' will just import the modules listed 
    44  # in the __all__ variable. 
    45   
    46  __all__ = [ 'constants', 'collect', 'initialize', 'stage', 'store', 'purge', 'util', 'rebuild', 'validate', ] 
    47   
    

CedarBackup2-2.26.5/doc/interface/CedarBackup2.image-pysrc.html: CedarBackup2.image
    Package CedarBackup2 :: Module image
    [hide private]
    [frames] | no frames]

    Source Code for Module CedarBackup2.image

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 2 (>= 2.7) 
    13  # Project  : Cedar Backup, release 2 
    14  # Purpose  : Provides interface backwards compatibility. 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Provides interface backwards compatibility. 
    24   
    25  In Cedar Backup 2.10.0, a refactoring effort took place while adding code to 
    26  support DVD hardware.  All of the writer functionality was moved to the 
    27  writers/ package.  This mostly-empty file remains to preserve the Cedar Backup 
    28  library interface. 
    29   
    30  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    31  """ 
    32   
    33  ######################################################################## 
    34  # Imported modules 
    35  ######################################################################## 
    36   
    37  from CedarBackup2.writers.util import IsoImage  # pylint: disable=W0611 
    38   
    

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.rebuild-pysrc.html: CedarBackup2.actions.rebuild
    Package CedarBackup2 :: Package actions :: Module rebuild
    [hide private]
    [frames] | no frames]

    Source Code for Module CedarBackup2.actions.rebuild

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2004-2007,2010 Kenneth J. Pronovici. 
     12  # All rights reserved. 
     13  # 
     14  # This program is free software; you can redistribute it and/or 
     15  # modify it under the terms of the GNU General Public License, 
     16  # Version 2, as published by the Free Software Foundation. 
     17  # 
     18  # This program is distributed in the hope that it will be useful, 
     19  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     20  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     21  # 
     22  # Copies of the GNU General Public License are available from 
     23  # the Free Software Foundation website, http://www.gnu.org/. 
     24  # 
     25  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     26  # 
     27  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     28  # Language : Python 2 (>= 2.7) 
     29  # Project  : Cedar Backup, release 2 
     30  # Purpose  : Implements the standard 'rebuild' action. 
     31  # 
     32  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     33   
     34  ######################################################################## 
     35  # Module documentation 
     36  ######################################################################## 
     37   
     38  """ 
     39  Implements the standard 'rebuild' action. 
     40  @sort: executeRebuild 
     41  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     42  """ 
     43   
     44   
     45  ######################################################################## 
     46  # Imported modules 
     47  ######################################################################## 
     48   
     49  # System modules 
     50  import sys 
     51  import os 
     52  import logging 
     53  import datetime 
     54   
     55  # Cedar Backup modules 
     56  from CedarBackup2.util import deriveDayOfWeek 
     57  from CedarBackup2.actions.util import checkMediaState 
     58  from CedarBackup2.actions.constants import DIR_TIME_FORMAT, STAGE_INDICATOR 
     59  from CedarBackup2.actions.store import writeImage, writeStoreIndicator, consistencyCheck 
     60   
     61   
     62  ######################################################################## 
     63  # Module-wide constants and variables 
     64  ######################################################################## 
     65   
     66  logger = logging.getLogger("CedarBackup2.log.actions.rebuild") 
     67   
     68   
     69  ######################################################################## 
     70  # Public functions 
     71  ######################################################################## 
     72   
     73  ############################ 
     74  # executeRebuild() function 
     75  ############################ 
     76   
    
  77  def executeRebuild(configPath, options, config): 
  78     """ 
  79     Executes the rebuild backup action. 
  80   
  81     This function exists mainly to recreate a disc that has been "trashed" due 
  82     to media or hardware problems.  Note that the "stage complete" indicator 
  83     isn't checked for this action. 
  84   
  85     Note that the rebuild action and the store action are very similar.  The 
  86     main difference is that while store only stores a single day's staging 
  87     directory, the rebuild action operates on multiple staging directories. 
  88   
  89     @param configPath: Path to configuration file on disk. 
  90     @type configPath: String representing a path on disk. 
  91   
  92     @param options: Program command-line options. 
  93     @type options: Options object. 
  94   
  95     @param config: Program configuration. 
  96     @type config: Config object. 
  97   
  98     @raise ValueError: Under many generic error conditions 
  99     @raise IOError: If there are problems reading or writing files. 
 100     """ 
 101     logger.debug("Executing the 'rebuild' action.") 
 102     if sys.platform == "darwin": 
 103        logger.warn("Warning: the rebuild action is not fully supported on Mac OS X.") 
 104        logger.warn("See the Cedar Backup software manual for further information.") 
 105     if config.options is None or config.store is None: 
 106        raise ValueError("Rebuild configuration is not properly filled in.") 
 107     if config.store.checkMedia: 
 108        checkMediaState(config.store)  # raises exception if media is not initialized 
 109     stagingDirs = _findRebuildDirs(config) 
 110     writeImage(config, True, stagingDirs) 
 111     if config.store.checkData: 
 112        if sys.platform == "darwin": 
 113           logger.warn("Warning: consistency check cannot be run successfully on Mac OS X.") 
 114           logger.warn("See the Cedar Backup software manual for further information.") 
 115        else: 
 116           logger.debug("Running consistency check of media.") 
 117           consistencyCheck(config, stagingDirs) 
 118     writeStoreIndicator(config, stagingDirs) 
 119     logger.info("Executed the 'rebuild' action successfully.") 
########################################################################
# Private utility functions
########################################################################

##############################
# _findRebuildDirs() function
##############################
def _findRebuildDirs(config):
    """
    Finds the set of directories to be included in a disc rebuild.

    The rebuild action is supposed to recreate the "last week's" disc.  This
    won't always be possible if some of the staging directories are missing.
    However, the general procedure is to look back into the past no further than
    the previous "starting day of week", and then work forward from there trying
    to find all of the staging directories between then and now that still exist
    and have a stage indicator.

    @param config: Config object.

    @return: Correct staging dir, as a dict mapping directory to date suffix.
    @raise IOError: If we do not find at least one staging directory.
    """
    stagingDirs = {}
    start = deriveDayOfWeek(config.options.startingDay)
    today = datetime.date.today()
    if today.weekday() >= start:
        days = today.weekday() - start + 1
    else:
        days = 7 - (start - today.weekday()) + 1
    for i in range(days):
        currentDay = today - datetime.timedelta(days=i)
        dateSuffix = currentDay.strftime(DIR_TIME_FORMAT)
        stageDir = os.path.join(config.store.sourceDir, dateSuffix)
        indicator = os.path.join(stageDir, STAGE_INDICATOR)
        if os.path.isdir(stageDir) and os.path.exists(indicator):
            logger.info("Rebuild process will include stage directory [%s]", stageDir)
            stagingDirs[stageDir] = dateSuffix
    if len(stagingDirs) == 0:
        raise IOError("Unable to find any staging directories for rebuild process.")
    return stagingDirs
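The day-window arithmetic in _findRebuildDirs() can be sketched in isolation. This is a hypothetical helper for illustration, not part of Cedar Backup itself:

```python
def rebuildWindowDays(startingWeekday, todayWeekday):
    """Number of days the rebuild should look back, inclusive of today.

    Weekdays use the datetime convention: Monday=0 ... Sunday=6.
    """
    if todayWeekday >= startingWeekday:
        return todayWeekday - startingWeekday + 1
    return 7 - (startingWeekday - todayWeekday) + 1

# Week starts Monday (0); today is Wednesday (2): Mon, Tue, Wed = 3 days.
print(rebuildWindowDays(0, 2))
```

Note the wrap-around branch: if the week starts on Friday (4) and today is Tuesday (1), the window covers Friday through Tuesday, i.e. five days.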

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.capacity.LocalConfig-class.html
    Package CedarBackup2 :: Package extend :: Module capacity :: Class LocalConfig

    Class LocalConfig


    object --+
             |
            LocalConfig
    

    Class representing this extension's configuration document.

    This is not a general-purpose configuration object like the main Cedar Backup configuration object. Instead, it just knows how to parse and emit specific configuration values to this extension. Third parties who need to read and write configuration related to this extension should access it through the constructor, validate and addConfig methods.


    Note: Lists within this class are "unordered" for equality comparisons.

Instance Methods

• __init__(self, xmlData=None, xmlPath=None, validate=True) - Initializes a configuration object.
• __repr__(self) - Official string representation for class instance.
• __str__(self) - Informal string representation for class instance.
• __cmp__(self, other) - Definition of equals operator for this class.
• validate(self) - Validates configuration represented by the object.
• addConfig(self, xmlDom, parentNode) - Adds a <capacity> configuration section as the next child of a parent.
• _setCapacity(self, value) - Property target used to set the capacity configuration value.
• _getCapacity(self) - Property target used to get the capacity configuration value.
• _parseXmlData(self, xmlData) - Internal method to parse an XML string into the object.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Static Methods

• _parseCapacity(parentNode) - Parses a capacity configuration section.
• _readPercentageQuantity(parent, name) - Read a percentage quantity value from an XML document.
• _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity) - Adds a text node as the next child of a parent, to contain a percentage quantity.

Properties

• capacity - Capacity configuration in terms of a CapacityConfig object.

Inherited from object: __class__

Method Details

    __init__(self, xmlData=None, xmlPath=None, validate=True)
    (Constructor)


    Initializes a configuration object.

    If you initialize the object without passing either xmlData or xmlPath then configuration will be empty and will be invalid until it is filled in properly.

    No reference to the original XML data or original path is saved off by this class. Once the data has been parsed (successfully or not) this original information is discarded.

    Unless the validate argument is False, the LocalConfig.validate method will be called (with its default arguments) against configuration after successfully parsing any passed-in XML. Keep in mind that even if validate is False, it might not be possible to parse the passed-in XML document if lower-level validations fail.

    Parameters:
    • xmlData (String data.) - XML data representing configuration.
    • xmlPath (Absolute path to a file on disk.) - Path to an XML file on disk.
    • validate (Boolean true/false.) - Validate the document after parsing it.
    Raises:
    • ValueError - If both xmlData and xmlPath are passed-in.
    • ValueError - If the XML data in xmlData or xmlPath cannot be parsed.
    • ValueError - If the parsed configuration document is not valid.
    Overrides: object.__init__

    Note: It is strongly suggested that the validate option always be set to True (the default) unless there is a specific need to read in invalid configuration from disk.

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    validate(self)


Validates configuration represented by the object. There must be either a percentage or a byte capacity, but not both.

    Raises:
    • ValueError - If one of the validations fails.
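As a rough sketch of the documented rule (hypothetical code, not the extension's actual implementation), the check amounts to requiring exactly one of the two quantities:

```python
def validateCapacity(maxPercentage, minBytes):
    # Exactly one of max percentage or min bytes must be filled in.
    if maxPercentage is None and minBytes is None:
        raise ValueError("Must provide either max percentage or min bytes.")
    if maxPercentage is not None and minBytes is not None:
        raise ValueError("Cannot provide both max percentage and min bytes.")

validateCapacity(95.0, None)   # valid: percentage only
validateCapacity(None, 1024)   # valid: byte capacity only
```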

    addConfig(self, xmlDom, parentNode)


    Adds a <capacity> configuration section as the next child of a parent.

    Third parties should use this function to write configuration related to this extension.

    We add the following fields to the document:

      maxPercentage  //cb_config/capacity/max_percentage
      minBytes       //cb_config/capacity/min_bytes
    
    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent that the section should be appended to.
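For illustration, an equivalent document structure can be produced directly with xml.dom.minidom. This is a sketch that bypasses LocalConfig entirely; only the element names come from the fields listed above:

```python
from xml.dom import minidom

# Build <cb_config><capacity><max_percentage>95.0</max_percentage></capacity></cb_config>
impl = minidom.getDOMImplementation()
xmlDom = impl.createDocument(None, "cb_config", None)
capacity = xmlDom.createElement("capacity")
xmlDom.documentElement.appendChild(capacity)
maxPercentage = xmlDom.createElement("max_percentage")
maxPercentage.appendChild(xmlDom.createTextNode("95.0"))
capacity.appendChild(maxPercentage)
print(xmlDom.documentElement.toxml())
```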

    _setCapacity(self, value)


    Property target used to set the capacity configuration value. If not None, the value must be a CapacityConfig object.

    Raises:
    • ValueError - If the value is not a CapacityConfig

    _parseXmlData(self, xmlData)


    Internal method to parse an XML string into the object.

    This method parses the XML document into a DOM tree (xmlDom) and then calls a static method to parse the capacity configuration section.

    Parameters:
    • xmlData (String data) - XML data to be parsed
    Raises:
    • ValueError - If the XML cannot be successfully parsed.

    _parseCapacity(parentNode)
    Static Method


    Parses a capacity configuration section.

    We read the following fields:

      maxPercentage  //cb_config/capacity/max_percentage
      minBytes       //cb_config/capacity/min_bytes
    
    Parameters:
    • parentNode - Parent node to search beneath.
    Returns:
    CapacityConfig object or None if the section does not exist.
    Raises:
    • ValueError - If some filled-in value is invalid.

    _readPercentageQuantity(parent, name)
    Static Method


    Read a percentage quantity value from an XML document.

    Parameters:
    • parent - Parent node to search beneath.
    • name - Name of node to search for.
    Returns:
    Percentage quantity parsed from XML document

    _addPercentageQuantity(xmlDom, parentNode, nodeName, percentageQuantity)
    Static Method


    Adds a text node as the next child of a parent, to contain a percentage quantity.

    If the percentageQuantity is None, then no node will be created.

    Parameters:
    • xmlDom - DOM tree as from impl.createDocument().
    • parentNode - Parent node to create child for.
    • nodeName - Name of the new container node.
    • percentageQuantity - PercentageQuantity object to put into the XML document
    Returns:
    Reference to the newly-created node.

Property Details

    capacity

    Capacity configuration in terms of a CapacityConfig object.

    Get Method:
    _getCapacity(self) - Property target used to get the capacity configuration value.
    Set Method:
    _setCapacity(self, value) - Property target used to set the capacity configuration value.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.tools.span.SpanOptions-class.html
    Package CedarBackup2 :: Package tools :: Module span :: Class SpanOptions

    Class SpanOptions


     object --+    
              |    
    cli.Options --+
                  |
                 SpanOptions
    

    Tool-specific command-line options.

    Most of the cback command-line options are exactly what we need here -- logfile path, permissions, verbosity, etc. However, we need to make a few tweaks since we don't accept any actions.

    Also, a few extra command line options that we accept are really ignored underneath. I just don't care about that for a tool like this.

Instance Methods

• validate(self) - Validates command-line options represented by the object.

Inherited from cli.Options: __cmp__, __init__, __repr__, __str__, buildArgumentList, buildArgumentString

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

Inherited from cli.Options: actions, config, debug, diagnostics, full, help, logfile, managed, managedOnly, mode, output, owner, quiet, stacktrace, verbose, version

Inherited from object: __class__

Method Details

    validate(self)


    Validates command-line options represented by the object. There are no validations here, because we don't use any actions.

    Raises:
    • ValueError - If one of the validations fails.
    Overrides: cli.Options.validate

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.stage-module.html

    Module stage


    Functions

    executeStage

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.subversion.RepositoryDir-class.html
    Package CedarBackup2 :: Package extend :: Module subversion :: Class RepositoryDir

    Class RepositoryDir


    object --+
             |
            RepositoryDir
    

    Class representing Subversion repository directory.

    A repository directory is a directory that contains one or more Subversion repositories.

    The following restrictions exist on data in this class:

    • The directory path must be absolute.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The compress mode must be one of the values in VALID_COMPRESS_MODES.

    The repository type value is kept around just for reference. It doesn't affect the behavior of the backup.

    Relative exclusions are allowed here. However, there is no configured ignore file, because repository dir backups are not recursive.

Instance Methods

• __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None) - Constructor for the RepositoryDir class.
• __repr__(self) - Official string representation for class instance.
• __str__(self) - Informal string representation for class instance.
• __cmp__(self, other) - Definition of equals operator for this class.
• _setRepositoryType(self, value) - Property target used to set the repository type.
• _getRepositoryType(self) - Property target used to get the repository type.
• _setDirectoryPath(self, value) - Property target used to set the directory path.
• _getDirectoryPath(self) - Property target used to get the directory path.
• _setCollectMode(self, value) - Property target used to set the collect mode.
• _getCollectMode(self) - Property target used to get the collect mode.
• _setCompressMode(self, value) - Property target used to set the compress mode.
• _getCompressMode(self) - Property target used to get the compress mode.
• _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.
• _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
• _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.
• _getExcludePatterns(self) - Property target used to get the exclude patterns list.

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties

• directoryPath - Absolute path of the Subversion parent directory.
• collectMode - Overridden collect mode for this repository.
• compressMode - Overridden compress mode for this repository.
• repositoryType - Type of this repository, for reference.
• relativeExcludePaths - List of relative paths to exclude.
• excludePatterns - List of regular expression patterns to exclude.

Inherited from object: __class__

Method Details

    __init__(self, repositoryType=None, directoryPath=None, collectMode=None, compressMode=None, relativeExcludePaths=None, excludePatterns=None)
    (Constructor)


    Constructor for the RepositoryDir class.

    Parameters:
    • repositoryType - Type of repository, for reference
    • directoryPath - Absolute path of the Subversion parent directory
    • collectMode - Overridden collect mode for this directory.
    • compressMode - Overridden compression mode for this directory.
    • relativeExcludePaths - List of relative paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)


    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)


    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)


    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setRepositoryType(self, value)


    Property target used to set the repository type. There is no validation; this value is kept around just for reference.

    _setDirectoryPath(self, value)


    Property target used to set the directory path. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)


    Property target used to set the collect mode. If not None, the mode must be one of the values in VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setCompressMode(self, value)


    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.
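The setter/getter pairs documented above follow a common property-target pattern. A minimal sketch of that pattern for compressMode (the VALID_COMPRESS_MODES values shown are an assumption for illustration, not taken from the module):

```python
VALID_COMPRESS_MODES = ["none", "gzip", "bzip2"]  # assumed values for this sketch

class RepositoryDirSketch(object):
    """Minimal illustration of the property-target pattern."""

    def __init__(self):
        self._compressMode = None

    def _setCompressMode(self, value):
        # If not None, the mode must be one of the valid compress modes.
        if value is not None and value not in VALID_COMPRESS_MODES:
            raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES)
        self._compressMode = value

    def _getCompressMode(self):
        return self._compressMode

    compressMode = property(_getCompressMode, _setCompressMode,
                            doc="Overridden compress mode for this repository.")
```

Assigning to compressMode routes through _setCompressMode, so invalid values raise ValueError at assignment time.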

    _setRelativeExcludePaths(self, value)


    Property target used to set the relative exclude paths list. Elements do not have to exist on disk at the time of assignment.


Property Details

    directoryPath

    Absolute path of the Subversion parent directory.

    Get Method:
    _getDirectoryPath(self) - Property target used to get the repository path.
    Set Method:
    _setDirectoryPath(self, value) - Property target used to set the directory path.

    collectMode

    Overridden collect mode for this repository.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    compressMode

    Overridden compress mode for this repository.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    repositoryType

    Type of this repository, for reference.

    Get Method:
    _getRepositoryType(self) - Property target used to get the repository type.
    Set Method:
    _setRepositoryType(self, value) - Property target used to set the repository type.

    relativeExcludePaths

    List of relative paths to exclude.

    Get Method:
    _getRelativeExcludePaths(self) - Property target used to get the relative exclude paths list.
    Set Method:
    _setRelativeExcludePaths(self, value) - Property target used to set the relative exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.postgresql-module.html
    Package CedarBackup2 :: Package extend :: Module postgresql

    Module postgresql


    Provides an extension to back up PostgreSQL databases.

This is a Cedar Backup extension used to back up PostgreSQL databases via the Cedar Backup command line. It requires a new configuration section <postgresql> and is intended to be run either immediately before or immediately after the standard collect action. Aside from its own configuration, it requires the options and collect configuration sections in the standard Cedar Backup configuration file.

The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate voodoo in the pg_hba.conf file.

    Note that this code always produces a full backup. There is currently no facility for making incremental backups.

You should always make /etc/cback.conf unreadable to non-root users once you place postgresql configuration into it, since that configuration will contain information about available PostgreSQL databases and usernames.

    Use of this extension may expose usernames in the process listing (via ps) when the backup is running if the username is specified in the configuration.


    Authors:
    Kenneth J. Pronovici <pronovic@ieee.org>, Antoine Beaupre <anarcat@koumbit.org>
Classes

• PostgresqlConfig - Class representing PostgreSQL configuration.
• LocalConfig - Class representing this extension's configuration document.

Functions

• executeAction(configPath, options, config) - Executes the PostgreSQL backup action.
• _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None) - Backs up an individual PostgreSQL database, or all databases.
• _getOutputFile(targetDir, database, compressMode) - Opens the output file used for saving the PostgreSQL dump.
• backupDatabase(user, backupFile, database=None) - Backs up an individual PostgreSQL database, or all databases.

Variables

• logger = logging.getLogger("CedarBackup2.log.extend.postgresql")
• POSTGRESQLDUMP_COMMAND = ['pg_dump']
• POSTGRESQLDUMPALL_COMMAND = ['pg_dumpall']
• __package__ = 'CedarBackup2.extend'
Function Details

    executeAction(configPath, options, config)


    Executes the PostgreSQL backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If a backup could not be written for some reason.

    _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None)


    Backs up an individual PostgreSQL database, or all databases.

    This internal method wraps the public method and adds some functionality, like figuring out a filename, etc.

    Parameters:
    • targetDir - Directory into which backups should be written.
    • compressMode - Compress mode to be used for backed-up files.
    • user - User to use for connecting to the database.
    • backupUser - User to own resulting file.
    • backupGroup - Group to own resulting file.
    • database - Name of database, or None for all databases.
    Returns:
    Name of the generated backup file.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the PostgreSQL dump.

    _getOutputFile(targetDir, database, compressMode)


    Opens the output file used for saving the PostgreSQL dump.

The filename is either "postgresqldump.txt" or "postgresqldump-<database>.txt". The ".gz" or ".bz2" extension is added when the compress mode calls for gzip or bzip2 compression.

    Parameters:
    • targetDir - Target directory to write file in.
    • database - Name of the database (if any)
    • compressMode - Compress mode to be used for backed-up files.
    Returns:
    Tuple of (Output file object, filename)
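The documented naming rule can be sketched as a small helper. This is hypothetical illustration code mirroring the filenames described above, not the module's actual implementation:

```python
import os

def dumpFilename(targetDir, database=None, compressMode="none"):
    # "postgresqldump.txt" or "postgresqldump-<database>.txt", plus a
    # ".gz" or ".bz2" suffix when compression is in effect.
    name = "postgresqldump.txt" if database is None else "postgresqldump-%s.txt" % database
    if compressMode == "gzip":
        name += ".gz"
    elif compressMode == "bzip2":
        name += ".bz2"
    return os.path.join(targetDir, name)

print(dumpFilename("/tmp", "mydb", "gzip"))
```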

    backupDatabase(user, backupFile, database=None)


    Backs up an individual PostgreSQL database, or all databases.

    This function backs up either a named local PostgreSQL database or all local PostgreSQL databases, using the passed in user for connectivity. This is always a full backup. There is no facility for incremental backups.

    The backup data will be written into the passed-in back file. Normally, this would be an object as returned from open(), but it is possible to use something like a GzipFile to write compressed output. The caller is responsible for closing the passed-in backup file.

    Parameters:
    • user (String representing PostgreSQL username.) - User to use for connecting to the database.
• backupFile (Python file object as from open() or file().) - File to use for writing the backup.
    • database (String representing database name, or None for all databases.) - Name of the database to be backed up.
    Raises:
    • ValueError - If some value is missing or invalid.
    • IOError - If there is a problem executing the PostgreSQL dump.

    Note: Typically, you would use the root user to back up all databases.
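A minimal sketch of how such a dump might be driven, streaming into the caller-supplied file object. This is a hypothetical wrapper; the real extension uses its own command-execution utilities, and only the pg_dump/pg_dumpall command names come from the module documentation:

```python
import subprocess

def buildDumpCommand(user, database=None):
    # pg_dumpall for all databases, pg_dump for a single named database.
    if database is None:
        return ["pg_dumpall", "-U", user]
    return ["pg_dump", "-U", user, database]

def backupDatabaseSketch(user, backupFile, database=None):
    # Stream the dump straight into the caller-supplied file object; the
    # caller remains responsible for closing backupFile.
    args = buildDumpCommand(user, database)
    result = subprocess.call(args, stdout=backupFile)
    if result != 0:
        raise IOError("Error [%d] executing PostgreSQL dump." % result)
```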


CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions-module.html

    Module actions


    Variables


CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.extend.encrypt-module.html

    Module encrypt


    Classes

    EncryptConfig
    LocalConfig

    Functions

    executeAction

    Variables

    ENCRYPT_INDICATOR
    GPG_COMMAND
    VALID_ENCRYPT_MODES
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.testutil-module.html
    Package CedarBackup2 :: Module testutil

    Module testutil


    Provides unit-testing utilities.

    These utilities are kept here, separate from util.py, because they provide common functionality that I do not want exported "publicly" once Cedar Backup is installed on a system. They are only used for unit testing, and are only useful within the source tree.

    Many of these functions are in here because they are "good enough" for unit test work but are not robust enough to be real public functions. Others (like removedir) do what they are supposed to, but I don't want responsibility for making them available to others.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Functions

• findResources(resources, dataDirs) - Returns a dictionary of locations for various resources.
• commandAvailable(command) - Indicates whether a command is available on $PATH somewhere.
• buildPath(components) - Builds a complete path from a list of components.
• removedir(tree) - Recursively removes an entire directory.
• extractTar(tmpdir, filepath) - Extracts the indicated tar file to the indicated tmpdir.
• changeFileAge(filename, subtract=None) - Changes a file age using the os.utime function.
• getMaskAsMode() - Returns the user's current umask inverted to a mode.
• getLogin() - Returns the name of the currently-logged in user.
• failUnlessAssignRaises(testCase, exception, obj, prop, value) - Equivalent of failUnlessRaises, but used for property assignments instead.
• runningAsRoot() - Returns boolean indicating whether the effective user id is root.
• platformDebian() - Returns boolean indicating whether this is the Debian platform.
• platformMacOsX() - Returns boolean indicating whether this is the Mac OS X platform.
• platformCygwin() - Returns boolean indicating whether this is the Cygwin platform.
• platformWindows() - Returns boolean indicating whether this is the Windows platform.
• platformHasEcho() - Returns boolean indicating whether the platform has a sensible echo command.
• platformSupportsLinks() - Returns boolean indicating whether the platform supports soft-links.
• platformSupportsPermissions() - Returns boolean indicating whether the platform supports UNIX-style file permissions.
• platformRequiresBinaryRead() - Returns boolean indicating whether the platform requires binary reads.
• setupDebugLogger() - Sets up a screen logger for debugging purposes.
• setupOverrides() - Set up any platform-specific overrides that might be required.
• randomFilename(length, prefix=None, suffix=None) - Generates a random filename with the given length.
• captureOutput(c) - Captures the output (stdout, stderr) of a function or a method.
• _isPlatform(name) - Returns boolean indicating whether we're running on the indicated platform.
• availableLocales() - Returns a list of available locales on the system.
• hexFloatLiteralAllowed() - Indicates whether hex float literals are allowed by the interpreter.

Variables

• __package__ = 'CedarBackup2'
Function Details

    findResources(resources, dataDirs)


    Returns a dictionary of locations for various resources.

    Parameters:
    • resources - List of required resources.
    • dataDirs - List of data directories to search within for resources.
    Returns:
    Dictionary mapping resource name to resource path.
    Raises:
    • Exception - If some resource cannot be found.

    commandAvailable(command)


    Indicates whether a command is available on $PATH somewhere. This should work on both Windows and UNIX platforms.

    Parameters:
• command - Command to search for
    Returns:
    Boolean true/false depending on whether command is available.
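One plausible implementation of this kind of check (a sketch, not necessarily what testutil actually does):

```python
import os

def commandAvailable(command):
    # Search each directory on $PATH for an executable with the given name.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return True
    return False
```

On Windows, a fuller version would also need to try the extensions listed in %PATHEXT% (".exe", ".bat", etc.).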

    buildPath(components)


    Builds a complete path from a list of components. For instance, constructs "/a/b/c" from ["/a", "b", "c",].

    Parameters:
    • components - List of components.
    Returns:
    String path constructed from components.
    Raises:
    • ValueError - If a path cannot be encoded properly.
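The behavior can be sketched with os.path.join (illustrative only; the real function also handles the encoding concerns mentioned above):

```python
import os

def buildPath(components):
    # Join a list like ["/a", "b", "c"] into a single path "/a/b/c".
    result = components[0]
    for component in components[1:]:
        result = os.path.join(result, component)
    return result

print(buildPath(["/a", "b", "c"]))  # prints /a/b/c on POSIX platforms
```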

    removedir(tree)


Recursively removes an entire directory. This is basically taken from an example on python.org.

    Parameters:
    • tree - Directory tree to remove.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    extractTar(tmpdir, filepath)


    Extracts the indicated tar file to the indicated tmpdir.

    Parameters:
    • tmpdir - Temp directory to extract to.
    • filepath - Path to tarfile to extract.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    changeFileAge(filename, subtract=None)


    Changes a file age using the os.utime function.

    Parameters:
    • filename - File to operate on.
    • subtract - Number of seconds to subtract from the current time.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    Note: Some platforms don't seem to be able to set an age precisely. As a result, whereas we might have intended to set an age of 86400 seconds, we actually get an age of 86399.375 seconds. When util.calculateFileAge() looks at that file, it calculates an age of 0.999992766204 days, which then gets truncated down to zero whole days. The tests get very confused. To work around this, I always subtract off one additional second as a fudge factor. That way, the file age will be at least as old as requested later on.
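A sketch of the os.utime usage together with the one-second fudge factor described in the note (assumed behavior, not the verbatim implementation):

```python
import os
import time
import tempfile

def changeFileAge(filename, subtract=None):
    """Changes a file's access/modification times using os.utime."""
    if subtract is None:
        os.utime(filename, None)  # touch: set age to "now"
    else:
        # subtract one extra second as a fudge factor for platforms
        # that cannot set timestamps precisely (see the note above)
        newTime = time.time() - subtract - 1
        os.utime(filename, (newTime, newTime))

handle, path = tempfile.mkstemp()
os.close(handle)
changeFileAge(path, subtract=86400)  # make the file at least one day old
```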

    getMaskAsMode()

    source code 

    Returns the user's current umask inverted to a mode. A mode is mostly a bitwise inversion of a mask, i.e. mask 002 is mode 775.

    Returns:
    Umask converted to a mode, as an integer.
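The mask-to-mode inversion can be sketched like this; note that os.umask both sets and returns the mask, so the original value has to be restored immediately:

```python
import os

def getMaskAsMode():
    """Returns the user's current umask inverted to a mode."""
    umask = os.umask(0o777)  # os.umask sets a new mask and returns the old one...
    os.umask(umask)          # ...so immediately restore the original value
    return ~umask & 0o777    # mostly a bitwise inversion: mask 0o002 -> mode 0o775
```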

    getLogin()

    source code 

    Returns the name of the currently logged-in user. This might fail under some circumstances - but if it does, our tests would fail anyway.

    failUnlessAssignRaises(testCase, exception, obj, prop, value)

    source code 

    Equivalent of failUnlessRaises, but used for property assignments instead.

    It's nice to be able to use failUnlessRaises to check that a method call raises the exception that you expect. Unfortunately, this method can't be used to check Python property assignments, even though these property assignments are actually implemented underneath as methods.

    This function (which can be easily called by unit test classes) provides an easy way to wrap the assignment checks. It's not pretty, or as intuitive as the original check it's modeled on, but it does work.

    Let's assume you make this method call:

      testCase.failUnlessAssignRaises(ValueError, collectDir, "absolutePath", absolutePath)
    

    If you do this, a test case failure will be raised unless the assignment:

      collectDir.absolutePath = absolutePath
    

    fails with a ValueError exception. The failure message differentiates between the case where no exception was raised and the case where the wrong exception was raised.

    Parameters:
    • testCase - PyUnit test case object (i.e. self).
    • exception - Exception that is expected to be raised.
    • obj - Object whose property is to be assigned to.
    • prop - Name of the property, as a string.
    • value - Value that is to be assigned to the property.

    Note: Internally, the missed and instead variables are used rather than directly calling testCase.fail upon noticing a problem because the act of "failure" itself generates an exception that would be caught by the general except clause.

    See Also: unittest.TestCase.failUnlessRaises
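A sketch modeled on the approach described in the note above, using the missed and instead variables so that testCase.fail is only invoked outside the try block (this follows the documented contract, not necessarily the verbatim implementation):

```python
def failUnlessAssignRaises(testCase, exception, obj, prop, value):
    """Fails the test case unless assigning value to obj.prop raises exception."""
    missed = False
    instead = None
    try:
        setattr(obj, prop, value)  # dispatches to the property's setter method
        missed = True              # assignment raised no exception at all
    except exception:
        pass                       # expected exception; the check passes
    except Exception as e:
        instead = e                # wrong exception was raised
    # call fail() outside the try block, so its own exception isn't caught above
    if missed:
        testCase.fail("Assignment did not raise %s." % exception.__name__)
    if instead is not None:
        testCase.fail("Assignment raised %s, not %s."
                      % (type(instead).__name__, exception.__name__))
```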

    runningAsRoot()

    source code 

    Returns boolean indicating whether the effective user id is root. This is always true on platforms that have no concept of root, like Windows.

    platformHasEcho()

    source code 

    Returns boolean indicating whether the platform has a sensible echo command. On some platforms, like Windows, echo doesn't really work for tests.

    platformSupportsLinks()

    source code 

    Returns boolean indicating whether the platform supports soft-links. Some platforms, like Windows, do not support links, and tests need to take this into account.

    platformSupportsPermissions()

    source code 

    Returns boolean indicating whether the platform supports UNIX-style file permissions. Some platforms, like Windows, do not support permissions, and tests need to take this into account.

    platformRequiresBinaryRead()

    source code 

    Returns boolean indicating whether the platform requires binary reads. Some platforms, like Windows, require a special flag to read binary data from files.

    setupDebugLogger()

    source code 

    Sets up a screen logger for debugging purposes.

    Normally, the CLI functionality configures the logger so that things get written to the right place. However, for debugging it's sometimes nice to just get everything -- debug information and output -- dumped to the screen. This function takes care of that.

    setupOverrides()

    source code 

    Set up any platform-specific overrides that might be required.

    When packages are built, this is done manually (hardcoded) in customize.py and the overrides are set up in cli.cli(). This way, no runtime checks need to be done. This is safe, because the package maintainer knows exactly which platform (Debian or not) the package is being built for.

    Unit tests are different, because they might be run anywhere. So, we attempt to make a guess about the platform using platformDebian(), and use that to set up the custom overrides so that platform-specific unit tests continue to work.

    randomFilename(length, prefix=None, suffix=None)

    source code 

    Generates a random filename with the given length.

    Parameters:
    • length - Length of filename.
    • prefix - Optional prefix for the filename.
    • suffix - Optional suffix for the filename.
    Returns:
    Random filename.
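A sketch of one plausible implementation, assuming the prefix and suffix are simply concatenated around a random lowercase name of the given length:

```python
import random
import string

def randomFilename(length, prefix=None, suffix=None):
    """Generates a random filename with the given length (before any affixes)."""
    name = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    if prefix is not None:
        name = prefix + name
    if suffix is not None:
        name = name + suffix
    return name
```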

    captureOutput(c)

    source code 

    Captures the output (stdout, stderr) of a function or a method.

    Some of our functions don't do anything other than just print output. We need a way to test these functions (at least nominally) but we don't want any of the output spoiling the test suite output.

    This function just creates a dummy file descriptor that can be used as a target by the callable function, rather than stdout or stderr.

    Parameters:
    • c - Callable function or method.
    Returns:
    Output of function, as one big string.

    Note: This method assumes that callable doesn't take any arguments besides keyword argument fd to specify the file descriptor.
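A minimal sketch of the dummy-file-descriptor idea using StringIO; printVersion is a hypothetical callable following the fd keyword convention the note describes:

```python
import sys
from io import StringIO

def captureOutput(c):
    """Captures the output of a callable that accepts an fd keyword argument."""
    fd = StringIO()   # dummy file object standing in for stdout/stderr
    c(fd=fd)
    return fd.getvalue()

def printVersion(fd=sys.stdout):
    """Hypothetical print-only function of the kind the tests need to capture."""
    fd.write("Cedar Backup version 2.26.5\n")
```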

    _isPlatform(name)

    source code 

    Returns boolean indicating whether we're running on the indicated platform.

    Parameters:
    • name - Platform name to check, currently one of "windows" or "macosx"

    availableLocales()

    source code 

    Returns a list of available locales on the system

    Returns:
    List of string locale names

    hexFloatLiteralAllowed()

    source code 

    Indicates whether hex float literals are allowed by the interpreter.

    As far back as 2004, some Python documentation indicated that octal and hex notation applied only to integer literals. However, prior to Python 2.5, it was legal to construct a float with an argument like 0xAC on some platforms. This check provides an indication of whether the current interpreter supports that behavior.

    This check exists so that unit tests can continue to test the same thing as always for pre-2.5 interpreters (i.e. making sure backwards compatibility doesn't break) while still continuing to work for later interpreters.

    The returned value is True if hex float literals are allowed, False otherwise.
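The exact probe the function uses is not shown here; one plausible sketch of such a feature check is a try/except around a hex-style float construction:

```python
def hexFloatLiteralAllowed():
    """True if the interpreter accepts a hex-style argument when building a float."""
    try:
        # succeeded on some pre-2.5 interpreters; modern ones raise ValueError
        return float("0xAC") == 172.0
    except ValueError:
        return False
```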


    customize

    Module customize


    Functions

    customizeOverrides

    Variables

    DEBIAN_CDRECORD
    DEBIAN_MKISOFS
    PLATFORM
    __package__
    logger

    CedarBackup2.config.ExtensionsConfig
    Package CedarBackup2 :: Module config :: Class ExtensionsConfig

    Class ExtensionsConfig

    source code

    object --+
             |
            ExtensionsConfig
    

    Class representing Cedar Backup extensions configuration.

    Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. For instance, a hypothetical third party might write extension code to collect database repository data. If they write a properly-formatted extension function, they can use the extension configuration to map a command-line Cedar Backup action (i.e. "database") to their function.

    The following restrictions exist on data in this class:

    • If set, the order mode must be one of the values in VALID_ORDER_MODES
    • The actions list must be a list of ExtendedAction objects.
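The action-to-function mapping described above can be sketched like this; ExtendedActionSketch, "database", "thirdparty.dbext", and dumpDatabase are all invented names for illustration:

```python
# Hypothetical sketch of an extended action mapping a command-line action
# name to third-party extension code.
class ExtendedActionSketch(object):
    def __init__(self, name, module, function):
        self.name = name          # command-line action name, i.e. "database"
        self.module = module      # module containing the extension code
        self.function = function  # extension function implementing the action

actions = [ExtendedActionSketch("database", "thirdparty.dbext", "dumpDatabase")]
byName = dict((a.name, a) for a in actions)
```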
    Instance Methods [hide private]
     
    __init__(self, actions=None, orderMode=None)
    Constructor for the ExtensionsConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setOrderMode(self, value)
    Property target used to set the order mode.
    source code
     
    _getOrderMode(self)
    Property target used to get the order mode.
    source code
     
    _setActions(self, value)
    Property target used to set the actions list.
    source code
     
    _getActions(self)
    Property target used to get the actions list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      orderMode
    Order mode for extensions, to control execution ordering.
      actions
    List of extended actions.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, actions=None, orderMode=None)
    (Constructor)

    source code 

    Constructor for the ExtensionsConfig class.

    Parameters:
    • actions - List of extended actions
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setOrderMode(self, value)

    source code 

    Property target used to set the order mode. The value must be one of VALID_ORDER_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setActions(self, value)

    source code 

    Property target used to set the actions list. Either the value must be None or each element must be an ExtendedAction.

    Raises:
    • ValueError - If the value is not a ExtendedAction

    Property Details [hide private]

    orderMode

    Order mode for extensions, to control execution ordering.

    Get Method:
    _getOrderMode(self) - Property target used to get the order mode.
    Set Method:
    _setOrderMode(self, value) - Property target used to set the order mode.

    actions

    List of extended actions.

    Get Method:
    _getActions(self) - Property target used to get the actions list.
    Set Method:
    _setActions(self, value) - Property target used to set the actions list.

    CedarBackup2.writers.cdwriter.CdWriter
    Package CedarBackup2 :: Package writers :: Module cdwriter :: Class CdWriter

    Class CdWriter

    source code

    object --+
             |
            CdWriter
    

    Class representing a device that knows how to write CD media.

    Summary

    This is a class representing a device that knows how to write CD media. It provides common operations for the device, such as ejecting the media, writing an ISO image to the media, or checking for the current media capacity. It also provides a place to store device attributes, such as whether the device supports writing multisession discs, etc.

    This class is implemented in terms of the eject and cdrecord programs, both of which should be available on most UN*X platforms.

    Image Writer Interface

    The following methods make up the "image writer" interface shared with other kinds of writers (such as DVD writers):

      __init__
      initializeImage()
      addImageEntry()
      writeImage()
      setImageNewDisc()
      retrieveCapacity()
      getEstimatedImageSize()
    

    Only these methods will be used by other Cedar Backup functionality that expects a compatible image writer.

    The media attribute is also assumed to be available.

    Media Types

    This class knows how to write to two different kinds of media, represented by the following constants:

    • MEDIA_CDR_74: 74-minute CD-R media (650 MB capacity)
    • MEDIA_CDRW_74: 74-minute CD-RW media (650 MB capacity)
    • MEDIA_CDR_80: 80-minute CD-R media (700 MB capacity)
    • MEDIA_CDRW_80: 80-minute CD-RW media (700 MB capacity)

    Most hardware can read and write both 74-minute and 80-minute CD-R and CD-RW media. Some older drives may only be able to write CD-R media. The difference between the two is that CD-RW media can be rewritten (erased), while CD-R media cannot be.

    I do not support any other configurations for a couple of reasons. The first is that I've never tested any other kind of media. The second is that anything other than 74 or 80 minute is apparently non-standard.

    Device Attributes vs. Media Attributes

    A given writer instance has two different kinds of attributes associated with it, which I call device attributes and media attributes. Device attributes are things which can be determined without looking at the media, such as whether the drive supports writing multisession disks or has a tray. Media attributes are attributes which vary depending on the state of the media, such as the remaining capacity on a disc. In general, device attributes are available via instance variables and are constant over the life of an object, while media attributes can be retrieved through method calls.

    Talking to Hardware

    This class needs to talk to CD writer hardware in two different ways: through cdrecord to actually write to the media, and through the filesystem to do things like open and close the tray.

    Historically, CdWriter has interacted with cdrecord using the scsiId attribute, and with most other utilities using the device attribute. This changed somewhat in Cedar Backup 2.9.0.

    When Cedar Backup was first written, the only way to interact with cdrecord was by using a SCSI device id. IDE devices were mapped to pseudo-SCSI devices through the kernel. Later, extended SCSI "methods" arrived, and it became common to see ATA:1,0,0 or ATAPI:0,0,0 as a way to address IDE hardware. By late 2006, ATA and ATAPI had apparently been deprecated in favor of just addressing the IDE device directly by name, i.e. /dev/cdrw.

    Because of this latest development, it no longer makes sense to require a CdWriter to be created with a SCSI id -- there might not be one. So, the passed-in SCSI id is now optional. Also, there is now a hardwareId attribute. This attribute is filled in with either the SCSI id (if provided) or the device (otherwise). The hardware id is the value that will be passed to cdrecord in the dev= argument.
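The hardwareId fallback described in this paragraph can be sketched as follows (an illustration of the documented rule, not the real CdWriter class):

```python
class HardwareIdSketch(object):
    """Illustrates only the hardware-id resolution rule described above."""
    def __init__(self, device, scsiId=None):
        self.device = device    # filesystem device path, i.e. /dev/cdrw
        self.scsiId = scsiId    # optional SCSI id, i.e. ATA:1,0,0
        # value passed to cdrecord in the dev= argument: SCSI id if
        # provided, otherwise the device path
        self.hardwareId = scsiId if scsiId is not None else device
```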

    Testing

    It's rather difficult to test this code in an automated fashion, even if you have access to a physical CD writer drive. It's even more difficult to test it if you are running on some build daemon (think of a Debian autobuilder) which can't be expected to have any hardware or any media that you could write to.

    Because of this, much of the implementation below is in terms of static methods that are supposed to take defined actions based on their arguments. Public methods are then implemented in terms of a series of calls to simplistic static methods. This way, we can test as much as possible of the functionality via testing the static methods, while hoping that if the static methods are called appropriately, things will work properly. It's not perfect, but it's much better than no testing at all.

    Instance Methods [hide private]
     
    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=1, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    Initializes a CD writer object.
    source code
     
    isRewritable(self)
    Indicates whether the media is rewritable per configuration.
    source code
     
    _retrieveProperties(self)
    Retrieves properties for a device from cdrecord.
    source code
     
    retrieveCapacity(self, entireDisc=False, useMulti=True)
    Retrieves capacity for the current media in terms of a MediaCapacity object.
    source code
     
    _getBoundaries(self, entireDisc=False, useMulti=True)
    Gets the ISO boundaries for the media.
    source code
     
    openTray(self)
    Opens the device's tray and leaves it open.
    source code
     
    closeTray(self)
    Closes the device's tray.
    source code
     
    refreshMedia(self)
    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.
    source code
     
    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)
    Writes an ISO image to the media in the device.
    source code
     
    _blankMedia(self)
    Blanks the media in the device, if the media is rewritable.
    source code
     
    initializeImage(self, newDisc, tmpdir, mediaLabel=None)
    Initializes the writer's associated ISO image.
    source code
     
    addImageEntry(self, path, graftPoint)
    Adds a filepath entry to the writer's associated ISO image.
    source code
     
    setImageNewDisc(self, newDisc)
    Resets (overrides) the newDisc flag on the internal image.
    source code
     
    getEstimatedImageSize(self)
    Gets the estimated size of the image associated with the writer.
    source code
     
    _getDevice(self)
    Property target used to get the device value.
    source code
     
    _getScsiId(self)
    Property target used to get the SCSI id value.
    source code
     
    _getHardwareId(self)
    Property target used to get the hardware id value.
    source code
     
    _getDriveSpeed(self)
    Property target used to get the drive speed.
    source code
     
    _getMedia(self)
    Property target used to get the media description.
    source code
     
    _getDeviceType(self)
    Property target used to get the device type.
    source code
     
    _getDeviceVendor(self)
    Property target used to get the device vendor.
    source code
     
    _getDeviceId(self)
    Property target used to get the device id.
    source code
     
    _getDeviceBufferSize(self)
    Property target used to get the device buffer size.
    source code
     
    _getDeviceSupportsMulti(self)
    Property target used to get the device-support-multi flag.
    source code
     
    _getDeviceHasTray(self)
    Property target used to get the device-has-tray flag.
    source code
     
    _getDeviceCanEject(self)
    Property target used to get the device-can-eject flag.
    source code
     
    _getRefreshMediaDelay(self)
    Property target used to get the configured refresh media delay, in seconds.
    source code
     
    _getEjectDelay(self)
    Property target used to get the configured eject delay, in seconds.
    source code
     
    unlockTray(self)
    Unlocks the device's tray.
    source code
     
    _createImage(self)
    Creates an ISO image based on configuration in self._image.
    source code
     
    _writeImage(self, imagePath, writeMulti, newDisc)
    Write an ISO image to disc using cdrecord.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Static Methods [hide private]
     
    _calculateCapacity(media, boundaries)
    Calculates capacity for the media in terms of boundaries.
    source code
     
    _parsePropertiesOutput(output)
    Parses the output from a cdrecord properties command.
    source code
     
    _parseBoundariesOutput(output)
    Parses the output from a cdrecord capacity command.
    source code
     
    _buildOpenTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
     
    _buildCloseTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
     
    _buildPropertiesArgs(hardwareId)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildBoundariesArgs(hardwareId)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildBlankArgs(hardwareId, driveSpeed=None)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True)
    Builds a list of arguments to be passed to a cdrecord command.
    source code
     
    _buildUnlockTrayArgs(device)
    Builds a list of arguments to be passed to an eject command.
    source code
    Properties [hide private]
      device
    Filesystem device name for this writer.
      scsiId
    SCSI id for the device, in the form [<method>:]scsibus,target,lun.
      hardwareId
    Hardware id for this writer, either SCSI id or device path.
      driveSpeed
    Speed at which the drive writes.
      media
    Definition of media that is expected to be in the device.
      deviceType
    Type of the device, as returned from cdrecord -prcap.
      deviceVendor
    Vendor of the device, as returned from cdrecord -prcap.
      deviceId
    Device identification, as returned from cdrecord -prcap.
      deviceBufferSize
    Size of the device's write buffer, in bytes.
      deviceSupportsMulti
    Indicates whether device supports multisession discs.
      deviceHasTray
    Indicates whether the device has a media tray.
      deviceCanEject
    Indicates whether the device supports ejecting its media.
      refreshMediaDelay
    Refresh media delay, in seconds.
      ejectDelay
    Eject delay, in seconds.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, device, scsiId=None, driveSpeed=None, mediaType=1, noEject=False, refreshMediaDelay=0, ejectDelay=0, unittest=False)
    (Constructor)

    source code 

    Initializes a CD writer object.

    The current user must have write access to the device at the time the object is instantiated, or an exception will be thrown. However, no media-related validation is done, and in fact there is no need for any media to be in the drive until one of the other media attribute-related methods is called.

    The various instance variables such as deviceType, deviceVendor, etc. might be None, if we're unable to parse this specific information from the cdrecord output. This information is just for reference.

    The SCSI id is optional, but the device path is required. If the SCSI id is passed in, then the hardware id attribute will be taken from the SCSI id. Otherwise, the hardware id will be taken from the device.

    If cdrecord improperly detects whether your writer device has a tray and can be safely opened and closed, then pass in noEject=True. This will override the properties, and the device will never be ejected.

    Parameters:
    • device (Absolute path to a filesystem device, i.e. /dev/cdrw) - Filesystem device associated with this writer.
    • scsiId (If provided, SCSI id in the form [<method>:]scsibus,target,lun) - SCSI id for the device (optional).
    • driveSpeed (Use 2 for 2x device, etc. or None to use device default.) - Speed at which the drive writes.
    • mediaType (One of the valid media type as discussed above.) - Type of the media that is assumed to be in the drive.
    • noEject (Boolean true/false) - Overrides properties to indicate that the device does not support eject.
    • refreshMediaDelay (Number of seconds, an integer >= 0) - Refresh media delay to use, if any
    • ejectDelay (Number of seconds, an integer >= 0) - Eject delay to use, if any
    • unittest (Boolean true/false) - Turns off certain validations, for use in unit testing.
    Raises:
    • ValueError - If the device is not valid for some reason.
    • ValueError - If the SCSI id is not in a valid form.
    • ValueError - If the drive speed is not an integer >= 1.
    • IOError - If device properties could not be read for some reason.
    Overrides: object.__init__

    Note: The unittest parameter should never be set to True outside of Cedar Backup code. It is intended for use in unit testing Cedar Backup internals and has no other sensible purpose.

    _retrieveProperties(self)

    source code 

    Retrieves properties for a device from cdrecord.

    The results are returned as a tuple of the object device attributes as returned from _parsePropertiesOutput: (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject).

    Returns:
    Results tuple as described above.
    Raises:
    • IOError - If there is a problem talking to the device.

    retrieveCapacity(self, entireDisc=False, useMulti=True)

    source code 

    Retrieves capacity for the current media in terms of a MediaCapacity object.

    If entireDisc is passed in as True the capacity will be for the entire disc, as if it were to be rewritten from scratch. If the drive does not support writing multisession discs or if useMulti is passed in as False, the capacity will also be as if the disc were to be rewritten from scratch, but the indicated boundaries value will be None. The same will happen if the disc cannot be read for some reason. Otherwise, the capacity (including the boundaries) will represent whatever space remains on the disc to be filled by future sessions.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    • useMulti (Boolean true/false) - Indicates whether a multisession disc should be assumed, if possible.
    Returns:
    MediaCapacity object describing the capacity of the media.
    Raises:
    • IOError - If the media could not be read for some reason.

    _getBoundaries(self, entireDisc=False, useMulti=True)

    source code 

    Gets the ISO boundaries for the media.

    If entireDisc is passed in as True the boundaries will be None, as if the disc were to be rewritten from scratch. If the drive does not support writing multisession discs, the returned value will be None. The same will happen if the disc can't be read for some reason. Otherwise, the returned value will represent the boundaries of the disc's current contents.

    The results are returned as a tuple of (lower, upper) as needed by the IsoImage class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however.

    Parameters:
    • entireDisc (Boolean true/false) - Indicates whether to return capacity for entire disc.
    • useMulti (Boolean true/false) - Indicates whether a multisession disc should be assumed, if possible.
    Returns:
    Boundaries tuple or None, as described above.
    Raises:
    • IOError - If the media could not be read for some reason.

    _calculateCapacity(media, boundaries)
    Static Method

    source code 

    Calculates capacity for the media in terms of boundaries.

    If boundaries is None or the lower bound is 0 (zero), then the capacity will be for the entire disc minus the initial lead in. Otherwise, capacity will be as if the caller wanted to add an additional session to the end of the existing data on the disc.

    Parameters:
    • media - MediaDescription object describing the media capacity.
    • boundaries - Session boundaries as returned from _getBoundaries.
    Returns:
    MediaCapacity object describing the capacity of the media.

    openTray(self)

    source code 

    Opens the device's tray and leaves it open.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    If the writer was constructed with noEject=True, then this is a no-op.

    Starting with Debian wheezy on my backup hardware, I started seeing consistent problems with the eject command. I couldn't tell whether these problems were due to the device management system or to the new kernel (3.2.0). Initially, I saw simple eject failures, possibly because I was opening and closing the tray too quickly. I worked around that behavior with the new ejectDelay flag.

    Later, I sometimes ran into issues after writing an image to a disc: eject would give errors like "unable to eject, last error: Inappropriate ioctl for device". Various sources online (like Ubuntu bug #875543) suggested that the drive was being locked somehow, and that the workaround was to run 'eject -i off' to unlock it. Sure enough, that fixed the problem for me, so now it's a normal error-handling strategy.

    Raises:
    • IOError - If there is an error talking to the device.

    closeTray(self)

    source code 

    Closes the device's tray.

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing.

    If the writer was constructed with noEject=True, then this is a no-op.

    Raises:
    • IOError - If there is an error talking to the device.

    refreshMedia(self)

    source code 

    Opens and then immediately closes the device's tray, to refresh the device's idea of the media.

    Sometimes, a device gets confused about the state of its media. Often, all it takes to solve the problem is to eject the media and then immediately reload it. (There are also configurable eject and refresh media delays which can be applied, for situations where this makes a difference.)

    This only works if the device has a tray and supports ejecting its media. We have no way to know if the tray is currently open or closed, so we just send the appropriate command and hope for the best. If the device does not have a tray or does not support ejecting its media, then we do nothing. The configured delays still apply, though.

    Raises:
    • IOError - If there is an error talking to the device.

    writeImage(self, imagePath=None, newDisc=False, writeMulti=True)

    source code 

    Writes an ISO image to the media in the device.

    If newDisc is passed in as True, we assume that the entire disc will be overwritten, and the media will be blanked before writing it if possible (i.e. if the media is rewritable).

    If writeMulti is passed in as True, then a multisession disc will be written if possible (i.e. if the drive supports writing multisession discs).

    If imagePath is passed in as None, then the existing image configured with initializeImage will be used. Under these circumstances, the passed-in newDisc flag will be ignored.

    By default, we assume that the disc can be written multisession and that we should append to the current contents of the disc. In any case, the ISO image must be generated appropriately (i.e. must take into account any existing session boundaries, etc.)

    Parameters:
    • imagePath (String representing a path on disk) - Path to an ISO image on disk, or None to use writer's image
    • newDisc (Boolean true/false.) - Indicates whether the entire disc will be overwritten.
    • writeMulti (Boolean true/false) - Indicates whether a multisession disc should be written, if possible.
    Raises:
    • ValueError - If the image path is not absolute.
    • ValueError - If some path cannot be encoded properly.
    • IOError - If the media could not be written to for some reason.
    • ValueError - If no image is passed in and initializeImage() was not previously called

    _blankMedia(self)

    source code 

    Blanks the media in the device, if the media is rewritable.

    Raises:
    • IOError - If the media could not be written to for some reason.

    _parsePropertiesOutput(output)
    Static Method

    source code 

    Parses the output from a cdrecord properties command.

    The output parameter should be a list of strings as returned from executeCommand for a cdrecord command with arguments as from _buildPropertiesArgs. The list of strings will be parsed to yield information about the properties of the device.

    The output is expected to be a long list of strings. Unfortunately, the strings aren't in a completely regular format. However, the format of individual lines seems to be regular enough that we can look for specific values. Two kinds of parsing take place: one kind picks out specific values like the device id, device vendor, etc. The other kind just sets a boolean flag to True if a matching line is found. All of the parsing is done with regular expressions.

    Right now, pretty much nothing in the output is required, and we should parse an empty document successfully (albeit resulting in a device that can't eject, doesn't have a tray and doesn't support multisession discs). I had briefly considered erroring out if certain lines weren't found or couldn't be parsed, but that seems like a bad idea given that most of the information is just for reference.

    The results are returned as a tuple of the object device attributes: (deviceType, deviceVendor, deviceId, deviceBufferSize, deviceSupportsMulti, deviceHasTray, deviceCanEject).
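    A minimal sketch of the two parsing styles described above (value extraction versus boolean flags). The line formats and attribute names here are illustrative assumptions, not the exact expressions used in the source:

```python
import re

def parse_properties(output):
    """Parse cdrecord -prcap output; illustrative patterns only."""
    vendor_pattern = re.compile(r"Vendor_info\s*:\s*'(.*)'")    # assumed line format
    tray_pattern = re.compile(r"loading mechanism type: tray")  # assumed line format
    device_vendor = None
    device_has_tray = False
    for line in output:
        match = vendor_pattern.search(line)
        if match:
            device_vendor = match.group(1).strip()  # value-extraction style
        if tray_pattern.search(line):
            device_has_tray = True                  # boolean-flag style
    return (device_vendor, device_has_tray)
```

    Note that an empty output list parses successfully, yielding a device with no vendor and no tray, which matches the "nothing is required" behavior described above.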

    Parameters:
    • output - Output from a cdrecord -prcap command.
    Returns:
    Results tuple as described above.
    Raises:
    • IOError - If there is problem parsing the output.

    _parseBoundariesOutput(output)
    Static Method

    source code 

    Parses the output from a cdrecord capacity command.

    The output parameter should be a list of strings as returned from executeCommand for a cdrecord command with arguments as from _buildBoundaryArgs. The list of strings will be parsed to yield information about the capacity of the media in the device.

    Basically, we expect the list of strings to include just one line containing a pair of values. There isn't supposed to be whitespace, but we allow it anyway in the regular expression. Any lines beyond the one line we parse are completely ignored. It is a good idea to ignore stderr when executing the cdrecord command that generates output for this method, because cdrecord sometimes emits kernel warnings alongside the actual output.

    The results are returned as a tuple of (lower, upper) as needed by the IsoImage class. Note that these values are in terms of ISO sectors, not bytes. Clients should generally consider the boundaries value opaque, however.
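    A sketch of this parsing, following the behavior described above and in the note below (unparseable output yields None). The exact regular expression is an assumption:

```python
import re

def parse_boundaries(output):
    """Parse cdrecord -msinfo output into (lower, upper), or None if unparseable."""
    pattern = re.compile(r"^\s*(\d+)\s*,\s*(\d+)\s*$")  # pair of values; whitespace tolerated
    for line in output:
        match = pattern.match(line)
        if match:
            # Values are in terms of ISO sectors, not bytes.
            return (int(match.group(1)), int(match.group(2)))
    return None  # unparseable output yields None, per the note below
```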

    Parameters:
    • output - Output from a cdrecord -msinfo command.
    Returns:
    Boundaries tuple as described above.
    Raises:
    • IOError - If there is problem parsing the output.

    Note: If the boundaries output can't be parsed, we return None.

    _buildOpenTrayArgs(device)
    Static Method

    source code 

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to open the tray and eject the media. No validation is done by this method as to whether this action actually makes sense.

    Parameters:
    • device - Filesystem device name for this writer, e.g. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildCloseTrayArgs(device)
    Static Method

    source code 

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to close the tray and reload the media. No validation is done by this method as to whether this action actually makes sense.

    Parameters:
    • device - Filesystem device name for this writer, e.g. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildPropertiesArgs(hardwareId)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to ask the device for a list of its capabilities via the -prcap switch.

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildBoundariesArgs(hardwareId)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to ask the device for the current multisession boundaries of the media using the -msinfo switch.

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildBlankArgs(hardwareId, driveSpeed=None)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to blank the media in the device identified by hardwareId. No validation is done by this method as to whether the action makes sense (i.e. to whether the media even can be blanked).

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    • driveSpeed - Speed at which the drive writes.
    Returns:
    List suitable for passing to util.executeCommand as args.

    _buildWriteArgs(hardwareId, imagePath, driveSpeed=None, writeMulti=True)
    Static Method

    source code 

    Builds a list of arguments to be passed to a cdrecord command.

    The arguments will cause the cdrecord command to write the indicated ISO image (imagePath) to the media in the device identified by hardwareId. The writeMulti argument controls whether to write a multisession disc. No validation is done by this method as to whether the action makes sense (i.e. to whether the device even can write multisession discs, for instance).

    Parameters:
    • hardwareId - Hardware id for the device (either SCSI id or device path)
    • imagePath - Path to an ISO image on disk.
    • driveSpeed - Speed at which the drive writes.
    • writeMulti - Indicates whether to write a multisession disc.
    Returns:
    List suitable for passing to util.executeCommand as args.

    initializeImage(self, newDisc, tmpdir, mediaLabel=None)

    source code 

    Initializes the writer's associated ISO image.

    This method initializes the image instance variable so that the caller can use the addImageEntry method. Once entries have been added, the writeImage method can be called with no arguments.
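    The call sequence implied above (initialize, add entries, then write with no arguments) can be demonstrated with a small stand-in class. This is an illustration of the documented ordering, not the real writer class:

```python
class FakeWriter(object):
    """Stand-in that records the documented call sequence; not the real writer."""
    def __init__(self):
        self.calls = []
    def initializeImage(self, newDisc, tmpdir, mediaLabel=None):
        self.calls.append("initializeImage")
    def addImageEntry(self, path, graftPoint):
        # Mirrors the documented rule: initializeImage must come first.
        if "initializeImage" not in self.calls:
            raise ValueError("initializeImage() was not previously called")
        self.calls.append("addImageEntry")
    def writeImage(self, imagePath=None):
        self.calls.append("writeImage")

writer = FakeWriter()
writer.initializeImage(newDisc=False, tmpdir="/tmp", mediaLabel="BACKUP")
writer.addImageEntry("/opt/stage/2005/02/10", graftPoint="2005/02/10")
writer.writeImage()  # no imagePath: the image built above is used
```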

    Parameters:
    • newDisc (Boolean true/false.) - Indicates whether the disc should be re-initialized
    • tmpdir (String representing a directory path on disk) - Temporary directory to use if needed
    • mediaLabel (String, no more than 25 characters long) - Media label to be applied to the image, if any

    addImageEntry(self, path, graftPoint)

    source code 

    Adds a filepath entry to the writer's associated ISO image.

    The contents of the filepath -- but not the path itself -- will be added to the image at the indicated graft point. If you don't want to use a graft point, just pass None.

    Parameters:
    • path (String representing a path on disk) - File or directory to be added to the image
    • graftPoint (String representing a graft point path, as described above) - Graft point to be used when adding this entry
    Raises:
    • ValueError - If initializeImage() was not previously called

    Note: Before calling this method, you must call initializeImage.

    setImageNewDisc(self, newDisc)

    source code 

    Resets (overrides) the newDisc flag on the internal image.

    Parameters:
    • newDisc - New disc flag to set
    Raises:
    • ValueError - If initializeImage() was not previously called

    getEstimatedImageSize(self)

    source code 

    Gets the estimated size of the image associated with the writer.

    Returns:
    Estimated size of the image, in bytes.
    Raises:
    • IOError - If there is a problem calling mkisofs.
    • ValueError - If initializeImage() was not previously called

    unlockTray(self)

    source code 

    Unlocks the device's tray.

    Raises:
    • IOError - If there is an error talking to the device.

    _createImage(self)

    source code 

    Creates an ISO image based on configuration in self._image.

    Returns:
    Path to the newly-created ISO image on disk.
    Raises:
    • IOError - If there is an error writing the image to disk.
    • ValueError - If there are no filesystem entries in the image
    • ValueError - If a path cannot be encoded properly.

    _writeImage(self, imagePath, writeMulti, newDisc)

    source code 

    Write an ISO image to disc using cdrecord. The disc is blanked first if newDisc is True.

    Parameters:
    • imagePath - Path to an ISO image on disk
    • writeMulti - Indicates whether a multisession disc should be written, if possible.
    • newDisc - Indicates whether the entire disc will be overwritten.

    _buildUnlockTrayArgs(device)
    Static Method

    source code 

    Builds a list of arguments to be passed to an eject command.

    The arguments will cause the eject command to unlock the tray.

    Parameters:
    • device - Filesystem device name for this writer, e.g. /dev/cdrw.
    Returns:
    List suitable for passing to util.executeCommand as args.

    Property Details

    device

    Filesystem device name for this writer.

    Get Method:
    _getDevice(self) - Property target used to get the device value.

    scsiId

    SCSI id for the device, in the form [<method>:]scsibus,target,lun.

    Get Method:
    _getScsiId(self) - Property target used to get the SCSI id value.

    hardwareId

    Hardware id for this writer, either SCSI id or device path.

    Get Method:
    _getHardwareId(self) - Property target used to get the hardware id value.

    driveSpeed

    Speed at which the drive writes.

    Get Method:
    _getDriveSpeed(self) - Property target used to get the drive speed.

    media

    Definition of media that is expected to be in the device.

    Get Method:
    _getMedia(self) - Property target used to get the media description.

    deviceType

    Type of the device, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceType(self) - Property target used to get the device type.

    deviceVendor

    Vendor of the device, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceVendor(self) - Property target used to get the device vendor.

    deviceId

    Device identification, as returned from cdrecord -prcap.

    Get Method:
    _getDeviceId(self) - Property target used to get the device id.

    deviceBufferSize

    Size of the device's write buffer, in bytes.

    Get Method:
    _getDeviceBufferSize(self) - Property target used to get the device buffer size.

    deviceSupportsMulti

    Indicates whether device supports multisession discs.

    Get Method:
    _getDeviceSupportsMulti(self) - Property target used to get the device-support-multi flag.

    deviceHasTray

    Indicates whether the device has a media tray.

    Get Method:
    _getDeviceHasTray(self) - Property target used to get the device-has-tray flag.

    deviceCanEject

    Indicates whether the device supports ejecting its media.

    Get Method:
    _getDeviceCanEject(self) - Property target used to get the device-can-eject flag.

    refreshMediaDelay

    Refresh media delay, in seconds.

    Get Method:
    _getRefreshMediaDelay(self) - Property target used to get the configured refresh media delay, in seconds.

    ejectDelay

    Eject delay, in seconds.

    Get Method:
    _getEjectDelay(self) - Property target used to get the configured eject delay, in seconds.

    Package CedarBackup2 :: Module config :: Class ActionDependencies

    Class ActionDependencies

    source code

    object --+
             |
            ActionDependencies
    

    Class representing dependencies associated with an extended action.

    Execution ordering for extended actions is done in one of two ways: either by using index values (lower index gets run first) or by having the extended action specify dependencies in terms of other named actions. This class encapsulates the dependency information for an extended action.

    The following restrictions exist on data in this class:

    • Any action name must be a non-empty string matching ACTION_NAME_REGEX
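    Dependency-based ordering of this sort can be realized with a topological sort over the before/after relationships. The sketch below is an illustration only, not Cedar Backup's actual scheduling code; it takes a mapping from action name to the set of actions it must run after:

```python
def order_actions(dependencies):
    """Topologically order actions given {name: set of names it must run after}.

    Illustration of dependency-driven execution ordering; ties are broken
    alphabetically so the result is deterministic.
    """
    ordered = []
    resolved = set()
    pending = dict(dependencies)
    while pending:
        # An action is ready once everything it must run after has run.
        ready = sorted(n for n, after in pending.items() if after <= resolved)
        if not ready:
            raise ValueError("Circular dependency among: %s" % sorted(pending))
        for name in ready:
            ordered.append(name)
            resolved.add(name)
            del pending[name]
    return ordered
```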
    Instance Methods
     
    __init__(self, beforeList=None, afterList=None)
    Constructor for the ActionDependencies class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setBeforeList(self, value)
    Property target used to set the "run before" list.
    source code
     
    _getBeforeList(self)
    Property target used to get the "run before" list.
    source code
     
    _setAfterList(self, value)
    Property target used to set the "run after" list.
    source code
     
    _getAfterList(self)
    Property target used to get the "run after" list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      beforeList
    List of named actions that this action must be run before.
      afterList
    List of named actions that this action must be run after.

    Inherited from object: __class__

    Method Details

    __init__(self, beforeList=None, afterList=None)
    (Constructor)

    source code 

    Constructor for the ActionDependencies class.

    Parameters:
    • beforeList - List of named actions that this action must be run before
    • afterList - List of named actions that this action must be run after
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setBeforeList(self, value)

    source code 

    Property target used to set the "run before" list. Either the value must be None or each element must be a string matching ACTION_NAME_REGEX.

    Raises:
    • ValueError - If the value does not match the regular expression.

    _setAfterList(self, value)

    source code 

    Property target used to set the "run after" list. Either the value must be None or each element must be a string matching ACTION_NAME_REGEX.

    Raises:
    • ValueError - If the value does not match the regular expression.

    Property Details

    beforeList

    List of named actions that this action must be run before.

    Get Method:
    _getBeforeList(self) - Property target used to get the "run before" list.
    Set Method:
    _setBeforeList(self, value) - Property target used to set the "run before" list.

    afterList

    List of named actions that this action must be run after.

    Get Method:
    _getAfterList(self) - Property target used to get the "run after" list.
    Set Method:
    _setAfterList(self, value) - Property target used to set the "run after" list.


    Module dvdwriter


    Classes

    DvdWriter
    MediaCapacity
    MediaDefinition

    Variables

    EJECT_COMMAND
    GROWISOFS_COMMAND
    MEDIA_DVDPLUSR
    MEDIA_DVDPLUSRW
    __package__
    logger

    Package CedarBackup2 :: Package extend :: Module amazons3

    Module amazons3

    source code

    Store-type extension that writes data to Amazon S3.

    This extension requires a new configuration section <amazons3> and is intended to be run immediately after the standard stage action, replacing the standard store action. Aside from its own configuration, it requires the options and staging configuration sections in the standard Cedar Backup configuration file. Since it is intended to replace the store action, it does not rely on any store configuration.

    The underlying functionality relies on the AWS CLI interface. Before you use this extension, you need to set up your Amazon S3 account and configure the AWS CLI connection per Amazon's documentation. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to communicate with AWS. So, make sure you configure AWS CLI as the backup user and not root.

    You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user.

    For instance, you can use something like this with GPG:

      /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
    

    The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:

      dd if=/dev/urandom count=20 bs=1 | xxd -ps
    

    (See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.
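    A sketch of how such a command template can be expanded; the helper name is an assumption, not the extension's real API, but the ${input}/${output} placeholder syntax matches Python's string.Template:

```python
from string import Template

def expand_encrypt_command(command, input_path, output_path):
    """Substitute ${input}/${output} in a configured encrypt command line.

    Illustrative helper; the extension's real substitution code may differ.
    """
    return Template(command).safe_substitute(input=input_path, output=output_path)

command = ("/usr/bin/gpg -c --no-use-agent --batch --yes "
           "--passphrase-file /home/backup/.passphrase -o ${output} ${input}")
expanded = expand_encrypt_command(command, "/opt/stage/file.tar", "/opt/stage/file.tar.gpg")
```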

    This extension was written for and tested on Linux. It will throw an exception if run on Windows.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      AmazonS3Config
    Class representing Amazon S3 configuration.
      LocalConfig
    Class representing this extension's configuration document.
    Functions
     
    executeAction(configPath, options, config)
    Executes the amazons3 backup action.
    source code
     
    _findCorrectDailyDir(options, config, local)
    Finds the correct daily staging directory to be written to Amazon S3.
    source code
     
    _applySizeLimits(options, config, local, stagingDirs)
    Apply size limits, throwing an exception if any limits are exceeded.
    source code
     
    _writeToAmazonS3(config, local, stagingDirs)
    Writes the indicated staging directories to an Amazon S3 bucket.
    source code
     
    _writeStoreIndicator(config, stagingDirs)
    Writes a store indicator file into staging directories.
    source code
     
    _clearExistingBackup(config, s3BucketUrl)
    Clear any existing backup files for an S3 bucket URL.
    source code
     
    _uploadStagingDir(config, stagingDir, s3BucketUrl)
    Upload the contents of a staging directory out to the Amazon S3 cloud.
    source code
     
    _verifyUpload(config, stagingDir, s3BucketUrl)
    Verify that a staging directory was properly uploaded to the Amazon S3 cloud.
    source code
     
    _encryptStagingDir(config, local, stagingDir, encryptedDir)
    Encrypt a staging directory, creating a new directory in the process.
    source code
    Variables
      logger = logging.getLogger("CedarBackup2.log.extend.amazons3")
      SU_COMMAND = ['su']
      AWS_COMMAND = ['aws']
      STORE_INDICATOR = 'cback.amazons3'
      __package__ = 'CedarBackup2.extend'
    Function Details

    executeAction(configPath, options, config)

    source code 

    Executes the amazons3 backup action.

    Parameters:
    • configPath (String representing a path on disk.) - Path to configuration file on disk.
    • options (Options object.) - Program command-line options.
    • config (Config object.) - Program configuration.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there are I/O problems reading or writing files

    _findCorrectDailyDir(options, config, local)

    source code 

    Finds the correct daily staging directory to be written to Amazon S3.

    This is substantially similar to the same function in store.py. The main difference is that it doesn't rely on store configuration at all.

    Parameters:
    • options - Options object.
    • config - Config object.
    • local - Local config object.
    Returns:
    Correct staging dir, as a dict mapping directory to date suffix.
    Raises:
    • IOError - If the staging directory cannot be found.

    _applySizeLimits(options, config, local, stagingDirs)

    source code 

    Apply size limits, throwing an exception if any limits are exceeded.

    Size limits are optional. If a limit is set to None, it does not apply. The full size limit applies if the full option is set or if today is the start of the week. The incremental size limit applies otherwise. Limits are applied to the total size of all the relevant staging directories.
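    The rule above can be sketched as follows; the helper names are assumptions for illustration, not the extension's real API:

```python
def limit_to_apply(full_option, start_of_week, full_limit, incremental_limit):
    """Pick which size limit governs this run, per the documented rule."""
    if full_option or start_of_week:
        return full_limit       # full limit on full backups or at start of week
    return incremental_limit    # otherwise the incremental limit applies

def check_size(total_bytes, limit):
    """A limit of None never applies; exceeding a set limit is an error."""
    if limit is not None and total_bytes > limit:
        raise ValueError("Size %d exceeds limit %d" % (total_bytes, limit))
```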

    Parameters:
    • options - Options object.
    • config - Config object.
    • local - Local config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • ValueError - If a size limit has been exceeded

    _writeToAmazonS3(config, local, stagingDirs)

    source code 

    Writes the indicated staging directories to an Amazon S3 bucket.

    Each of the staging directories listed in stagingDirs will be written to the configured Amazon S3 bucket from local configuration. The directories will be placed into the image at the root by date, so staging directory /opt/stage/2005/02/10 will be placed into the S3 bucket at /2005/02/10. If an encrypt command is provided, the files will be encrypted first.
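    The date-based mapping described above amounts to joining the date suffix onto the bucket URL. This is an illustration; the extension's actual URL construction may differ:

```python
import posixpath

def s3_url_for(bucket_url, date_suffix):
    """Map a staging directory's date suffix into the S3 bucket (illustrative)."""
    return posixpath.join(bucket_url, date_suffix)

# Staging directory /opt/stage/2005/02/10 carries suffix "2005/02/10",
# so it lands at the bucket root under the same date-based path.
url = s3_url_for("s3://my-bucket", "2005/02/10")
```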

    Parameters:
    • config - Config object.
    • local - Local config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.
    Raises:
    • ValueError - Under many generic error conditions
    • IOError - If there is a problem writing to Amazon S3

    _writeStoreIndicator(config, stagingDirs)

    source code 

    Writes a store indicator file into staging directories.

    Parameters:
    • config - Config object.
    • stagingDirs - Dictionary mapping directory path to date suffix.

    _clearExistingBackup(config, s3BucketUrl)

    source code 

    Clear any existing backup files for an S3 bucket URL.

    Parameters:
    • config - Config object.
    • s3BucketUrl - S3 bucket URL associated with the staging directory

    _uploadStagingDir(config, stagingDir, s3BucketUrl)

    source code 

    Upload the contents of a staging directory out to the Amazon S3 cloud.

    Parameters:
    • config - Config object.
    • stagingDir - Staging directory to upload
    • s3BucketUrl - S3 bucket URL associated with the staging directory

    _verifyUpload(config, stagingDir, s3BucketUrl)

    source code 

    Verify that a staging directory was properly uploaded to the Amazon S3 cloud.

    Parameters:
    • config - Config object.
    • stagingDir - Staging directory to verify
    • s3BucketUrl - S3 bucket URL associated with the staging directory

    _encryptStagingDir(config, local, stagingDir, encryptedDir)

    source code 

    Encrypt a staging directory, creating a new directory in the process.

    Parameters:
    • config - Config object.
    • local - Local config object.
    • stagingDir - Staging directory to use as source
    • encryptedDir - Target directory into which encrypted files should be written

    Package CedarBackup2 :: Package extend :: Module postgresql :: Class PostgresqlConfig

    Class PostgresqlConfig

    source code

    object --+
             |
            PostgresqlConfig
    

    Class representing PostgreSQL configuration.

    The PostgreSQL configuration information is used for backing up PostgreSQL databases.

    The following restrictions exist on data in this class:

    • The compress mode must be one of the values in VALID_COMPRESS_MODES.
    • The 'all' flag must be 'Y' if no databases are defined.
    • The 'all' flag must be 'N' if any databases are defined.
    • Any values in the databases list must be strings.
    Instance Methods
     
    __init__(self, user=None, compressMode=None, all=None, databases=None)
    Constructor for the PostgresqlConfig class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setUser(self, value)
    Property target used to set the user value.
    source code
     
    _getUser(self)
    Property target used to get the user value.
    source code
     
    _setCompressMode(self, value)
    Property target used to set the compress mode.
    source code
     
    _getCompressMode(self)
    Property target used to get the compress mode.
    source code
     
    _setAll(self, value)
    Property target used to set the 'all' flag.
    source code
     
    _getAll(self)
    Property target used to get the 'all' flag.
    source code
     
    _setDatabases(self, value)
    Property target used to set the databases list.
    source code
     
    _getDatabases(self)
    Property target used to get the databases list.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      user
    User to execute backup as.
      all
    Indicates whether to back up all databases.
      databases
    List of databases to back up.
      compressMode
    Compress mode to be used for backed-up files.

    Inherited from object: __class__

    Method Details

    __init__(self, user=None, compressMode=None, all=None, databases=None)
    (Constructor)

    source code 

    Constructor for the PostgresqlConfig class.

    Parameters:
    • user - User to execute backup as.
    • compressMode - Compress mode for backed-up files.
    • all - Indicates whether to back up all databases.
    • databases - List of databases to back up.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setCompressMode(self, value)

    source code 

    Property target used to set the compress mode. If not None, the mode must be one of the values in VALID_COMPRESS_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setAll(self, value)

    source code 

    Property target used to set the 'all' flag. No validations, but we normalize the value to True or False.

    _setDatabases(self, value)

    source code 

    Property target used to set the databases list. Either the value must be None or each element must be a string.

    Raises:
    • ValueError - If the value is not a string.

    Property Details

    user

    User to execute backup as.

    Get Method:
    _getUser(self) - Property target used to get the user value.
    Set Method:
    _setUser(self, value) - Property target used to set the user value.

    all

    Indicates whether to back up all databases.

    Get Method:
    _getAll(self) - Property target used to get the 'all' flag.
    Set Method:
    _setAll(self, value) - Property target used to set the 'all' flag.

    databases

    List of databases to back up.

    Get Method:
    _getDatabases(self) - Property target used to get the databases list.
    Set Method:
    _setDatabases(self, value) - Property target used to set the databases list.

    compressMode

    Compress mode to be used for backed-up files.

    Get Method:
    _getCompressMode(self) - Property target used to get the compress mode.
    Set Method:
    _setCompressMode(self, value) - Property target used to set the compress mode.

    Package CedarBackup2 :: Module util :: Class PathResolverSingleton

    Class PathResolverSingleton

    source code

    object --+
             |
            PathResolverSingleton
    

    Singleton used for resolving executable paths.

    Various functions throughout Cedar Backup (including extensions) need a way to resolve the path of executables that they use. For instance, the image functionality needs to find the mkisofs executable, and the Subversion extension needs to find the svnlook executable. Cedar Backup's original behavior was to assume that the simple name ("svnlook" or whatever) was available on the caller's $PATH, and to fail otherwise. However, this turns out to be less than ideal, since for instance the root user might not always have executables like svnlook in its path.

    One solution is to specify a path (either via an absolute path or some sort of path insertion or path appending mechanism) that would apply to the executeCommand() function. This is not difficult to implement, but it seems like kind of a "big hammer" solution. Besides that, it might also represent a security flaw (for instance, I prefer not to mess with root's $PATH on the application level if I don't have to).

    The alternative is to set up some sort of configuration for the path to certain executables, e.g. "find svnlook in /usr/local/bin/svnlook" or whatever. This PathResolverSingleton aims to provide a good solution to the mapping problem. Callers of all sorts (extensions or not) can get an instance of the singleton. Then, they call the lookup method to try to resolve the executable they are looking for. Through the lookup method, the caller can also specify a default to use if a mapping is not found. This way, with no real effort on the part of the caller, behavior can neatly degrade to something equivalent to the current behavior if there is no special mapping or if the singleton was never initialized in the first place.

    Even better, extensions automagically get access to the same resolver functionality, and they don't even need to understand how the mapping happens. All extension authors need to do is document what executables their code requires, and the standard resolver configuration section will meet their needs.

    The class should be initialized once through the constructor somewhere in the main routine. Then, the main routine should call the fill method to fill in the resolver's internal structures. Everyone else who needs to resolve a path will get an instance of the class using getInstance and will then just call the lookup method.
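    The fill-then-lookup flow described above can be sketched as follows. This is an illustrative sketch only, not the real implementation: the actual class obtains its singleton through the nested _Helper factory class, the classmethod used here is a stand-in for that mechanism, and the example paths are hypothetical.

```python
class PathResolver(object):
    """Minimal sketch of the documented fill()/lookup() flow.

    Illustrative stand-in for PathResolverSingleton, which uses a
    _Helper factory for getInstance rather than a classmethod.
    """

    _instance = None  # holds a reference to the singleton

    def __init__(self):
        self._mapping = {}  # internal mapping from resource name to path

    @classmethod
    def getInstance(cls):
        # Create the singleton on first access, then always return it
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def fill(self, mapping):
        # Fills in the singleton's internal mapping from name to path
        self._mapping = dict(mapping)

    def lookup(self, name, default=None):
        # Returns the resolved path, or the default if name is unmapped
        return self._mapping.get(name, default)


# The main routine fills in the mapping once...
PathResolver.getInstance().fill({"svnlook": "/usr/local/bin/svnlook"})

# ...and any caller (extension or not) resolves paths later.
resolver = PathResolver.getInstance()
svnlook = resolver.lookup("svnlook", "svnlook")  # resolved via the mapping
mkisofs = resolver.lookup("mkisofs", "mkisofs")  # falls back to the default
```

    Note how the second lookup degrades gracefully: with no mapping configured, the caller simply gets back the bare command name, matching Cedar Backup's original $PATH-based behavior.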

    Nested Classes
      _Helper
    Helper class to provide a singleton factory method.
    Instance Methods
     
    __init__(self)
    Singleton constructor, which just creates the singleton instance.
    source code
     
    lookup(self, name, default=None)
    Looks up name and returns the resolved path associated with the name.
    source code
     
    fill(self, mapping)
    Fills in the singleton's internal mapping from name to resource.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables
      _instance = None
    Holds a reference to the singleton
      getInstance = _Helper()
    Instance Variables
      _mapping
    Internal mapping from resource name to path.
    Properties

    Inherited from object: __class__

    Method Details

    __init__(self)
    (Constructor)

    source code 

    Singleton constructor, which just creates the singleton instance.

    Overrides: object.__init__

    lookup(self, name, default=None)

    source code 

    Looks up name and returns the resolved path associated with the name.

    Parameters:
    • name - Name of the path resource to resolve.
    • default - Default to return if resource cannot be resolved.
    Returns:
    Resolved path associated with name, or default if name can't be resolved.

    fill(self, mapping)

    source code 

    Fills in the singleton's internal mapping from name to resource.

    Parameters:
    • mapping (Dictionary mapping name to path, both as strings.) - Mapping from resource name to path.

    Package CedarBackup2 :: Module xmlutil :: Class Serializer

    Class Serializer

    source code

    object --+
             |
            Serializer
    

    XML serializer class.

    This is a customized serializer that I hacked together based on what I found in the PyXML distribution. Basically, around release 2.7.0, the only reason I still had a dependency on PyXML was for the PrettyPrint functionality, and that seemed pointless. So, I stripped the PrettyPrint code out of PyXML and hacked bits of it off until it did just what I needed and no more.

    This code started out being called PrintVisitor, but I decided it makes more sense just calling it a serializer. I've made nearly all of the methods private, and I've added a new high-level serialize() method rather than having clients call visit().

    Anyway, as a consequence of my hacking with it, this can't quite be called a complete XML serializer any more. I ripped out support for HTML and XHTML, and there is also no longer any support for namespaces (which I took out because this dragged along a lot of extra code, and Cedar Backup doesn't use namespaces). However, everything else should pretty much work as expected.


    Copyright: This code, prior to customization, was part of the PyXML codebase, and before that was part of the 4DOM suite developed by Fourthought, Inc. In its original form, it was Copyright (c) 2000 Fourthought Inc, USA; All Rights Reserved.
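    The end result of serialize() can be approximated with the standard library's pretty-printer. This sketch is an analogy only: it uses minidom's toprettyxml() rather than the Serializer class itself, and the 3-space indent mirrors the constructor's default. The sample document is hypothetical.

```python
import xml.dom.minidom

# Parse a small document and pretty-print it with a 3-space indent,
# roughly the output Serializer(stream, encoding, indent=3).serialize()
# would write to its stream.
dom = xml.dom.minidom.parseString(
    "<cb_config><postgresql><all>Y</all></postgresql></cb_config>"
)
text = dom.documentElement.toprettyxml(indent=" " * 3)
print(text)
```

    Exact whitespace around text nodes varies between Python versions, but the nesting structure and indent width are stable.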

    Instance Methods
     
    __init__(self, stream=sys.stdout, encoding='UTF-8', indent=3)
    Initialize a serializer.
    source code
     
    serialize(self, xmlDom)
    Serialize the passed-in XML document.
    source code
     
    _write(self, text) source code
     
    _tryIndent(self) source code
     
    _visit(self, node) source code
     
    _visitNodeList(self, node, exclude=None) source code
     
    _visitNamedNodeMap(self, node) source code
     
    _visitAttr(self, node) source code
     
    _visitProlog(self) source code
     
    _visitDocument(self, node) source code
     
    _visitDocumentFragment(self, node) source code
     
    _visitElement(self, node) source code
     
    _visitText(self, node) source code
     
    _visitDocumentType(self, doctype) source code
     
    _visitEntity(self, node)
    Visited from a NamedNodeMap in DocumentType
    source code
     
    _visitNotation(self, node)
    Visited from a NamedNodeMap in DocumentType
    source code
     
    _visitCDATASection(self, node) source code
     
    _visitComment(self, node) source code
     
    _visitEntityReference(self, node) source code
     
    _visitProcessingInstruction(self, node) source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Properties

    Inherited from object: __class__

    Method Details

    __init__(self, stream=sys.stdout, encoding='UTF-8', indent=3)
    (Constructor)

    source code 

    Initialize a serializer.

    Parameters:
    • stream - Stream to write output to.
    • encoding - Output encoding.
    • indent - Number of spaces to indent, as an integer
    Overrides: object.__init__

    serialize(self, xmlDom)

    source code 

    Serialize the passed-in XML document.

    Parameters:
    • xmlDom - XML DOM tree to serialize
    Raises:
    • ValueError - If there's an unknown node type in the document.

    _visit(self, node)

    source code 
    Raises:
    • ValueError - If there's an unknown node type in the document.


    Module rebuild


    Functions

    executeRebuild

    Variables

    __package__
    logger


    Module tools


    Variables



    Module initialize


    Functions

    executeInitialize

    Variables

    __package__
    logger

    Package CedarBackup2 :: Package extend :: Module postgresql

    Source Code for Module CedarBackup2.extend.postgresql

      1  # -*- coding: iso-8859-1 -*- 
      2  # vim: set ft=python ts=3 sw=3 expandtab: 
      3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
      4  # 
      5  #              C E D A R 
      6  #          S O L U T I O N S       "Software done right." 
      7  #           S O F T W A R E 
      8  # 
      9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     10  # 
     11  # Copyright (c) 2006,2010 Kenneth J. Pronovici. 
     12  # Copyright (c) 2006 Antoine Beaupre. 
     13  # All rights reserved. 
     14  # 
     15  # This program is free software; you can redistribute it and/or 
     16  # modify it under the terms of the GNU General Public License, 
     17  # Version 2, as published by the Free Software Foundation. 
     18  # 
     19  # This program is distributed in the hope that it will be useful, 
     20  # but WITHOUT ANY WARRANTY; without even the implied warranty of 
     21  # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
     22  # 
     23  # Copies of the GNU General Public License are available from 
     24  # the Free Software Foundation website, http://www.gnu.org/. 
     25  # 
     26  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     27  # 
     28  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
     29  #            Antoine Beaupre <anarcat@koumbit.org> 
     30  # Language : Python 2 (>= 2.7) 
     31  # Project  : Official Cedar Backup Extensions 
     32  # Purpose  : Provides an extension to back up PostgreSQL databases. 
     33  # 
     34  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     35  # This file was created with a width of 132 characters, and NO tabs. 
     36   
     37  ######################################################################## 
     38  # Module documentation 
     39  ######################################################################## 
     40   
     41  """ 
     42  Provides an extension to back up PostgreSQL databases. 
     43   
     44  This is a Cedar Backup extension used to back up PostgreSQL databases via the 
     45  Cedar Backup command line.  It requires a new configuration section 
     46  <postgresql> and is intended to be run either immediately before or immediately 
     47  after the standard collect action.  Aside from its own configuration, it 
     48  requires the options and collect configuration sections in the standard Cedar 
     49  Backup configuration file. 
     50   
     51  The backup is done via the C{pg_dump} or C{pg_dumpall} commands included with 
     52  the PostgreSQL product.  Output can be compressed using C{gzip} or C{bzip2}. 
     53  Administrators can configure the extension either to back up all databases or 
     54  to back up only specific databases.  The extension assumes that the current 
     55  user has passwordless access to the database since there is no easy way to pass 
     56  a password to the C{pg_dump} client. This can be accomplished using appropriate 
     57  voodoo in the C{pg_hba.conf} file. 
     58   
     59  Note that this code always produces a full backup.  There is currently no 
     60  facility for making incremental backups. 
     61   
     62  You should always make C{/etc/cback.conf} unreadable to non-root users once you 
     63  place postgresql configuration into it, since postgresql configuration will 
     64  contain information about available PostgreSQL databases and usernames. 
     65   
     66  Use of this extension I{may} expose usernames in the process listing (via 
     67  C{ps}) when the backup is running if the username is specified in the 
     68  configuration. 
     69   
     70  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
     71  @author: Antoine Beaupre <anarcat@koumbit.org> 
     72  """ 
     73   
     74  ######################################################################## 
     75  # Imported modules 
     76  ######################################################################## 
     77   
     78  # System modules 
     79  import os 
     80  import logging 
     81  from gzip import GzipFile 
     82  from bz2 import BZ2File 
     83   
     84  # Cedar Backup modules 
     85  from CedarBackup2.xmlutil import createInputDom, addContainerNode, addStringNode, addBooleanNode 
     86  from CedarBackup2.xmlutil import readFirstChild, readString, readStringList, readBoolean 
     87  from CedarBackup2.config import VALID_COMPRESS_MODES 
     88  from CedarBackup2.util import resolveCommand, executeCommand 
     89  from CedarBackup2.util import ObjectTypeList, changeOwnership 
     90   
     91   
     92  ######################################################################## 
     93  # Module-wide constants and variables 
     94  ######################################################################## 
     95   
     96  logger = logging.getLogger("CedarBackup2.log.extend.postgresql") 
     97  POSTGRESQLDUMP_COMMAND = [ "pg_dump", ] 
     98  POSTGRESQLDUMPALL_COMMAND = [ "pg_dumpall", ] 
    
    99 100 101 ######################################################################## 102 # PostgresqlConfig class definition 103 ######################################################################## 104 105 -class PostgresqlConfig(object):
    106 107 """ 108 Class representing PostgreSQL configuration. 109 110 The PostgreSQL configuration information is used for backing up PostgreSQL databases. 111 112 The following restrictions exist on data in this class: 113 114 - The compress mode must be one of the values in L{VALID_COMPRESS_MODES}. 115 - The 'all' flag must be 'Y' if no databases are defined. 116 - The 'all' flag must be 'N' if any databases are defined. 117 - Any values in the databases list must be strings. 118 119 @sort: __init__, __repr__, __str__, __cmp__, user, all, databases 120 """ 121
    122 - def __init__(self, user=None, compressMode=None, all=None, databases=None): # pylint: disable=W0622
    123 """ 124 Constructor for the C{PostgresqlConfig} class. 125 126 @param user: User to execute backup as. 127 @param compressMode: Compress mode for backed-up files. 128 @param all: Indicates whether to back up all databases. 129 @param databases: List of databases to back up. 130 """ 131 self._user = None 132 self._compressMode = None 133 self._all = None 134 self._databases = None 135 self.user = user 136 self.compressMode = compressMode 137 self.all = all 138 self.databases = databases
    139
    140 - def __repr__(self):
    141 """ 142 Official string representation for class instance. 143 """ 144 return "PostgresqlConfig(%s, %s, %s)" % (self.user, self.all, self.databases)
    145
    146 - def __str__(self):
    147 """ 148 Informal string representation for class instance. 149 """ 150 return self.__repr__()
    151
    152 - def __cmp__(self, other):
    153 """ 154 Definition of equals operator for this class. 155 @param other: Other object to compare to. 156 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 157 """ 158 if other is None: 159 return 1 160 if self.user != other.user: 161 if self.user < other.user: 162 return -1 163 else: 164 return 1 165 if self.compressMode != other.compressMode: 166 if self.compressMode < other.compressMode: 167 return -1 168 else: 169 return 1 170 if self.all != other.all: 171 if self.all < other.all: 172 return -1 173 else: 174 return 1 175 if self.databases != other.databases: 176 if self.databases < other.databases: 177 return -1 178 else: 179 return 1 180 return 0
    181
    182 - def _setUser(self, value):
    183 """ 184 Property target used to set the user value. 185 """ 186 if value is not None: 187 if len(value) < 1: 188 raise ValueError("User must be non-empty string.") 189 self._user = value
    190
    191 - def _getUser(self):
    192 """ 193 Property target used to get the user value. 194 """ 195 return self._user
    196
    197 - def _setCompressMode(self, value):
    198 """ 199 Property target used to set the compress mode. 200 If not C{None}, the mode must be one of the values in L{VALID_COMPRESS_MODES}. 201 @raise ValueError: If the value is not valid. 202 """ 203 if value is not None: 204 if value not in VALID_COMPRESS_MODES: 205 raise ValueError("Compress mode must be one of %s." % VALID_COMPRESS_MODES) 206 self._compressMode = value
    207
    208 - def _getCompressMode(self):
    209 """ 210 Property target used to get the compress mode. 211 """ 212 return self._compressMode
    213
    214 - def _setAll(self, value):
    215 """ 216 Property target used to set the 'all' flag. 217 No validations, but we normalize the value to C{True} or C{False}. 218 """ 219 if value: 220 self._all = True 221 else: 222 self._all = False
    223
    224 - def _getAll(self):
    225 """ 226 Property target used to get the 'all' flag. 227 """ 228 return self._all
    229
    230 - def _setDatabases(self, value):
    231 """ 232 Property target used to set the databases list. 233 Either the value must be C{None} or each element must be a string. 234 @raise ValueError: If the value is not a string. 235 """ 236 if value is None: 237 self._databases = None 238 else: 239 for database in value: 240 if len(database) < 1: 241 raise ValueError("Each database must be a non-empty string.") 242 try: 243 saved = self._databases 244 self._databases = ObjectTypeList(basestring, "string") 245 self._databases.extend(value) 246 except Exception, e: 247 self._databases = saved 248 raise e
    249
    250 - def _getDatabases(self):
    251 """ 252 Property target used to get the databases list. 253 """ 254 return self._databases
    255 256 user = property(_getUser, _setUser, None, "User to execute backup as.") 257 compressMode = property(_getCompressMode, _setCompressMode, None, "Compress mode to be used for backed-up files.") 258 all = property(_getAll, _setAll, None, "Indicates whether to back up all databases.") 259 databases = property(_getDatabases, _setDatabases, None, "List of databases to back up.") 260
    261 262 ######################################################################## 263 # LocalConfig class definition 264 ######################################################################## 265 266 -class LocalConfig(object):
    267 268 """ 269 Class representing this extension's configuration document. 270 271 This is not a general-purpose configuration object like the main Cedar 272 Backup configuration object. Instead, it just knows how to parse and emit 273 PostgreSQL-specific configuration values. Third parties who need to read and 274 write configuration related to this extension should access it through the 275 constructor, C{validate} and C{addConfig} methods. 276 277 @note: Lists within this class are "unordered" for equality comparisons. 278 279 @sort: __init__, __repr__, __str__, __cmp__, postgresql, validate, addConfig 280 """ 281
    282 - def __init__(self, xmlData=None, xmlPath=None, validate=True):
    283 """ 284 Initializes a configuration object. 285 286 If you initialize the object without passing either C{xmlData} or 287 C{xmlPath} then configuration will be empty and will be invalid until it 288 is filled in properly. 289 290 No reference to the original XML data or original path is saved off by 291 this class. Once the data has been parsed (successfully or not) this 292 original information is discarded. 293 294 Unless the C{validate} argument is C{False}, the L{LocalConfig.validate} 295 method will be called (with its default arguments) against configuration 296 after successfully parsing any passed-in XML. Keep in mind that even if 297 C{validate} is C{False}, it might not be possible to parse the passed-in 298 XML document if lower-level validations fail. 299 300 @note: It is strongly suggested that the C{validate} option always be set 301 to C{True} (the default) unless there is a specific need to read in 302 invalid configuration from disk. 303 304 @param xmlData: XML data representing configuration. 305 @type xmlData: String data. 306 307 @param xmlPath: Path to an XML file on disk. 308 @type xmlPath: Absolute path to a file on disk. 309 310 @param validate: Validate the document after parsing it. 311 @type validate: Boolean true/false. 312 313 @raise ValueError: If both C{xmlData} and C{xmlPath} are passed-in. 314 @raise ValueError: If the XML data in C{xmlData} or C{xmlPath} cannot be parsed. 315 @raise ValueError: If the parsed configuration document is not valid. 316 """ 317 self._postgresql = None 318 self.postgresql = None 319 if xmlData is not None and xmlPath is not None: 320 raise ValueError("Use either xmlData or xmlPath, but not both.") 321 if xmlData is not None: 322 self._parseXmlData(xmlData) 323 if validate: 324 self.validate() 325 elif xmlPath is not None: 326 xmlData = open(xmlPath).read() 327 self._parseXmlData(xmlData) 328 if validate: 329 self.validate()
    330
    331 - def __repr__(self):
    332 """ 333 Official string representation for class instance. 334 """ 335 return "LocalConfig(%s)" % (self.postgresql)
    336
    337 - def __str__(self):
    338 """ 339 Informal string representation for class instance. 340 """ 341 return self.__repr__()
    342
    343 - def __cmp__(self, other):
    344 """ 345 Definition of equals operator for this class. 346 Lists within this class are "unordered" for equality comparisons. 347 @param other: Other object to compare to. 348 @return: -1/0/1 depending on whether self is C{<}, C{=} or C{>} other. 349 """ 350 if other is None: 351 return 1 352 if self.postgresql != other.postgresql: 353 if self.postgresql < other.postgresql: 354 return -1 355 else: 356 return 1 357 return 0
    358
    359 - def _setPostgresql(self, value):
    360 """ 361 Property target used to set the postgresql configuration value. 362 If not C{None}, the value must be a C{PostgresqlConfig} object. 363 @raise ValueError: If the value is not a C{PostgresqlConfig} 364 """ 365 if value is None: 366 self._postgresql = None 367 else: 368 if not isinstance(value, PostgresqlConfig): 369 raise ValueError("Value must be a C{PostgresqlConfig} object.") 370 self._postgresql = value
    371
    372 - def _getPostgresql(self):
    373 """ 374 Property target used to get the postgresql configuration value. 375 """ 376 return self._postgresql
    377 378 postgresql = property(_getPostgresql, _setPostgresql, None, "Postgresql configuration in terms of a C{PostgresqlConfig} object.") 379
    380 - def validate(self):
    381 """ 382 Validates configuration represented by the object. 383 384 The compress mode must be filled in. Then, if the 'all' flag 385 I{is} set, no databases are allowed, and if the 'all' flag is 386 I{not} set, at least one database is required. 387 388 @raise ValueError: If one of the validations fails. 389 """ 390 if self.postgresql is None: 391 raise ValueError("PostgreSQL section is required.") 392 if self.postgresql.compressMode is None: 393 raise ValueError("Compress mode value is required.") 394 if self.postgresql.all: 395 if self.postgresql.databases is not None and self.postgresql.databases != []: 396 raise ValueError("Databases cannot be specified if 'all' flag is set.") 397 else: 398 if self.postgresql.databases is None or len(self.postgresql.databases) < 1: 399 raise ValueError("At least one PostgreSQL database must be indicated if 'all' flag is not set.")
    400
    401 - def addConfig(self, xmlDom, parentNode):
    402 """ 403 Adds a <postgresql> configuration section as the next child of a parent. 404 405 Third parties should use this function to write configuration related to 406 this extension. 407 408 We add the following fields to the document:: 409 410 user //cb_config/postgresql/user 411 compressMode //cb_config/postgresql/compress_mode 412 all //cb_config/postgresql/all 413 414 We also add groups of the following items, one list element per 415 item:: 416 417 database //cb_config/postgresql/database 418 419 @param xmlDom: DOM tree as from C{impl.createDocument()}. 420 @param parentNode: Parent that the section should be appended to. 421 """ 422 if self.postgresql is not None: 423 sectionNode = addContainerNode(xmlDom, parentNode, "postgresql") 424 addStringNode(xmlDom, sectionNode, "user", self.postgresql.user) 425 addStringNode(xmlDom, sectionNode, "compress_mode", self.postgresql.compressMode) 426 addBooleanNode(xmlDom, sectionNode, "all", self.postgresql.all) 427 if self.postgresql.databases is not None: 428 for database in self.postgresql.databases: 429 addStringNode(xmlDom, sectionNode, "database", database)
    430
    431 - def _parseXmlData(self, xmlData):
    432 """ 433 Internal method to parse an XML string into the object. 434 435 This method parses the XML document into a DOM tree (C{xmlDom}) and then 436 calls a static method to parse the postgresql configuration section. 437 438 @param xmlData: XML data to be parsed 439 @type xmlData: String data 440 441 @raise ValueError: If the XML cannot be successfully parsed. 442 """ 443 (xmlDom, parentNode) = createInputDom(xmlData) 444 self._postgresql = LocalConfig._parsePostgresql(parentNode)
    445 446 @staticmethod
    447 - def _parsePostgresql(parent):
    448 """ 449 Parses a postgresql configuration section. 450 451 We read the following fields:: 452 453 user //cb_config/postgresql/user 454 compressMode //cb_config/postgresql/compress_mode 455 all //cb_config/postgresql/all 456 457 We also read groups of the following item, one list element per 458 item:: 459 460 databases //cb_config/postgresql/database 461 462 @param parent: Parent node to search beneath. 463 464 @return: C{PostgresqlConfig} object or C{None} if the section does not exist. 465 @raise ValueError: If some filled-in value is invalid. 466 """ 467 postgresql = None 468 section = readFirstChild(parent, "postgresql") 469 if section is not None: 470 postgresql = PostgresqlConfig() 471 postgresql.user = readString(section, "user") 472 postgresql.compressMode = readString(section, "compress_mode") 473 postgresql.all = readBoolean(section, "all") 474 postgresql.databases = readStringList(section, "database") 475 return postgresql
    476
    477 478 ######################################################################## 479 # Public functions 480 ######################################################################## 481 482 ########################### 483 # executeAction() function 484 ########################### 485 486 -def executeAction(configPath, options, config):
    487 """ 488 Executes the PostgreSQL backup action. 489 490 @param configPath: Path to configuration file on disk. 491 @type configPath: String representing a path on disk. 492 493 @param options: Program command-line options. 494 @type options: Options object. 495 496 @param config: Program configuration. 497 @type config: Config object. 498 499 @raise ValueError: Under many generic error conditions 500 @raise IOError: If a backup could not be written for some reason. 501 """ 502 logger.debug("Executing PostgreSQL extended action.") 503 if config.options is None or config.collect is None: 504 raise ValueError("Cedar Backup configuration is not properly filled in.") 505 local = LocalConfig(xmlPath=configPath) 506 if local.postgresql.all: 507 logger.info("Backing up all databases.") 508 _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, 509 config.options.backupUser, config.options.backupGroup, None) 510 if local.postgresql.databases is not None and local.postgresql.databases != []: 511 logger.debug("Backing up %d individual databases.", len(local.postgresql.databases)) 512 for database in local.postgresql.databases: 513 logger.info("Backing up database [%s].", database) 514 _backupDatabase(config.collect.targetDir, local.postgresql.compressMode, local.postgresql.user, 515 config.options.backupUser, config.options.backupGroup, database) 516 logger.info("Executed the PostgreSQL extended action successfully.")
    517
    518 -def _backupDatabase(targetDir, compressMode, user, backupUser, backupGroup, database=None):
    519 """ 520 Backs up an individual PostgreSQL database, or all databases. 521 522 This internal method wraps the public method and adds some functionality, 523 like figuring out a filename, etc. 524 525 @param targetDir: Directory into which backups should be written. 526 @param compressMode: Compress mode to be used for backed-up files. 527 @param user: User to use for connecting to the database. 528 @param backupUser: User to own resulting file. 529 @param backupGroup: Group to own resulting file. 530 @param database: Name of database, or C{None} for all databases. 531 532 @return: Name of the generated backup file. 533 534 @raise ValueError: If some value is missing or invalid. 535 @raise IOError: If there is a problem executing the PostgreSQL dump. 536 """ 537 (outputFile, filename) = _getOutputFile(targetDir, database, compressMode) 538 try: 539 backupDatabase(user, outputFile, database) 540 finally: 541 outputFile.close() 542 if not os.path.exists(filename): 543 raise IOError("Dump file [%s] does not seem to exist after backup completed." % filename) 544 changeOwnership(filename, backupUser, backupGroup)
    545
    546 # pylint: disable=R0204 547 -def _getOutputFile(targetDir, database, compressMode):
    548 """ 549 Opens the output file used for saving the PostgreSQL dump. 550 551 The filename is either C{"postgresqldump.txt"} or 552 C{"postgresqldump-<database>.txt"}. The C{".gz"} or C{".bz2"} extension is 553 added if C{compress} is C{True}. 554 555 @param targetDir: Target directory to write file in. 556 @param database: Name of the database (if any) 557 @param compressMode: Compress mode to be used for backed-up files. 558 559 @return: Tuple of (Output file object, filename) 560 """ 561 if database is None: 562 filename = os.path.join(targetDir, "postgresqldump.txt") 563 else: 564 filename = os.path.join(targetDir, "postgresqldump-%s.txt" % database) 565 if compressMode == "gzip": 566 filename = "%s.gz" % filename 567 outputFile = GzipFile(filename, "w") 568 elif compressMode == "bzip2": 569 filename = "%s.bz2" % filename 570 outputFile = BZ2File(filename, "w") 571 else: 572 outputFile = open(filename, "w") 573 logger.debug("PostgreSQL dump file will be [%s].", filename) 574 return (outputFile, filename)
    575
############################
# backupDatabase() function
############################

def backupDatabase(user, backupFile, database=None):
   """
   Backs up an individual PostgreSQL database, or all databases.

   This function backs up either a named local PostgreSQL database or all
   local PostgreSQL databases, using the passed-in user for connectivity.
   This is I{always} a full backup.  There is no facility for incremental
   backups.

   The backup data will be written into the passed-in backup file.  Normally,
   this would be an object as returned from C{open()}, but it is possible to
   use something like a C{GzipFile} to write compressed output.  The caller is
   responsible for closing the passed-in backup file.

   @note: Typically, you would use the C{root} user to back up all databases.

   @param user: User to use for connecting to the database.
   @type user: String representing PostgreSQL username.

   @param backupFile: File to use for writing backup.
   @type backupFile: Python file object as from C{open()} or C{file()}.

   @param database: Name of the database to be backed up.
   @type database: String representing database name, or C{None} for all databases.

   @raise ValueError: If some value is missing or invalid.
   @raise IOError: If there is a problem executing the PostgreSQL dump.
   """
   args = []
   if user is not None:
      args.append('-U')
      args.append(user)

   if database is None:
      command = resolveCommand(POSTGRESQLDUMPALL_COMMAND)
   else:
      command = resolveCommand(POSTGRESQLDUMP_COMMAND)
      args.append(database)

   result = executeCommand(command, args, returnOutput=False, ignoreStderr=True, doNotLog=True, outputFile=backupFile)[0]
   if result != 0:
      if database is None:
         raise IOError("Error [%d] executing PostgreSQL database dump for all databases." % result)
      else:
         raise IOError("Error [%d] executing PostgreSQL database dump for database [%s]." % (result, database))
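As a usage sketch, the command and argument assembly above can be mirrored in plain Python. The pg_dump and pg_dumpall names stand in for the commands resolved via resolveCommand(); they are assumptions, since the real paths come from Cedar Backup's configuration:

```python
# Stand-ins for resolveCommand(POSTGRESQLDUMP_COMMAND) and
# resolveCommand(POSTGRESQLDUMPALL_COMMAND); the real paths come from
# Cedar Backup configuration.
PG_DUMP = ["pg_dump"]
PG_DUMPALL = ["pg_dumpall"]

def buildDumpCommand(user, database=None):
   """Mirror backupDatabase()'s command and argument assembly."""
   args = []
   if user is not None:
      args += ["-U", user]
   if database is None:
      return PG_DUMPALL + args            # dump all databases
   return PG_DUMP + args + [database]     # dump one named database
```

The resulting command is then executed with its output streamed into the caller's open backup file object, which is why a gzip-wrapped file works just as well as a plain one.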

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.actions.store-module.html

    Module store


    Functions

    consistencyCheck
    executeStore
    writeImage
    writeImageBlankSafe
    writeStoreIndicator

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.cli-module.html

    Module cli


    Classes

    Options

    Functions

    cli
    setupLogging
    setupPathResolver

    Variables

    COLLECT_INDEX
    COMBINE_ACTIONS
    DATE_FORMAT
    DEFAULT_CONFIG
    DEFAULT_LOGFILE
    DEFAULT_MODE
    DEFAULT_OWNERSHIP
    DISK_LOG_FORMAT
    DISK_OUTPUT_FORMAT
    INITIALIZE_INDEX
    LONG_SWITCHES
    NONCOMBINE_ACTIONS
    PURGE_INDEX
    REBUILD_INDEX
    SCREEN_LOG_FORMAT
    SCREEN_LOG_STREAM
    SHORT_SWITCHES
    STAGE_INDEX
    STORE_INDEX
    VALIDATE_INDEX
    VALID_ACTIONS
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.extend.amazons3.AmazonS3Config-class.html
    Package CedarBackup2 :: Package extend :: Module amazons3 :: Class AmazonS3Config

    Class AmazonS3Config

    source code

    object --+
             |
            AmazonS3Config
    

    Class representing Amazon S3 configuration.

    Amazon S3 configuration is used for storing backup data in Amazon's S3 cloud storage using the s3cmd tool.

    The following restrictions exist on data in this class:

    • The s3Bucket value must be a non-empty string
    • The encryptCommand value, if set, must be a non-empty string
    • The full backup size limit, if set, must be a ByteQuantity >= 0
    • The incremental backup size limit, if set, must be a ByteQuantity >= 0
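Size limits such as "2.5 GB" are carried as ByteQuantity values; the conversion idea can be sketched with a hypothetical stdlib helper, assuming binary (1024-based) units. Cedar Backup's ByteQuantity class performs the real parsing, comparison, and validation:

```python
# Hypothetical helper sketching the conversion; UNITS and toBytes() are not
# part of the Cedar Backup API.
UNITS = {"B": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def toBytes(text):
   """Convert a quantity like '2.5 GB' into a byte count (float)."""
   value, unit = text.split()
   quantity = float(value) * UNITS[unit.upper()]
   if quantity < 0:
      raise ValueError("size limits must be >= 0")
   return quantity
```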
Instance Methods
     
    __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, fullBackupSizeLimit=None, incrementalBackupSizeLimit=None)
    Constructor for the AmazonS3Config class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setWarnMidnite(self, value)
    Property target used to set the midnite warning flag.
    source code
     
    _getWarnMidnite(self)
    Property target used to get the midnite warning flag.
    source code
     
    _setS3Bucket(self, value)
    Property target used to set the S3 bucket.
    source code
     
    _getS3Bucket(self)
    Property target used to get the S3 bucket.
    source code
     
    _setEncryptCommand(self, value)
    Property target used to set the encrypt command.
    source code
     
    _getEncryptCommand(self)
    Property target used to get the encrypt command.
    source code
     
    _setFullBackupSizeLimit(self, value)
    Property target used to set the full backup size limit.
    source code
     
    _getFullBackupSizeLimit(self)
    Property target used to get the full backup size limit.
    source code
     
    _setIncrementalBackupSizeLimit(self, value)
    Property target used to set the incremental backup size limit.
    source code
     
    _getIncrementalBackupSizeLimit(self)
    Property target used to get the incremental backup size limit.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

Properties
      warnMidnite
    Whether to generate warnings for crossing midnite.
      s3Bucket
    Amazon S3 Bucket in which to store data
      encryptCommand
    Command used to encrypt data before upload to S3
      fullBackupSizeLimit
    Maximum size of a full backup, as a ByteQuantity
      incrementalBackupSizeLimit
    Maximum size of an incremental backup, as a ByteQuantity

    Inherited from object: __class__

Method Details

    __init__(self, warnMidnite=None, s3Bucket=None, encryptCommand=None, fullBackupSizeLimit=None, incrementalBackupSizeLimit=None)
    (Constructor)

    source code 

    Constructor for the AmazonS3Config class.

    Parameters:
    • warnMidnite - Whether to generate warnings for crossing midnite.
    • s3Bucket - Name of the Amazon S3 bucket in which to store the data
    • encryptCommand - Command used to encrypt backup data before upload to S3
    • fullBackupSizeLimit - Maximum size of a full backup, a ByteQuantity
    • incrementalBackupSizeLimit - Maximum size of an incremental backup, a ByteQuantity
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setWarnMidnite(self, value)

    source code 

    Property target used to set the midnite warning flag. No validations, but we normalize the value to True or False.

    _setFullBackupSizeLimit(self, value)

    source code 

    Property target used to set the full backup size limit. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

    _setIncrementalBackupSizeLimit(self, value)

    source code 

    Property target used to set the incremental backup size limit. The value must be an integer >= 0.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    warnMidnite

    Whether to generate warnings for crossing midnite.

    Get Method:
    _getWarnMidnite(self) - Property target used to get the midnite warning flag.
    Set Method:
    _setWarnMidnite(self, value) - Property target used to set the midnite warning flag.

    s3Bucket

    Amazon S3 Bucket in which to store data

    Get Method:
    _getS3Bucket(self) - Property target used to get the S3 bucket.
    Set Method:
    _setS3Bucket(self, value) - Property target used to set the S3 bucket.

    encryptCommand

    Command used to encrypt data before upload to S3

    Get Method:
    _getEncryptCommand(self) - Property target used to get the encrypt command.
    Set Method:
    _setEncryptCommand(self, value) - Property target used to set the encrypt command.

    fullBackupSizeLimit

    Maximum size of a full backup, as a ByteQuantity

    Get Method:
    _getFullBackupSizeLimit(self) - Property target used to get the full backup size limit.
    Set Method:
    _setFullBackupSizeLimit(self, value) - Property target used to set the full backup size limit.

    incrementalBackupSizeLimit

    Maximum size of an incremental backup, as a ByteQuantity

    Get Method:
    _getIncrementalBackupSizeLimit(self) - Property target used to get the incremental backup size limit.
    Set Method:
    _setIncrementalBackupSizeLimit(self, value) - Property target used to set the incremental backup size limit.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.constants-pysrc.html
    Package CedarBackup2 :: Package actions :: Module constants

    Source Code for Module CedarBackup2.actions.constants

     1  # -*- coding: iso-8859-1 -*- 
     2  # vim: set ft=python ts=3 sw=3 expandtab: 
     3  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
     4  # 
     5  #              C E D A R 
     6  #          S O L U T I O N S       "Software done right." 
     7  #           S O F T W A R E 
     8  # 
     9  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    10  # 
    11  # Author   : Kenneth J. Pronovici <pronovic@ieee.org> 
    12  # Language : Python 2 (>= 2.7) 
    13  # Project  : Cedar Backup, release 2 
    14  # Purpose  : Provides common constants used by standard actions. 
    15  # 
    16  # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # 
    17   
    18  ######################################################################## 
    19  # Module documentation 
    20  ######################################################################## 
    21   
    22  """ 
    23  Provides common constants used by standard actions. 
    24  @sort: DIR_TIME_FORMAT, DIGEST_EXTENSION, INDICATOR_PATTERN, 
    25         COLLECT_INDICATOR, STAGE_INDICATOR, STORE_INDICATOR 
    26  @author: Kenneth J. Pronovici <pronovic@ieee.org> 
    27  """ 
    28   
    29  ######################################################################## 
    30  # Module-wide constants and variables 
    31  ######################################################################## 
    32   
    33  DIR_TIME_FORMAT      = "%Y/%m/%d" 
    34  DIGEST_EXTENSION     = "sha" 
    35   
    36  INDICATOR_PATTERN    = [ r"cback\..*", ] 
    37  COLLECT_INDICATOR    = "cback.collect" 
    38  STAGE_INDICATOR      = "cback.stage" 
    39  STORE_INDICATOR      = "cback.store" 
    40   
    

CedarBackup2-2.26.5/doc/interface/crarr.png (binary PNG image omitted)

CedarBackup2-2.26.5/doc/interface/CedarBackup2.actions.constants-module.html
    Package CedarBackup2 :: Package actions :: Module constants

    Module constants

    source code

    Provides common constants used by standard actions.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Variables
      DIR_TIME_FORMAT = '%Y/%m/%d'
      DIGEST_EXTENSION = 'sha'
      INDICATOR_PATTERN = ['cback\\..*']
      COLLECT_INDICATOR = 'cback.collect'
      STAGE_INDICATOR = 'cback.stage'
      STORE_INDICATOR = 'cback.store'
      __package__ = None
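The DIR_TIME_FORMAT constant suggests the date-based directory paths used by the standard actions; a quick illustration with the standard library:

```python
import time

# DIR_TIME_FORMAT = "%Y/%m/%d" yields date-based directory paths
parsed = time.strptime("02 Jan 2016", "%d %b %Y")
path = time.strftime("%Y/%m/%d", parsed)
```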
CedarBackup2-2.26.5/doc/interface/CedarBackup2.peer.LocalPeer-class.html
    Package CedarBackup2 :: Module peer :: Class LocalPeer

    Class LocalPeer

    source code

    object --+
             |
            LocalPeer
    

    Backup peer representing a local peer in a backup pool.

    This is a class representing a local (non-network) peer in a backup pool. Local peers are backed up by simple filesystem copy operations. A local peer has associated with it a name (typically, but not necessarily, a hostname) and a collect directory.

    The public methods other than the constructor are part of a "backup peer" interface shared with the RemotePeer class.

Instance Methods
     
    __init__(self, name, collectDir, ignoreFailureMode=None)
    Initializes a local backup peer.
    source code
     
    stagePeer(self, targetDir, ownership=None, permissions=None)
    Stages data from the peer into the indicated local target directory.
    source code
     
    checkCollectIndicator(self, collectIndicator=None)
    Checks the collect indicator in the peer's staging directory.
    source code
     
    writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None)
    Writes the stage indicator in the peer's staging directory.
    source code
     
    _setName(self, value)
    Property target used to set the peer name.
    source code
     
    _getName(self)
    Property target used to get the peer name.
    source code
     
    _setCollectDir(self, value)
    Property target used to set the collect directory.
    source code
     
    _getCollectDir(self)
    Property target used to get the collect directory.
    source code
     
    _setIgnoreFailureMode(self, value)
    Property target used to set the ignoreFailure mode.
    source code
     
    _getIgnoreFailureMode(self)
    Property target used to get the ignoreFailure mode.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Static Methods
     
    _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None)
    Copies files from the source directory to the target directory.
    source code
     
    _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True)
    Copies a source file to a target file.
    source code
Properties
      name
    Name of the peer.
      collectDir
    Path to the peer's collect directory (an absolute local path).
      ignoreFailureMode
    Ignore failure mode for peer.

    Inherited from object: __class__

Method Details

    __init__(self, name, collectDir, ignoreFailureMode=None)
    (Constructor)

    source code 

    Initializes a local backup peer.

    Note that the collect directory must be an absolute path, but does not have to exist when the object is instantiated. We do a lazy validation on this value since we could (potentially) be creating peer objects before an ongoing backup completed.

    Parameters:
    • name (String, typically a hostname) - Name of the backup peer
    • collectDir (String representing an absolute local path on disk) - Path to the peer's collect directory
    • ignoreFailureMode (One of VALID_FAILURE_MODES) - Ignore failure mode for this peer
    Raises:
    • ValueError - If the name is empty.
    • ValueError - If collect directory is not an absolute path.
    Overrides: object.__init__

    stagePeer(self, targetDir, ownership=None, permissions=None)

    source code 

    Stages data from the peer into the indicated local target directory.

    The collect and target directories must both already exist before this method is called. If passed in, ownership and permissions will be applied to the files that are copied.

    Parameters:
    • targetDir (String representing a directory on disk) - Target directory to write data into
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the staged files should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the staged files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If collect directory is not a directory or does not exist
    • ValueError - If target directory is not a directory, does not exist or is not absolute.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there were no files to stage (i.e. the directory was empty)
    • IOError - If there is an IO error copying a file.
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • The caller is responsible for checking that the indicator exists, if they care. This function only stages the files within the directory.
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.

    checkCollectIndicator(self, collectIndicator=None)

    source code 

    Checks the collect indicator in the peer's staging directory.

    When a peer has completed collecting its backup files, it will write an empty indicator file into its collect directory. This method checks to see whether that indicator has been written. We're "stupid" here - if the collect directory doesn't exist, you'll naturally get back False.

    If you need to, you can override the name of the collect indicator file by passing in a different name.
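The check described here can be sketched with the standard library; the default indicator name cback.collect matches the module's DEF_COLLECT_INDICATOR, and this simplified stand-in skips the path-encoding handling of the real method:

```python
import os

def checkCollectIndicator(collectDir, collectIndicator="cback.collect"):
   """Return True if the collect indicator exists, False otherwise.

   Deliberately "stupid": a missing collect directory simply yields False.
   """
   if not os.path.isdir(collectDir):
      return False
   return os.path.exists(os.path.join(collectDir, collectIndicator))
```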

    Parameters:
    • collectIndicator (String representing name of a file in the collect directory) - Name of the collect indicator file to check
    Returns:
    Boolean true/false depending on whether the indicator exists.
    Raises:
    • ValueError - If a path cannot be encoded properly.

    writeStageIndicator(self, stageIndicator=None, ownership=None, permissions=None)

    source code 

    Writes the stage indicator in the peer's staging directory.

    When the master has completed collecting its backup files, it will write an empty indicator file into the peer's collect directory. The presence of this file implies that the staging process is complete.

    If you need to, you can override the name of the stage indicator file by passing in a different name.

    Parameters:
    • stageIndicator (String representing name of a file in the collect directory) - Name of the indicator file to write
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the indicator file should have
    • permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the indicator file should have
    Raises:
    • ValueError - If collect directory is not a directory or does not exist
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error creating the file.
    • OSError - If there is an OS error creating or changing permissions on the file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
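The indicator write can be sketched similarly; cback.stage matches the module's DEF_STAGE_INDICATOR, and this simplified stand-in omits the ownership and permissions handling of the real method:

```python
import os

def writeStageIndicator(collectDir, stageIndicator="cback.stage"):
   """Create an empty stage indicator file in the peer's collect directory."""
   if not os.path.isdir(collectDir):
      raise ValueError("collect directory must exist and be a directory")
   open(os.path.join(collectDir, stageIndicator), "w").close()
```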

    _copyLocalDir(sourceDir, targetDir, ownership=None, permissions=None)
    Static Method

    source code 

    Copies files from the source directory to the target directory.

    This function is not recursive. Only the files in the directory will be copied. Ownership and permissions will be left at their default values if new values are not specified. The source and target directories are allowed to be soft links to a directory, but besides that soft links are ignored.

    Parameters:
    • sourceDir (String representing a directory on disk) - Source directory
    • targetDir (String representing a directory on disk) - Target directory
    • ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied files should have
• permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the copied files should have
    Returns:
    Number of files copied from the source directory to the target directory.
    Raises:
    • ValueError - If source or target is not a directory or does not exist.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If there is an IO error copying the files.
• OSError - If there is an OS error copying or changing permissions on a file

    Note: If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
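The copy behavior described above can be sketched with the standard library. This simplified stand-in skips the ownership/permissions handling and error translation of the real method:

```python
import os
import shutil

def copyLocalDir(sourceDir, targetDir):
   """Non-recursive copy of regular files from sourceDir to targetDir.

   Subdirectories and soft links inside the source directory are ignored,
   and the count of copied files is returned.
   """
   count = 0
   for name in os.listdir(sourceDir):
      source = os.path.join(sourceDir, name)
      if os.path.islink(source) or not os.path.isfile(source):
         continue   # besides the directories themselves, soft links are ignored
      shutil.copy(source, os.path.join(targetDir, name))
      count += 1
   return count
```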

    _copyLocalFile(sourceFile=None, targetFile=None, ownership=None, permissions=None, overwrite=True)
    Static Method

    source code 

    Copies a source file to a target file.

    If the source file is None then the target file will be created or overwritten as an empty file. If the target file is None, this method is a no-op. Attempting to copy a soft link or a directory will result in an exception.

    Parameters:
    • sourceFile (String representing a file on disk, as an absolute path) - Source file to copy
    • targetFile (String representing a file on disk, as an absolute path) - Target file to create
• ownership (Tuple of numeric ids (uid, gid)) - Owner and group that the copied file should have
• permissions (UNIX permissions mode, specified in octal (i.e. 0640).) - Permissions that the copied file should have
    • overwrite (Boolean true/false.) - Indicates whether it's OK to overwrite the target file.
    Raises:
    • ValueError - If the passed-in source file is not a regular file.
    • ValueError - If a path cannot be encoded properly.
    • IOError - If the target file already exists.
    • IOError - If there is an IO error copying the file
    • OSError - If there is an OS error copying or changing permissions on a file
    Notes:
    • If you have user/group as strings, call the util.getUidGid function to get the associated uid/gid as an ownership tuple.
    • We will not overwrite a target file that exists when this method is invoked. If the target already exists, we'll raise an exception.

    _setName(self, value)

    source code 

    Property target used to set the peer name. The value must be a non-empty string and cannot be None.

    Raises:
    • ValueError - If the value is an empty string or None.

    _setCollectDir(self, value)

    source code 

    Property target used to set the collect directory. The value must be an absolute path and cannot be None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is None or is not an absolute path.
    • ValueError - If a path cannot be encoded properly.

    _setIgnoreFailureMode(self, value)

    source code 

    Property target used to set the ignoreFailure mode. If not None, the mode must be one of the values in VALID_FAILURE_MODES.

    Raises:
    • ValueError - If the value is not valid.

Property Details

    name

    Name of the peer.

    Get Method:
    _getName(self) - Property target used to get the peer name.
    Set Method:
    _setName(self, value) - Property target used to set the peer name.

    collectDir

    Path to the peer's collect directory (an absolute local path).

    Get Method:
    _getCollectDir(self) - Property target used to get the collect directory.
    Set Method:
    _setCollectDir(self, value) - Property target used to set the collect directory.

    ignoreFailureMode

    Ignore failure mode for peer.

    Get Method:
    _getIgnoreFailureMode(self) - Property target used to get the ignoreFailure mode.
    Set Method:
    _setIgnoreFailureMode(self, value) - Property target used to set the ignoreFailure mode.

CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.tools.amazons3-module.html

    Module amazons3


    Classes

    Options

    Functions

    cli

    Variables

    AWS_COMMAND
    LONG_SWITCHES
    SHORT_SWITCHES
    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.peer-module.html
    Package CedarBackup2 :: Module peer

    Module peer

    source code

    Provides backup peer-related objects and utility functions.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

Classes
      LocalPeer
    Backup peer representing a local peer in a backup pool.
      RemotePeer
    Backup peer representing a remote peer in a backup pool.
Variables
      logger = logging.getLogger("CedarBackup2.log.peer")
      DEF_RCP_COMMAND = ['/usr/bin/scp', '-B', '-q', '-C']
      DEF_RSH_COMMAND = ['/usr/bin/ssh']
      DEF_CBACK_COMMAND = '/usr/bin/cback'
      DEF_COLLECT_INDICATOR = 'cback.collect'
    Name of the default collect indicator file.
      DEF_STAGE_INDICATOR = 'cback.stage'
    Name of the default stage indicator file.
      SU_COMMAND = ['su']
      __package__ = 'CedarBackup2'
CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.tools.span-module.html

    Module span


    Classes

    SpanOptions

    Functions

    cli

    Variables

    __package__
    logger

CedarBackup2-2.26.5/doc/interface/CedarBackup2.cli._ActionItem-class.html
    Package CedarBackup2 :: Module cli :: Class _ActionItem

    Class _ActionItem

    source code

    object --+
             |
            _ActionItem
    

    Class representing a single action to be executed.

    This class represents a single named action to be executed, and understands how to execute that action.

    The built-in actions will use only the options and config values. We also pass in the config path so that extension modules can re-parse configuration if they want to, to add in extra information.

    This class is also where pre-action and post-action hooks are executed. An action item is instantiated in terms of optional pre- and post-action hook objects (config.ActionHook), which are then executed at the appropriate time (if set).


    Note: The comparison operators for this class have been implemented to only compare based on the index and SORT_ORDER value, and ignore all other values. This is so that the action set list can be easily sorted first by type (_ActionItem before _ManagedActionItem) and then by index within type.
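The sorting behavior in the note can be illustrated with a hypothetical sort key. Only _ActionItem's SORT_ORDER of 0 is documented here, so the value of 1 used for the managed item below is illustrative:

```python
from collections import namedtuple

# Hypothetical stand-in items; Item and actionSortKey are not part of the
# Cedar Backup API.
Item = namedtuple("Item", ["SORT_ORDER", "index", "name"])

def actionSortKey(item):
   """Sort first by type (SORT_ORDER), then by index within type."""
   return (item.SORT_ORDER, item.index)

items = [Item(1, 0, "managed"), Item(0, 2, "store"), Item(0, 1, "stage")]
ordered = [item.name for item in sorted(items, key=actionSortKey)]
```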

Instance Methods
     
    __init__(self, index, name, preHooks, postHooks, function)
    Default constructor.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    executeAction(self, configPath, options, config)
    Executes the action associated with an item, including hooks.
    source code
     
    _executeAction(self, configPath, options, config)
    Executes the action, specifically the function associated with the action.
    source code
     
    _executeHook(self, type, hook)
    Executes a hook command via util.executeCommand().
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Class Variables
      SORT_ORDER = 0
    Defines a sort order to order properly between types.
Properties

    Inherited from object: __class__

Method Details

    __init__(self, index, name, preHooks, postHooks, function)
    (Constructor)

    source code 

    Default constructor.

    It's OK to pass None for index, preHooks or postHooks, but not for name.

    Parameters:
    • index - Index of the item (or None).
    • name - Name of the action that is being executed.
    • preHooks - List of pre-action hooks in terms of an ActionHook object, or None.
    • postHooks - List of post-action hooks in terms of an ActionHook object, or None.
    • function - Reference to function associated with item.
    Overrides: object.__init__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. The only thing we compare is the item's index.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    executeAction(self, configPath, options, config)

    source code 

    Executes the action associated with an item, including hooks.

    See class notes for more details on how the action is executed.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.
    Raises:
    • Exception - If there is a problem executing the action.

    _executeAction(self, configPath, options, config)

    source code 

    Executes the action, specifically the function associated with the action.

    Parameters:
    • configPath - Path to configuration file on disk.
    • options - Command-line options to be passed to action.
    • config - Parsed configuration to be passed to action.

    _executeHook(self, type, hook)

    source code 

    Executes a hook command via util.executeCommand().

    Parameters:
    • type - String describing the type of hook, for logging.
    • hook - Hook, in terms of a ActionHook object.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.writers.dvdwriter.MediaCapacity-class.html
    Package CedarBackup2 :: Package writers :: Module dvdwriter :: Class MediaCapacity

    Class MediaCapacity

    source code

    object --+
             |
            MediaCapacity
    

    Class encapsulating information about DVD media capacity.

    Space used and space available do not include any information about media lead-in or other overhead.
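The capacity arithmetic documented below (totalCapacity is used plus available, utilized is a percentage of the total) can be sketched with a stdlib-only stand-in; the zero-capacity guard is an assumption, not documented behavior:

```python
class DvdCapacitySketch(object):
   """Stdlib-only sketch of the MediaCapacity arithmetic."""

   def __init__(self, bytesUsed, bytesAvailable):
      self.bytesUsed = float(bytesUsed)
      self.bytesAvailable = float(bytesAvailable)

   @property
   def totalCapacity(self):
      return self.bytesUsed + self.bytesAvailable   # used + available

   @property
   def utilized(self):
      if self.totalCapacity == 0.0:
         return 0.0   # assumption: avoid division by zero for empty media
      return (self.bytesUsed / self.totalCapacity) * 100.0
```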

Instance Methods
     
    __init__(self, bytesUsed, bytesAvailable)
    Initializes a capacity object.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    _getBytesUsed(self)
    Property target used to get the bytes-used value.
    source code
     
    _getBytesAvailable(self)
    Property target available to get the bytes-available value.
    source code
     
    _getTotalCapacity(self)
    Property target to get the total capacity (used + available).
    source code
     
    _getUtilized(self)
    Property target to get the percent of capacity which is utilized.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __subclasshook__

Properties
      bytesUsed
    Space used on disc, in bytes.
      bytesAvailable
    Space available on disc, in bytes.
      totalCapacity
    Total capacity of the disc, in bytes.
      utilized
    Percentage of the total capacity which is utilized.

    Inherited from object: __class__

Method Details

    __init__(self, bytesUsed, bytesAvailable)
    (Constructor)

    source code 

    Initializes a capacity object.

    Raises:
    • ValueError - If the bytes used and available values are not floats.
    Overrides: object.__init__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

Property Details

    bytesUsed

    Space used on disc, in bytes.

    Get Method:
    _getBytesUsed(self) - Property target used to get the bytes-used value.

    bytesAvailable

    Space available on disc, in bytes.

    Get Method:
    _getBytesAvailable(self) - Property target available to get the bytes-available value.

    totalCapacity

    Total capacity of the disc, in bytes.

    Get Method:
    _getTotalCapacity(self) - Property target to get the total capacity (used + available).

    utilized

    Percentage of the total capacity which is utilized.

    Get Method:
    _getUtilized(self) - Property target to get the percent of capacity which is utilized.

CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.PreActionHook-class.html
    Package CedarBackup2 :: Module config :: Class PreActionHook
    [hide private]
    [frames] | no frames

    Class PreActionHook

    source code

    object --+    
             |    
    ActionHook --+
                 |
                PreActionHook
    

    Class representing a pre-action hook associated with an action.

    A hook associated with an action is a shell command to be executed either before or after a named action is executed. In this case, a pre-action hook is executed before the named action.

    The following restrictions exist on data in this class:

    • The action name must be a non-empty string consisting of lower-case letters and digits.
    • The shell command must be a non-empty string.

    The internal before instance variable is always set to True in this class.

    Instance Methods [hide private]
     
    __init__(self, action=None, command=None)
    Constructor for the PreActionHook class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code

    Inherited from ActionHook: __str__, __cmp__

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]

    Inherited from ActionHook: action, command, before, after

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, action=None, command=None)
    (Constructor)

    source code 

    Constructor for the PreActionHook class.

    Parameters:
    • action - Action this hook is associated with
    • command - Shell command to execute
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    CedarBackup2-2.26.5/doc/interface/epydoc.js0000664000175000017500000002452512642035643022121 0ustar pronovicpronovic00000000000000function toggle_private() { // Search for any private/public links on this page. Store // their old text in "cmd," so we will know what action to // take; and change their text to the opposite action. var cmd = "?"; var elts = document.getElementsByTagName("a"); for(var i=0; i...
    "; elt.innerHTML = s; } } function toggle(id) { elt = document.getElementById(id+"-toggle"); if (elt.innerHTML == "-") collapse(id); else expand(id); return false; } function highlight(id) { var elt = document.getElementById(id+"-def"); if (elt) elt.className = "py-highlight-hdr"; var elt = document.getElementById(id+"-expanded"); if (elt) elt.className = "py-highlight"; var elt = document.getElementById(id+"-collapsed"); if (elt) elt.className = "py-highlight"; } function num_lines(s) { var n = 1; var pos = s.indexOf("\n"); while ( pos > 0) { n += 1; pos = s.indexOf("\n", pos+1); } return n; } // Collapse all blocks that mave more than `min_lines` lines. function collapse_all(min_lines) { var elts = document.getElementsByTagName("div"); for (var i=0; i 0) if (elt.id.substring(split, elt.id.length) == "-expanded") if (num_lines(elt.innerHTML) > min_lines) collapse(elt.id.substring(0, split)); } } function expandto(href) { var start = href.indexOf("#")+1; if (start != 0 && start != href.length) { if (href.substring(start, href.length) != "-") { collapse_all(4); pos = href.indexOf(".", start); while (pos != -1) { var id = href.substring(start, pos); expand(id); pos = href.indexOf(".", pos+1); } var id = href.substring(start, href.length); expand(id); highlight(id); } } } function kill_doclink(id) { var parent = document.getElementById(id); parent.removeChild(parent.childNodes.item(0)); } function auto_kill_doclink(ev) { if (!ev) var ev = window.event; if (!this.contains(ev.toElement)) { var parent = document.getElementById(this.parentID); parent.removeChild(parent.childNodes.item(0)); } } function doclink(id, name, targets_id) { var elt = document.getElementById(id); // If we already opened the box, then destroy it. // (This case should never occur, but leave it in just in case.) if (elt.childNodes.length > 1) { elt.removeChild(elt.childNodes.item(0)); } else { // The outer box: relative + inline positioning. 
var box1 = document.createElement("div"); box1.style.position = "relative"; box1.style.display = "inline"; box1.style.top = 0; box1.style.left = 0; // A shadow for fun var shadow = document.createElement("div"); shadow.style.position = "absolute"; shadow.style.left = "-1.3em"; shadow.style.top = "-1.3em"; shadow.style.background = "#404040"; // The inner box: absolute positioning. var box2 = document.createElement("div"); box2.style.position = "relative"; box2.style.border = "1px solid #a0a0a0"; box2.style.left = "-.2em"; box2.style.top = "-.2em"; box2.style.background = "white"; box2.style.padding = ".3em .4em .3em .4em"; box2.style.fontStyle = "normal"; box2.onmouseout=auto_kill_doclink; box2.parentID = id; // Get the targets var targets_elt = document.getElementById(targets_id); var targets = targets_elt.getAttribute("targets"); var links = ""; target_list = targets.split(","); for (var i=0; i" + target[0] + ""; } // Put it all together. elt.insertBefore(box1, elt.childNodes.item(0)); //box1.appendChild(box2); box1.appendChild(shadow); shadow.appendChild(box2); box2.innerHTML = "Which "+name+" do you want to see documentation for?" + ""; } return false; } function get_anchor() { var href = location.href; var start = href.indexOf("#")+1; if ((start != 0) && (start != href.length)) return href.substring(start, href.length); } function redirect_url(dottedName) { // Scan through each element of the "pages" list, and check // if "name" matches with any of them. for (var i=0; i-m" or "-c"; // extract the portion & compare it to dottedName. var pagename = pages[i].substring(0, pages[i].length-2); if (pagename == dottedName.substring(0,pagename.length)) { // We've found a page that matches `dottedName`; // construct its URL, using leftover `dottedName` // content to form an anchor. 
var pagetype = pages[i].charAt(pages[i].length-1); var url = pagename + ((pagetype=="m")?"-module.html": "-class.html"); if (dottedName.length > pagename.length) url += "#" + dottedName.substring(pagename.length+1, dottedName.length); return url; } } } CedarBackup2-2.26.5/doc/interface/toc-CedarBackup2.writers.cdwriter-module.html0000664000175000017500000000503012642035643030770 0ustar pronovicpronovic00000000000000 cdwriter

    Module cdwriter


    Classes

    CdWriter
    MediaCapacity
    MediaDefinition

    Variables

    CDRECORD_COMMAND
    EJECT_COMMAND
    MEDIA_CDRW_74
    MEDIA_CDRW_80
    MEDIA_CDR_74
    MEDIA_CDR_80
    MKISOFS_COMMAND
    __package__
    logger

    [hide private] CedarBackup2-2.26.5/doc/interface/api-objects.txt0000664000175000017500000065207612642035647023255 0ustar pronovicpronovic00000000000000CedarBackup2 CedarBackup2-module.html CedarBackup2.__package__ CedarBackup2-module.html#__package__ CedarBackup2.action CedarBackup2.action-module.html CedarBackup2.action.executePurge CedarBackup2.actions.purge-module.html#executePurge CedarBackup2.action.executeRebuild CedarBackup2.actions.rebuild-module.html#executeRebuild CedarBackup2.action.executeStage CedarBackup2.actions.stage-module.html#executeStage CedarBackup2.action.__package__ CedarBackup2.action-module.html#__package__ CedarBackup2.action.executeStore CedarBackup2.actions.store-module.html#executeStore CedarBackup2.action.executeCollect CedarBackup2.actions.collect-module.html#executeCollect CedarBackup2.action.executeValidate CedarBackup2.actions.validate-module.html#executeValidate CedarBackup2.actions CedarBackup2.actions-module.html CedarBackup2.actions.__package__ CedarBackup2.actions-module.html#__package__ CedarBackup2.actions.collect CedarBackup2.actions.collect-module.html CedarBackup2.actions.collect._getTarfilePath CedarBackup2.actions.collect-module.html#_getTarfilePath CedarBackup2.actions.collect._getCollectMode CedarBackup2.actions.collect-module.html#_getCollectMode CedarBackup2.actions.collect._getArchiveMode CedarBackup2.actions.collect-module.html#_getArchiveMode CedarBackup2.actions.collect._writeDigest CedarBackup2.actions.collect-module.html#_writeDigest CedarBackup2.actions.collect.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.collect.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.actions.collect.__package__ CedarBackup2.actions.collect-module.html#__package__ CedarBackup2.actions.collect._executeBackup CedarBackup2.actions.collect-module.html#_executeBackup CedarBackup2.actions.collect._loadDigest CedarBackup2.actions.collect-module.html#_loadDigest 
CedarBackup2.actions.collect._collectFile CedarBackup2.actions.collect-module.html#_collectFile CedarBackup2.actions.collect.logger CedarBackup2.actions.collect-module.html#logger CedarBackup2.actions.collect.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.actions.collect._getDereference CedarBackup2.actions.collect-module.html#_getDereference CedarBackup2.actions.collect.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.actions.collect._getLinkDepth CedarBackup2.actions.collect-module.html#_getLinkDepth CedarBackup2.actions.collect._getRecursionLevel CedarBackup2.actions.collect-module.html#_getRecursionLevel CedarBackup2.actions.collect.executeCollect CedarBackup2.actions.collect-module.html#executeCollect CedarBackup2.actions.collect._getIgnoreFile CedarBackup2.actions.collect-module.html#_getIgnoreFile CedarBackup2.actions.collect._getExclusions CedarBackup2.actions.collect-module.html#_getExclusions CedarBackup2.actions.collect._collectDirectory CedarBackup2.actions.collect-module.html#_collectDirectory CedarBackup2.actions.collect._getDigestPath CedarBackup2.actions.collect-module.html#_getDigestPath CedarBackup2.actions.collect.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.actions.constants CedarBackup2.actions.constants-module.html CedarBackup2.actions.constants.INDICATOR_PATTERN CedarBackup2.actions.constants-module.html#INDICATOR_PATTERN CedarBackup2.actions.constants.STAGE_INDICATOR CedarBackup2.actions.constants-module.html#STAGE_INDICATOR CedarBackup2.actions.constants.STORE_INDICATOR CedarBackup2.actions.constants-module.html#STORE_INDICATOR CedarBackup2.actions.constants.DIR_TIME_FORMAT CedarBackup2.actions.constants-module.html#DIR_TIME_FORMAT CedarBackup2.actions.constants.__package__ CedarBackup2.actions.constants-module.html#__package__ CedarBackup2.actions.constants.COLLECT_INDICATOR CedarBackup2.actions.constants-module.html#COLLECT_INDICATOR 
CedarBackup2.actions.constants.DIGEST_EXTENSION CedarBackup2.actions.constants-module.html#DIGEST_EXTENSION CedarBackup2.actions.initialize CedarBackup2.actions.initialize-module.html CedarBackup2.actions.initialize.logger CedarBackup2.actions.initialize-module.html#logger CedarBackup2.actions.initialize.initializeMediaState CedarBackup2.actions.util-module.html#initializeMediaState CedarBackup2.actions.initialize.executeInitialize CedarBackup2.actions.initialize-module.html#executeInitialize CedarBackup2.actions.initialize.__package__ CedarBackup2.actions.initialize-module.html#__package__ CedarBackup2.actions.purge CedarBackup2.actions.purge-module.html CedarBackup2.actions.purge.executePurge CedarBackup2.actions.purge-module.html#executePurge CedarBackup2.actions.purge.logger CedarBackup2.actions.purge-module.html#logger CedarBackup2.actions.purge.__package__ CedarBackup2.actions.purge-module.html#__package__ CedarBackup2.actions.rebuild CedarBackup2.actions.rebuild-module.html CedarBackup2.actions.rebuild.writeStoreIndicator CedarBackup2.actions.store-module.html#writeStoreIndicator CedarBackup2.actions.rebuild.executeRebuild CedarBackup2.actions.rebuild-module.html#executeRebuild CedarBackup2.actions.rebuild.writeImage CedarBackup2.actions.store-module.html#writeImage CedarBackup2.actions.rebuild.__package__ CedarBackup2.actions.rebuild-module.html#__package__ CedarBackup2.actions.rebuild.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.actions.rebuild._findRebuildDirs CedarBackup2.actions.rebuild-module.html#_findRebuildDirs CedarBackup2.actions.rebuild.deriveDayOfWeek CedarBackup2.util-module.html#deriveDayOfWeek CedarBackup2.actions.rebuild.consistencyCheck CedarBackup2.actions.store-module.html#consistencyCheck CedarBackup2.actions.rebuild.logger CedarBackup2.actions.rebuild-module.html#logger CedarBackup2.actions.stage CedarBackup2.actions.stage-module.html CedarBackup2.actions.stage._getRcpCommand 
CedarBackup2.actions.stage-module.html#_getRcpCommand CedarBackup2.actions.stage._getLocalUser CedarBackup2.actions.stage-module.html#_getLocalUser CedarBackup2.actions.stage._getRemotePeers CedarBackup2.actions.stage-module.html#_getRemotePeers CedarBackup2.actions.stage.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.actions.stage._createStagingDirs CedarBackup2.actions.stage-module.html#_createStagingDirs CedarBackup2.actions.stage.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.actions.stage.executeStage CedarBackup2.actions.stage-module.html#executeStage CedarBackup2.actions.stage.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.stage.__package__ CedarBackup2.actions.stage-module.html#__package__ CedarBackup2.actions.stage.logger CedarBackup2.actions.stage-module.html#logger CedarBackup2.actions.stage._getLocalPeers CedarBackup2.actions.stage-module.html#_getLocalPeers CedarBackup2.actions.stage.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.actions.stage._getDailyDir CedarBackup2.actions.stage-module.html#_getDailyDir CedarBackup2.actions.stage.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.actions.stage._getIgnoreFailuresFlag CedarBackup2.actions.stage-module.html#_getIgnoreFailuresFlag CedarBackup2.actions.stage._getRemoteUser CedarBackup2.actions.stage-module.html#_getRemoteUser CedarBackup2.actions.store CedarBackup2.actions.store-module.html CedarBackup2.actions.store.writeImage CedarBackup2.actions.store-module.html#writeImage CedarBackup2.actions.store.executeStore CedarBackup2.actions.store-module.html#executeStore CedarBackup2.actions.store._getNewDisc CedarBackup2.actions.store-module.html#_getNewDisc CedarBackup2.actions.store.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.store.createWriter CedarBackup2.actions.util-module.html#createWriter 
CedarBackup2.actions.store.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.actions.store.unmount CedarBackup2.util-module.html#unmount CedarBackup2.actions.store.__package__ CedarBackup2.actions.store-module.html#__package__ CedarBackup2.actions.store.writeStoreIndicator CedarBackup2.actions.store-module.html#writeStoreIndicator CedarBackup2.actions.store.logger CedarBackup2.actions.store-module.html#logger CedarBackup2.actions.store.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.actions.store.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.actions.store._findCorrectDailyDir CedarBackup2.actions.store-module.html#_findCorrectDailyDir CedarBackup2.actions.store.writeImageBlankSafe CedarBackup2.actions.store-module.html#writeImageBlankSafe CedarBackup2.actions.store.buildMediaLabel CedarBackup2.actions.util-module.html#buildMediaLabel CedarBackup2.actions.store.compareContents CedarBackup2.filesystem-module.html#compareContents CedarBackup2.actions.store.consistencyCheck CedarBackup2.actions.store-module.html#consistencyCheck CedarBackup2.actions.store.mount CedarBackup2.util-module.html#mount CedarBackup2.actions.util CedarBackup2.actions.util-module.html CedarBackup2.actions.util.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.actions.util.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.actions.util.createWriter CedarBackup2.actions.util-module.html#createWriter CedarBackup2.actions.util.__package__ CedarBackup2.actions.util-module.html#__package__ CedarBackup2.actions.util.readMediaLabel CedarBackup2.writers.util-module.html#readMediaLabel CedarBackup2.actions.util.logger CedarBackup2.actions.util-module.html#logger CedarBackup2.actions.util._getMediaType CedarBackup2.actions.util-module.html#_getMediaType CedarBackup2.actions.util._getDeviceType CedarBackup2.actions.util-module.html#_getDeviceType 
CedarBackup2.actions.util.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.actions.util.getBackupFiles CedarBackup2.actions.util-module.html#getBackupFiles CedarBackup2.actions.util.MEDIA_LABEL_PREFIX CedarBackup2.actions.util-module.html#MEDIA_LABEL_PREFIX CedarBackup2.actions.util.deviceMounted CedarBackup2.util-module.html#deviceMounted CedarBackup2.actions.util.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.actions.util.buildMediaLabel CedarBackup2.actions.util-module.html#buildMediaLabel CedarBackup2.actions.util.initializeMediaState CedarBackup2.actions.util-module.html#initializeMediaState CedarBackup2.actions.validate CedarBackup2.actions.validate-module.html CedarBackup2.actions.validate._checkDir CedarBackup2.actions.validate-module.html#_checkDir CedarBackup2.actions.validate._validatePurge CedarBackup2.actions.validate-module.html#_validatePurge CedarBackup2.actions.validate._validateReference CedarBackup2.actions.validate-module.html#_validateReference CedarBackup2.actions.validate._validateStage CedarBackup2.actions.validate-module.html#_validateStage CedarBackup2.actions.validate._validateOptions CedarBackup2.actions.validate-module.html#_validateOptions CedarBackup2.actions.validate.__package__ CedarBackup2.actions.validate-module.html#__package__ CedarBackup2.actions.validate.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.actions.validate._validateExtensions CedarBackup2.actions.validate-module.html#_validateExtensions CedarBackup2.actions.validate._validateCollect CedarBackup2.actions.validate-module.html#_validateCollect CedarBackup2.actions.validate.getFunctionReference CedarBackup2.util-module.html#getFunctionReference CedarBackup2.actions.validate.executeValidate CedarBackup2.actions.validate-module.html#executeValidate CedarBackup2.actions.validate._validateStore CedarBackup2.actions.validate-module.html#_validateStore CedarBackup2.actions.validate.createWriter 
CedarBackup2.actions.util-module.html#createWriter CedarBackup2.actions.validate.logger CedarBackup2.actions.validate-module.html#logger CedarBackup2.cli CedarBackup2.cli-module.html CedarBackup2.cli.SHORT_SWITCHES CedarBackup2.cli-module.html#SHORT_SWITCHES CedarBackup2.cli.executeRebuild CedarBackup2.actions.rebuild-module.html#executeRebuild CedarBackup2.cli.LONG_SWITCHES CedarBackup2.cli-module.html#LONG_SWITCHES CedarBackup2.cli.DISK_LOG_FORMAT CedarBackup2.cli-module.html#DISK_LOG_FORMAT CedarBackup2.cli.DEFAULT_LOGFILE CedarBackup2.cli-module.html#DEFAULT_LOGFILE CedarBackup2.cli.DEFAULT_MODE CedarBackup2.cli-module.html#DEFAULT_MODE CedarBackup2.cli.executeStore CedarBackup2.actions.store-module.html#executeStore CedarBackup2.cli._usage CedarBackup2.cli-module.html#_usage CedarBackup2.cli.getFunctionReference CedarBackup2.util-module.html#getFunctionReference CedarBackup2.cli._setupDiskOutputLogging CedarBackup2.cli-module.html#_setupDiskOutputLogging CedarBackup2.cli.cli CedarBackup2.cli-module.html#cli CedarBackup2.cli.customizeOverrides CedarBackup2.customize-module.html#customizeOverrides CedarBackup2.cli.sortDict CedarBackup2.util-module.html#sortDict CedarBackup2.cli.__package__ CedarBackup2.cli-module.html#__package__ CedarBackup2.cli.DISK_OUTPUT_FORMAT CedarBackup2.cli-module.html#DISK_OUTPUT_FORMAT CedarBackup2.cli.executeValidate CedarBackup2.actions.validate-module.html#executeValidate CedarBackup2.cli.VALIDATE_INDEX CedarBackup2.cli-module.html#VALIDATE_INDEX CedarBackup2.cli.executeInitialize CedarBackup2.actions.initialize-module.html#executeInitialize CedarBackup2.cli.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.cli._setupScreenFlowLogging CedarBackup2.cli-module.html#_setupScreenFlowLogging CedarBackup2.cli.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.cli.executeCollect CedarBackup2.actions.collect-module.html#executeCollect CedarBackup2.cli.logger CedarBackup2.cli-module.html#logger 
CedarBackup2.cli.splitCommandLine CedarBackup2.util-module.html#splitCommandLine CedarBackup2.cli.NONCOMBINE_ACTIONS CedarBackup2.cli-module.html#NONCOMBINE_ACTIONS CedarBackup2.cli._setupLogfile CedarBackup2.cli-module.html#_setupLogfile CedarBackup2.cli.STAGE_INDEX CedarBackup2.cli-module.html#STAGE_INDEX CedarBackup2.cli._setupOutputLogging CedarBackup2.cli-module.html#_setupOutputLogging CedarBackup2.cli.executePurge CedarBackup2.actions.purge-module.html#executePurge CedarBackup2.cli.STORE_INDEX CedarBackup2.cli-module.html#STORE_INDEX CedarBackup2.cli.COLLECT_INDEX CedarBackup2.cli-module.html#COLLECT_INDEX CedarBackup2.cli.SCREEN_LOG_STREAM CedarBackup2.cli-module.html#SCREEN_LOG_STREAM CedarBackup2.cli.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.cli.COMBINE_ACTIONS CedarBackup2.cli-module.html#COMBINE_ACTIONS CedarBackup2.cli.DEFAULT_CONFIG CedarBackup2.cli-module.html#DEFAULT_CONFIG CedarBackup2.cli.executeStage CedarBackup2.actions.stage-module.html#executeStage CedarBackup2.cli.DEFAULT_OWNERSHIP CedarBackup2.cli-module.html#DEFAULT_OWNERSHIP CedarBackup2.cli.DATE_FORMAT CedarBackup2.cli-module.html#DATE_FORMAT CedarBackup2.cli.setupPathResolver CedarBackup2.cli-module.html#setupPathResolver CedarBackup2.cli.SCREEN_LOG_FORMAT CedarBackup2.cli-module.html#SCREEN_LOG_FORMAT CedarBackup2.cli.setupLogging CedarBackup2.cli-module.html#setupLogging CedarBackup2.cli._diagnostics CedarBackup2.cli-module.html#_diagnostics CedarBackup2.cli.INITIALIZE_INDEX CedarBackup2.cli-module.html#INITIALIZE_INDEX CedarBackup2.cli._version CedarBackup2.cli-module.html#_version CedarBackup2.cli.PURGE_INDEX CedarBackup2.cli-module.html#PURGE_INDEX CedarBackup2.cli.REBUILD_INDEX CedarBackup2.cli-module.html#REBUILD_INDEX CedarBackup2.cli.VALID_ACTIONS CedarBackup2.cli-module.html#VALID_ACTIONS CedarBackup2.cli._setupFlowLogging CedarBackup2.cli-module.html#_setupFlowLogging CedarBackup2.cli._setupDiskFlowLogging 
CedarBackup2.cli-module.html#_setupDiskFlowLogging CedarBackup2.config CedarBackup2.config-module.html CedarBackup2.config.VALID_MEDIA_TYPES CedarBackup2.config-module.html#VALID_MEDIA_TYPES CedarBackup2.config.VALID_ORDER_MODES CedarBackup2.config-module.html#VALID_ORDER_MODES CedarBackup2.config.VALID_COLLECT_MODES CedarBackup2.config-module.html#VALID_COLLECT_MODES CedarBackup2.config.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.config.addByteQuantityNode CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.config.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.config.REWRITABLE_MEDIA_TYPES CedarBackup2.config-module.html#REWRITABLE_MEDIA_TYPES CedarBackup2.config.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.config.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.config.VALID_ARCHIVE_MODES CedarBackup2.config-module.html#VALID_ARCHIVE_MODES CedarBackup2.config.serializeDom CedarBackup2.xmlutil-module.html#serializeDom CedarBackup2.config.DEFAULT_MEDIA_TYPE CedarBackup2.config-module.html#DEFAULT_MEDIA_TYPE CedarBackup2.config.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.config.VALID_CD_MEDIA_TYPES CedarBackup2.config-module.html#VALID_CD_MEDIA_TYPES CedarBackup2.config.__package__ CedarBackup2.config-module.html#__package__ CedarBackup2.config.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.config.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.config.checkUnique CedarBackup2.util-module.html#checkUnique CedarBackup2.config.readInteger CedarBackup2.xmlutil-module.html#readInteger CedarBackup2.config.parseCommaSeparatedString CedarBackup2.util-module.html#parseCommaSeparatedString CedarBackup2.config.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.config.logger CedarBackup2.config-module.html#logger CedarBackup2.config.displayBytes 
CedarBackup2.util-module.html#displayBytes CedarBackup2.config.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.config.VALID_DEVICE_TYPES CedarBackup2.config-module.html#VALID_DEVICE_TYPES CedarBackup2.config.DEFAULT_DEVICE_TYPE CedarBackup2.config-module.html#DEFAULT_DEVICE_TYPE CedarBackup2.config.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.config.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.config.VALID_FAILURE_MODES CedarBackup2.config-module.html#VALID_FAILURE_MODES CedarBackup2.config.VALID_BYTE_UNITS CedarBackup2.config-module.html#VALID_BYTE_UNITS CedarBackup2.config.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.config.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.config.VALID_BLANK_MODES CedarBackup2.config-module.html#VALID_BLANK_MODES CedarBackup2.config.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.config.VALID_COMPRESS_MODES CedarBackup2.config-module.html#VALID_COMPRESS_MODES CedarBackup2.config.ACTION_NAME_REGEX CedarBackup2.config-module.html#ACTION_NAME_REGEX CedarBackup2.config.createOutputDom CedarBackup2.xmlutil-module.html#createOutputDom CedarBackup2.config.VALID_DVD_MEDIA_TYPES CedarBackup2.config-module.html#VALID_DVD_MEDIA_TYPES CedarBackup2.config.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.config.addIntegerNode CedarBackup2.xmlutil-module.html#addIntegerNode CedarBackup2.customize CedarBackup2.customize-module.html CedarBackup2.customize.DEBIAN_MKISOFS CedarBackup2.customize-module.html#DEBIAN_MKISOFS CedarBackup2.customize.customizeOverrides CedarBackup2.customize-module.html#customizeOverrides CedarBackup2.customize.__package__ CedarBackup2.customize-module.html#__package__ CedarBackup2.customize.PLATFORM CedarBackup2.customize-module.html#PLATFORM CedarBackup2.customize.DEBIAN_CDRECORD CedarBackup2.customize-module.html#DEBIAN_CDRECORD 
CedarBackup2.customize.logger CedarBackup2.customize-module.html#logger CedarBackup2.extend CedarBackup2.extend-module.html CedarBackup2.extend.__package__ CedarBackup2.extend-module.html#__package__ CedarBackup2.extend.amazons3 CedarBackup2.extend.amazons3-module.html CedarBackup2.extend.amazons3.executeAction CedarBackup2.extend.amazons3-module.html#executeAction CedarBackup2.extend.amazons3.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.amazons3.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.amazons3.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.amazons3.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.extend.amazons3.SU_COMMAND CedarBackup2.extend.amazons3-module.html#SU_COMMAND CedarBackup2.extend.amazons3.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.extend.amazons3.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.extend.amazons3._verifyUpload CedarBackup2.extend.amazons3-module.html#_verifyUpload CedarBackup2.extend.amazons3.__package__ CedarBackup2.extend.amazons3-module.html#__package__ CedarBackup2.extend.amazons3.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.amazons3.logger CedarBackup2.extend.amazons3-module.html#logger CedarBackup2.extend.amazons3.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.extend.amazons3._applySizeLimits CedarBackup2.extend.amazons3-module.html#_applySizeLimits CedarBackup2.extend.amazons3.AWS_COMMAND CedarBackup2.extend.amazons3-module.html#AWS_COMMAND CedarBackup2.extend.amazons3._clearExistingBackup CedarBackup2.extend.amazons3-module.html#_clearExistingBackup CedarBackup2.extend.amazons3.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.amazons3.resolveCommand CedarBackup2.util-module.html#resolveCommand 
CedarBackup2.extend.amazons3.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.amazons3._findCorrectDailyDir CedarBackup2.extend.amazons3-module.html#_findCorrectDailyDir CedarBackup2.extend.amazons3.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.amazons3._writeToAmazonS3 CedarBackup2.extend.amazons3-module.html#_writeToAmazonS3 CedarBackup2.extend.amazons3.STORE_INDICATOR CedarBackup2.extend.amazons3-module.html#STORE_INDICATOR CedarBackup2.extend.amazons3.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.extend.amazons3.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.extend.amazons3._encryptStagingDir CedarBackup2.extend.amazons3-module.html#_encryptStagingDir CedarBackup2.extend.amazons3.addByteQuantityNode CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.extend.amazons3._uploadStagingDir CedarBackup2.extend.amazons3-module.html#_uploadStagingDir CedarBackup2.extend.amazons3.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.extend.amazons3._writeStoreIndicator CedarBackup2.extend.amazons3-module.html#_writeStoreIndicator CedarBackup2.extend.capacity CedarBackup2.extend.capacity-module.html CedarBackup2.extend.capacity.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.extend.capacity.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.extend.capacity.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.capacity.executeAction CedarBackup2.extend.capacity-module.html#executeAction CedarBackup2.extend.capacity.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.capacity.__package__ CedarBackup2.extend.capacity-module.html#__package__ CedarBackup2.extend.capacity.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.capacity.addByteQuantityNode 
CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.extend.capacity.checkMediaState CedarBackup2.actions.util-module.html#checkMediaState CedarBackup2.extend.capacity.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.capacity.logger CedarBackup2.extend.capacity-module.html#logger CedarBackup2.extend.capacity.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.capacity.createWriter CedarBackup2.actions.util-module.html#createWriter CedarBackup2.extend.encrypt CedarBackup2.extend.encrypt-module.html CedarBackup2.extend.encrypt.executeAction CedarBackup2.extend.encrypt-module.html#executeAction CedarBackup2.extend.encrypt.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.encrypt.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.encrypt.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.extend.encrypt._encryptFile CedarBackup2.extend.encrypt-module.html#_encryptFile CedarBackup2.extend.encrypt.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.encrypt.__package__ CedarBackup2.extend.encrypt-module.html#__package__ CedarBackup2.extend.encrypt.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.encrypt.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.encrypt._encryptDailyDir CedarBackup2.extend.encrypt-module.html#_encryptDailyDir CedarBackup2.extend.encrypt.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.extend.encrypt.logger CedarBackup2.extend.encrypt-module.html#logger CedarBackup2.extend.encrypt.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.encrypt.getBackupFiles CedarBackup2.actions.util-module.html#getBackupFiles CedarBackup2.extend.encrypt.executeCommand CedarBackup2.util-module.html#executeCommand 
CedarBackup2.extend.encrypt.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.encrypt.VALID_ENCRYPT_MODES CedarBackup2.extend.encrypt-module.html#VALID_ENCRYPT_MODES CedarBackup2.extend.encrypt._confirmGpgRecipient CedarBackup2.extend.encrypt-module.html#_confirmGpgRecipient CedarBackup2.extend.encrypt.GPG_COMMAND CedarBackup2.extend.encrypt-module.html#GPG_COMMAND CedarBackup2.extend.encrypt.ENCRYPT_INDICATOR CedarBackup2.extend.encrypt-module.html#ENCRYPT_INDICATOR CedarBackup2.extend.encrypt._encryptFileWithGpg CedarBackup2.extend.encrypt-module.html#_encryptFileWithGpg CedarBackup2.extend.mbox CedarBackup2.extend.mbox-module.html CedarBackup2.extend.mbox._getTarfilePath CedarBackup2.extend.mbox-module.html#_getTarfilePath CedarBackup2.extend.mbox._getCollectMode CedarBackup2.extend.mbox-module.html#_getCollectMode CedarBackup2.extend.mbox._getExclusions CedarBackup2.extend.mbox-module.html#_getExclusions CedarBackup2.extend.mbox.executeAction CedarBackup2.extend.mbox-module.html#executeAction CedarBackup2.extend.mbox._getOutputFile CedarBackup2.extend.mbox-module.html#_getOutputFile CedarBackup2.extend.mbox.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.mbox.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.mbox.GREPMAIL_COMMAND CedarBackup2.extend.mbox-module.html#GREPMAIL_COMMAND CedarBackup2.extend.mbox.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.extend.mbox.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.mbox._getRevisionPath CedarBackup2.extend.mbox-module.html#_getRevisionPath CedarBackup2.extend.mbox.__package__ CedarBackup2.extend.mbox-module.html#__package__ CedarBackup2.extend.mbox.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.mbox.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.mbox.isElement 
CedarBackup2.xmlutil-module.html#isElement CedarBackup2.extend.mbox.logger CedarBackup2.extend.mbox-module.html#logger CedarBackup2.extend.mbox._backupMboxDir CedarBackup2.extend.mbox-module.html#_backupMboxDir CedarBackup2.extend.mbox._backupMboxFile CedarBackup2.extend.mbox-module.html#_backupMboxFile CedarBackup2.extend.mbox.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.mbox.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.mbox._getBackupPath CedarBackup2.extend.mbox-module.html#_getBackupPath CedarBackup2.extend.mbox.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.extend.mbox.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.extend.mbox.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.extend.mbox.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.mbox._getCompressMode CedarBackup2.extend.mbox-module.html#_getCompressMode CedarBackup2.extend.mbox._writeNewRevision CedarBackup2.extend.mbox-module.html#_writeNewRevision CedarBackup2.extend.mbox._loadLastRevision CedarBackup2.extend.mbox-module.html#_loadLastRevision CedarBackup2.extend.mbox.REVISION_PATH_EXTENSION CedarBackup2.extend.mbox-module.html#REVISION_PATH_EXTENSION CedarBackup2.extend.mbox.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.mysql CedarBackup2.extend.mysql-module.html CedarBackup2.extend.mysql.executeAction CedarBackup2.extend.mysql-module.html#executeAction CedarBackup2.extend.mysql.MYSQLDUMP_COMMAND CedarBackup2.extend.mysql-module.html#MYSQLDUMP_COMMAND CedarBackup2.extend.mysql.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.mysql.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.mysql.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.mysql.readBoolean CedarBackup2.xmlutil-module.html#readBoolean 
CedarBackup2.extend.mysql.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.mysql.__package__ CedarBackup2.extend.mysql-module.html#__package__ CedarBackup2.extend.mysql.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.mysql.logger CedarBackup2.extend.mysql-module.html#logger CedarBackup2.extend.mysql.backupDatabase CedarBackup2.extend.mysql-module.html#backupDatabase CedarBackup2.extend.mysql._getOutputFile CedarBackup2.extend.mysql-module.html#_getOutputFile CedarBackup2.extend.mysql.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.mysql.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.mysql.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.mysql.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.mysql._backupDatabase CedarBackup2.extend.mysql-module.html#_backupDatabase CedarBackup2.extend.mysql.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.extend.postgresql CedarBackup2.extend.postgresql-module.html CedarBackup2.extend.postgresql.executeAction CedarBackup2.extend.postgresql-module.html#executeAction CedarBackup2.extend.postgresql.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.postgresql.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.postgresql.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.postgresql.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.extend.postgresql.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.postgresql.__package__ CedarBackup2.extend.postgresql-module.html#__package__ CedarBackup2.extend.postgresql.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.postgresql.logger CedarBackup2.extend.postgresql-module.html#logger 
CedarBackup2.extend.postgresql.backupDatabase CedarBackup2.extend.postgresql-module.html#backupDatabase CedarBackup2.extend.postgresql._getOutputFile CedarBackup2.extend.postgresql-module.html#_getOutputFile CedarBackup2.extend.postgresql.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.postgresql.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.postgresql.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.postgresql.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.postgresql.POSTGRESQLDUMP_COMMAND CedarBackup2.extend.postgresql-module.html#POSTGRESQLDUMP_COMMAND CedarBackup2.extend.postgresql._backupDatabase CedarBackup2.extend.postgresql-module.html#_backupDatabase CedarBackup2.extend.postgresql.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.extend.postgresql.POSTGRESQLDUMPALL_COMMAND CedarBackup2.extend.postgresql-module.html#POSTGRESQLDUMPALL_COMMAND CedarBackup2.extend.split CedarBackup2.extend.split-module.html CedarBackup2.extend.split.executeAction CedarBackup2.extend.split-module.html#executeAction CedarBackup2.extend.split.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.split.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.extend.split._splitFile CedarBackup2.extend.split-module.html#_splitFile CedarBackup2.extend.split.SPLIT_COMMAND CedarBackup2.extend.split-module.html#SPLIT_COMMAND CedarBackup2.extend.split.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.split.__package__ CedarBackup2.extend.split-module.html#__package__ CedarBackup2.extend.split.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.split.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.split.logger CedarBackup2.extend.split-module.html#logger 
CedarBackup2.extend.split._splitDailyDir CedarBackup2.extend.split-module.html#_splitDailyDir CedarBackup2.extend.split.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.extend.split.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.split.getBackupFiles CedarBackup2.actions.util-module.html#getBackupFiles CedarBackup2.extend.split.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.split.SPLIT_INDICATOR CedarBackup2.extend.split-module.html#SPLIT_INDICATOR CedarBackup2.extend.split.addByteQuantityNode CedarBackup2.config-module.html#addByteQuantityNode CedarBackup2.extend.split.readByteQuantity CedarBackup2.config-module.html#readByteQuantity CedarBackup2.extend.subversion CedarBackup2.extend.subversion-module.html CedarBackup2.extend.subversion._getCollectMode CedarBackup2.extend.subversion-module.html#_getCollectMode CedarBackup2.extend.subversion.SVNADMIN_COMMAND CedarBackup2.extend.subversion-module.html#SVNADMIN_COMMAND CedarBackup2.extend.subversion._getExclusions CedarBackup2.extend.subversion-module.html#_getExclusions CedarBackup2.extend.subversion.executeAction CedarBackup2.extend.subversion-module.html#executeAction CedarBackup2.extend.subversion.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.extend.subversion.SVNLOOK_COMMAND CedarBackup2.extend.subversion-module.html#SVNLOOK_COMMAND CedarBackup2.extend.subversion.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.extend.subversion.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.extend.subversion.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.extend.subversion._getRepositoryPaths CedarBackup2.extend.subversion-module.html#_getRepositoryPaths CedarBackup2.extend.subversion._getRevisionPath CedarBackup2.extend.subversion-module.html#_getRevisionPath CedarBackup2.extend.subversion.addContainerNode 
CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.extend.subversion.__package__ CedarBackup2.extend.subversion-module.html#__package__ CedarBackup2.extend.subversion.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.extend.subversion.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.extend.subversion.logger CedarBackup2.extend.subversion-module.html#logger CedarBackup2.extend.subversion.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.subversion._getOutputFile CedarBackup2.extend.subversion-module.html#_getOutputFile CedarBackup2.extend.subversion.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.subversion.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.subversion.backupRepository CedarBackup2.extend.subversion-module.html#backupRepository CedarBackup2.extend.subversion._getBackupPath CedarBackup2.extend.subversion-module.html#_getBackupPath CedarBackup2.extend.subversion.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.extend.subversion.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.extend.subversion.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.extend.subversion.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.extend.subversion.getYoungestRevision CedarBackup2.extend.subversion-module.html#getYoungestRevision CedarBackup2.extend.subversion._writeLastRevision CedarBackup2.extend.subversion-module.html#_writeLastRevision CedarBackup2.extend.subversion._getCompressMode CedarBackup2.extend.subversion-module.html#_getCompressMode CedarBackup2.extend.subversion.backupBDBRepository CedarBackup2.extend.subversion-module.html#backupBDBRepository CedarBackup2.extend.subversion._backupRepository CedarBackup2.extend.subversion-module.html#_backupRepository CedarBackup2.extend.subversion._loadLastRevision 
CedarBackup2.extend.subversion-module.html#_loadLastRevision CedarBackup2.extend.subversion.REVISION_PATH_EXTENSION CedarBackup2.extend.subversion-module.html#REVISION_PATH_EXTENSION CedarBackup2.extend.subversion.backupFSFSRepository CedarBackup2.extend.subversion-module.html#backupFSFSRepository CedarBackup2.extend.sysinfo CedarBackup2.extend.sysinfo-module.html CedarBackup2.extend.sysinfo._getOutputFile CedarBackup2.extend.sysinfo-module.html#_getOutputFile CedarBackup2.extend.sysinfo.logger CedarBackup2.extend.sysinfo-module.html#logger CedarBackup2.extend.sysinfo.DPKG_PATH CedarBackup2.extend.sysinfo-module.html#DPKG_PATH CedarBackup2.extend.sysinfo.FDISK_COMMAND CedarBackup2.extend.sysinfo-module.html#FDISK_COMMAND CedarBackup2.extend.sysinfo.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.extend.sysinfo._dumpPartitionTable CedarBackup2.extend.sysinfo-module.html#_dumpPartitionTable CedarBackup2.extend.sysinfo.executeAction CedarBackup2.extend.sysinfo-module.html#executeAction CedarBackup2.extend.sysinfo.DPKG_COMMAND CedarBackup2.extend.sysinfo-module.html#DPKG_COMMAND CedarBackup2.extend.sysinfo.LS_COMMAND CedarBackup2.extend.sysinfo-module.html#LS_COMMAND CedarBackup2.extend.sysinfo._dumpFilesystemContents CedarBackup2.extend.sysinfo-module.html#_dumpFilesystemContents CedarBackup2.extend.sysinfo.__package__ CedarBackup2.extend.sysinfo-module.html#__package__ CedarBackup2.extend.sysinfo.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.extend.sysinfo._dumpDebianPackages CedarBackup2.extend.sysinfo-module.html#_dumpDebianPackages CedarBackup2.extend.sysinfo.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.extend.sysinfo.FDISK_PATH CedarBackup2.extend.sysinfo-module.html#FDISK_PATH CedarBackup2.filesystem CedarBackup2.filesystem-module.html CedarBackup2.filesystem.normalizeDir CedarBackup2.filesystem-module.html#normalizeDir CedarBackup2.filesystem.firstFit 
CedarBackup2.knapsack-module.html#firstFit CedarBackup2.filesystem.calculateFileAge CedarBackup2.util-module.html#calculateFileAge CedarBackup2.filesystem.removeKeys CedarBackup2.util-module.html#removeKeys CedarBackup2.filesystem.alternateFit CedarBackup2.knapsack-module.html#alternateFit CedarBackup2.filesystem.__package__ CedarBackup2.filesystem-module.html#__package__ CedarBackup2.filesystem.worstFit CedarBackup2.knapsack-module.html#worstFit CedarBackup2.filesystem.logger CedarBackup2.filesystem-module.html#logger CedarBackup2.filesystem.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.filesystem.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.filesystem.bestFit CedarBackup2.knapsack-module.html#bestFit CedarBackup2.filesystem.compareDigestMaps CedarBackup2.filesystem-module.html#compareDigestMaps CedarBackup2.filesystem.compareContents CedarBackup2.filesystem-module.html#compareContents CedarBackup2.filesystem.dereferenceLink CedarBackup2.util-module.html#dereferenceLink CedarBackup2.image CedarBackup2.image-module.html CedarBackup2.image.__package__ CedarBackup2.image-module.html#__package__ CedarBackup2.knapsack CedarBackup2.knapsack-module.html CedarBackup2.knapsack.bestFit CedarBackup2.knapsack-module.html#bestFit CedarBackup2.knapsack.firstFit CedarBackup2.knapsack-module.html#firstFit CedarBackup2.knapsack.alternateFit CedarBackup2.knapsack-module.html#alternateFit CedarBackup2.knapsack.worstFit CedarBackup2.knapsack-module.html#worstFit CedarBackup2.knapsack.__package__ CedarBackup2.knapsack-module.html#__package__ CedarBackup2.peer CedarBackup2.peer-module.html CedarBackup2.peer.SU_COMMAND CedarBackup2.peer-module.html#SU_COMMAND CedarBackup2.peer.DEF_CBACK_COMMAND CedarBackup2.peer-module.html#DEF_CBACK_COMMAND CedarBackup2.peer.DEF_RSH_COMMAND CedarBackup2.peer-module.html#DEF_RSH_COMMAND CedarBackup2.peer.DEF_STAGE_INDICATOR CedarBackup2.peer-module.html#DEF_STAGE_INDICATOR CedarBackup2.peer.splitCommandLine 
CedarBackup2.util-module.html#splitCommandLine CedarBackup2.peer.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.peer.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.peer.__package__ CedarBackup2.peer-module.html#__package__ CedarBackup2.peer.DEF_COLLECT_INDICATOR CedarBackup2.peer-module.html#DEF_COLLECT_INDICATOR CedarBackup2.peer.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.peer.DEF_RCP_COMMAND CedarBackup2.peer-module.html#DEF_RCP_COMMAND CedarBackup2.peer.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.peer.logger CedarBackup2.peer-module.html#logger CedarBackup2.release CedarBackup2.release-module.html CedarBackup2.release.COPYRIGHT CedarBackup2.release-module.html#COPYRIGHT CedarBackup2.release.AUTHOR CedarBackup2.release-module.html#AUTHOR CedarBackup2.release.URL CedarBackup2.release-module.html#URL CedarBackup2.release.__package__ CedarBackup2.release-module.html#__package__ CedarBackup2.release.VERSION CedarBackup2.release-module.html#VERSION CedarBackup2.release.DATE CedarBackup2.release-module.html#DATE CedarBackup2.release.EMAIL CedarBackup2.release-module.html#EMAIL CedarBackup2.testutil CedarBackup2.testutil-module.html CedarBackup2.testutil.changeFileAge CedarBackup2.testutil-module.html#changeFileAge CedarBackup2.testutil.platformCygwin CedarBackup2.testutil-module.html#platformCygwin CedarBackup2.testutil.platformHasEcho CedarBackup2.testutil-module.html#platformHasEcho CedarBackup2.testutil.randomFilename CedarBackup2.testutil-module.html#randomFilename CedarBackup2.testutil.getLogin CedarBackup2.testutil-module.html#getLogin CedarBackup2.testutil.buildPath CedarBackup2.testutil-module.html#buildPath CedarBackup2.testutil._isPlatform CedarBackup2.testutil-module.html#_isPlatform CedarBackup2.testutil.platformDebian CedarBackup2.testutil-module.html#platformDebian CedarBackup2.testutil.setupPathResolver CedarBackup2.cli-module.html#setupPathResolver 
CedarBackup2.testutil.platformSupportsPermissions CedarBackup2.testutil-module.html#platformSupportsPermissions CedarBackup2.testutil.findResources CedarBackup2.testutil-module.html#findResources CedarBackup2.testutil.customizeOverrides CedarBackup2.customize-module.html#customizeOverrides CedarBackup2.testutil.captureOutput CedarBackup2.testutil-module.html#captureOutput CedarBackup2.testutil.setupDebugLogger CedarBackup2.testutil-module.html#setupDebugLogger CedarBackup2.testutil.__package__ CedarBackup2.testutil-module.html#__package__ CedarBackup2.testutil.extractTar CedarBackup2.testutil-module.html#extractTar CedarBackup2.testutil.platformRequiresBinaryRead CedarBackup2.testutil-module.html#platformRequiresBinaryRead CedarBackup2.testutil.platformSupportsLinks CedarBackup2.testutil-module.html#platformSupportsLinks CedarBackup2.testutil.commandAvailable CedarBackup2.testutil-module.html#commandAvailable CedarBackup2.testutil.platformMacOsX CedarBackup2.testutil-module.html#platformMacOsX CedarBackup2.testutil.getMaskAsMode CedarBackup2.testutil-module.html#getMaskAsMode CedarBackup2.testutil.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.testutil.removedir CedarBackup2.testutil-module.html#removedir CedarBackup2.testutil.availableLocales CedarBackup2.testutil-module.html#availableLocales CedarBackup2.testutil.setupOverrides CedarBackup2.testutil-module.html#setupOverrides CedarBackup2.testutil.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.testutil.runningAsRoot CedarBackup2.testutil-module.html#runningAsRoot CedarBackup2.testutil.platformWindows CedarBackup2.testutil-module.html#platformWindows CedarBackup2.testutil.failUnlessAssignRaises CedarBackup2.testutil-module.html#failUnlessAssignRaises CedarBackup2.testutil.hexFloatLiteralAllowed CedarBackup2.testutil-module.html#hexFloatLiteralAllowed CedarBackup2.tools CedarBackup2.tools-module.html CedarBackup2.tools.__package__ 
CedarBackup2.tools-module.html#__package__ CedarBackup2.tools.amazons3 CedarBackup2.tools.amazons3-module.html CedarBackup2.tools.amazons3._buildSourceFiles CedarBackup2.tools.amazons3-module.html#_buildSourceFiles CedarBackup2.tools.amazons3.LONG_SWITCHES CedarBackup2.tools.amazons3-module.html#LONG_SWITCHES CedarBackup2.tools.amazons3._usage CedarBackup2.tools.amazons3-module.html#_usage CedarBackup2.tools.amazons3.__package__ CedarBackup2.tools.amazons3-module.html#__package__ CedarBackup2.tools.amazons3.cli CedarBackup2.tools.amazons3-module.html#cli CedarBackup2.tools.amazons3._synchronizeBucket CedarBackup2.tools.amazons3-module.html#_synchronizeBucket CedarBackup2.tools.amazons3._executeAction CedarBackup2.tools.amazons3-module.html#_executeAction CedarBackup2.tools.amazons3.logger CedarBackup2.tools.amazons3-module.html#logger CedarBackup2.tools.amazons3.splitCommandLine CedarBackup2.util-module.html#splitCommandLine CedarBackup2.tools.amazons3.AWS_COMMAND CedarBackup2.tools.amazons3-module.html#AWS_COMMAND CedarBackup2.tools.amazons3.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.tools.amazons3._checkSourceFiles CedarBackup2.tools.amazons3-module.html#_checkSourceFiles CedarBackup2.tools.amazons3.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.tools.amazons3.setupLogging CedarBackup2.cli-module.html#setupLogging CedarBackup2.tools.amazons3._diagnostics CedarBackup2.tools.amazons3-module.html#_diagnostics CedarBackup2.tools.amazons3.SHORT_SWITCHES CedarBackup2.tools.amazons3-module.html#SHORT_SWITCHES CedarBackup2.tools.amazons3._version CedarBackup2.tools.amazons3-module.html#_version CedarBackup2.tools.amazons3._verifyBucketContents CedarBackup2.tools.amazons3-module.html#_verifyBucketContents CedarBackup2.tools.span CedarBackup2.tools.span-module.html CedarBackup2.tools.span._writeDisc CedarBackup2.tools.span-module.html#_writeDisc CedarBackup2.tools.span.normalizeDir 
CedarBackup2.filesystem-module.html#normalizeDir CedarBackup2.tools.span._discWriteImage CedarBackup2.tools.span-module.html#_discWriteImage CedarBackup2.tools.span.compareDigestMaps CedarBackup2.filesystem-module.html#compareDigestMaps CedarBackup2.tools.span._getFloat CedarBackup2.tools.span-module.html#_getFloat CedarBackup2.tools.span._getReturn CedarBackup2.tools.span-module.html#_getReturn CedarBackup2.tools.span._usage CedarBackup2.tools.span-module.html#_usage CedarBackup2.tools.span._getChoiceAnswer CedarBackup2.tools.span-module.html#_getChoiceAnswer CedarBackup2.tools.span.unmount CedarBackup2.util-module.html#unmount CedarBackup2.tools.span._discConsistencyCheck CedarBackup2.tools.span-module.html#_discConsistencyCheck CedarBackup2.tools.span.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.tools.span._findDailyDirs CedarBackup2.tools.span-module.html#_findDailyDirs CedarBackup2.tools.span.__package__ CedarBackup2.tools.span-module.html#__package__ CedarBackup2.tools.span._executeAction CedarBackup2.tools.span-module.html#_executeAction CedarBackup2.tools.span._discInitializeImage CedarBackup2.tools.span-module.html#_discInitializeImage CedarBackup2.tools.span.setupLogging CedarBackup2.cli-module.html#setupLogging CedarBackup2.tools.span._getWriter CedarBackup2.tools.span-module.html#_getWriter CedarBackup2.tools.span.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.tools.span.findDailyDirs CedarBackup2.actions.util-module.html#findDailyDirs CedarBackup2.tools.span.logger CedarBackup2.tools.span-module.html#logger CedarBackup2.tools.span._consistencyCheck CedarBackup2.tools.span-module.html#_consistencyCheck CedarBackup2.tools.span._getYesNoAnswer CedarBackup2.tools.span-module.html#_getYesNoAnswer CedarBackup2.tools.span.cli CedarBackup2.tools.span-module.html#cli CedarBackup2.tools.span.createWriter CedarBackup2.actions.util-module.html#createWriter CedarBackup2.tools.span._diagnostics 
CedarBackup2.tools.span-module.html#_diagnostics CedarBackup2.tools.span._version CedarBackup2.tools.span-module.html#_version CedarBackup2.tools.span.setupPathResolver CedarBackup2.cli-module.html#setupPathResolver CedarBackup2.tools.span.writeIndicatorFile CedarBackup2.actions.util-module.html#writeIndicatorFile CedarBackup2.tools.span.mount CedarBackup2.util-module.html#mount CedarBackup2.tools.span._writeStoreIndicator CedarBackup2.tools.span-module.html#_writeStoreIndicator CedarBackup2.util CedarBackup2.util-module.html CedarBackup2.util.SECONDS_PER_DAY CedarBackup2.util-module.html#SECONDS_PER_DAY CedarBackup2.util.unmount CedarBackup2.util-module.html#unmount CedarBackup2.util.UNIT_BYTES CedarBackup2.util-module.html#UNIT_BYTES CedarBackup2.util.parseCommaSeparatedString CedarBackup2.util-module.html#parseCommaSeparatedString CedarBackup2.util.UNIT_SECTORS CedarBackup2.util-module.html#UNIT_SECTORS CedarBackup2.util.getUidGid CedarBackup2.util-module.html#getUidGid CedarBackup2.util._UID_GID_AVAILABLE CedarBackup2.util-module.html#_UID_GID_AVAILABLE CedarBackup2.util.getFunctionReference CedarBackup2.util-module.html#getFunctionReference CedarBackup2.util.deriveDayOfWeek CedarBackup2.util-module.html#deriveDayOfWeek CedarBackup2.util.HOURS_PER_DAY CedarBackup2.util-module.html#HOURS_PER_DAY CedarBackup2.util.BYTES_PER_MBYTE CedarBackup2.util-module.html#BYTES_PER_MBYTE CedarBackup2.util.removeKeys CedarBackup2.util-module.html#removeKeys CedarBackup2.util.deviceMounted CedarBackup2.util-module.html#deviceMounted CedarBackup2.util.isStartOfWeek CedarBackup2.util-module.html#isStartOfWeek CedarBackup2.util.buildNormalizedPath CedarBackup2.util-module.html#buildNormalizedPath CedarBackup2.util.sanitizeEnvironment CedarBackup2.util-module.html#sanitizeEnvironment CedarBackup2.util.UNIT_MBYTES CedarBackup2.util-module.html#UNIT_MBYTES CedarBackup2.util.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.util.UNIT_KBYTES 
CedarBackup2.util-module.html#UNIT_KBYTES CedarBackup2.util.DEFAULT_LANGUAGE CedarBackup2.util-module.html#DEFAULT_LANGUAGE CedarBackup2.util.UNIT_GBYTES CedarBackup2.util-module.html#UNIT_GBYTES CedarBackup2.util.__package__ CedarBackup2.util-module.html#__package__ CedarBackup2.util.nullDevice CedarBackup2.util-module.html#nullDevice CedarBackup2.util.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.util.UMOUNT_COMMAND CedarBackup2.util-module.html#UMOUNT_COMMAND CedarBackup2.util.MBYTES_PER_GBYTE CedarBackup2.util-module.html#MBYTES_PER_GBYTE CedarBackup2.util.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.util.MOUNT_COMMAND CedarBackup2.util-module.html#MOUNT_COMMAND CedarBackup2.util.logger CedarBackup2.util-module.html#logger CedarBackup2.util.changeOwnership CedarBackup2.util-module.html#changeOwnership CedarBackup2.util.SECONDS_PER_MINUTE CedarBackup2.util-module.html#SECONDS_PER_MINUTE CedarBackup2.util.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.util.LOCALE_VARS CedarBackup2.util-module.html#LOCALE_VARS CedarBackup2.util.MTAB_FILE CedarBackup2.util-module.html#MTAB_FILE CedarBackup2.util.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.util.BYTES_PER_SECTOR CedarBackup2.util-module.html#BYTES_PER_SECTOR CedarBackup2.util.KBYTES_PER_MBYTE CedarBackup2.util-module.html#KBYTES_PER_MBYTE CedarBackup2.util.LANG_VAR CedarBackup2.util-module.html#LANG_VAR CedarBackup2.util.MINUTES_PER_HOUR CedarBackup2.util-module.html#MINUTES_PER_HOUR CedarBackup2.util.BYTES_PER_KBYTE CedarBackup2.util-module.html#BYTES_PER_KBYTE CedarBackup2.util.sortDict CedarBackup2.util-module.html#sortDict CedarBackup2.util.isRunningAsRoot CedarBackup2.util-module.html#isRunningAsRoot CedarBackup2.util.splitCommandLine CedarBackup2.util-module.html#splitCommandLine CedarBackup2.util.outputLogger CedarBackup2.util-module.html#outputLogger CedarBackup2.util.BYTES_PER_GBYTE 
CedarBackup2.util-module.html#BYTES_PER_GBYTE CedarBackup2.util.calculateFileAge CedarBackup2.util-module.html#calculateFileAge CedarBackup2.util.checkUnique CedarBackup2.util-module.html#checkUnique CedarBackup2.util.ISO_SECTOR_SIZE CedarBackup2.util-module.html#ISO_SECTOR_SIZE CedarBackup2.util.mount CedarBackup2.util-module.html#mount CedarBackup2.util.dereferenceLink CedarBackup2.util-module.html#dereferenceLink CedarBackup2.writer CedarBackup2.writer-module.html CedarBackup2.writer.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.writer.__package__ CedarBackup2.writer-module.html#__package__ CedarBackup2.writer.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.writers CedarBackup2.writers-module.html CedarBackup2.writers.__package__ CedarBackup2.writers-module.html#__package__ CedarBackup2.writers.cdwriter CedarBackup2.writers.cdwriter-module.html CedarBackup2.writers.cdwriter.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.writers.cdwriter.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.writers.cdwriter.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.writers.cdwriter.MEDIA_CDRW_80 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDRW_80 CedarBackup2.writers.cdwriter.__package__ CedarBackup2.writers.cdwriter-module.html#__package__ CedarBackup2.writers.cdwriter.CDRECORD_COMMAND CedarBackup2.writers.cdwriter-module.html#CDRECORD_COMMAND CedarBackup2.writers.cdwriter.logger CedarBackup2.writers.cdwriter-module.html#logger CedarBackup2.writers.cdwriter.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.writers.cdwriter.EJECT_COMMAND CedarBackup2.writers.cdwriter-module.html#EJECT_COMMAND CedarBackup2.writers.cdwriter.validateDevice CedarBackup2.writers.util-module.html#validateDevice CedarBackup2.writers.cdwriter.resolveCommand CedarBackup2.util-module.html#resolveCommand 
CedarBackup2.writers.cdwriter.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.writers.cdwriter.MEDIA_CDRW_74 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDRW_74 CedarBackup2.writers.cdwriter.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.writers.cdwriter.MKISOFS_COMMAND CedarBackup2.writers.cdwriter-module.html#MKISOFS_COMMAND CedarBackup2.writers.cdwriter.MEDIA_CDR_80 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDR_80 CedarBackup2.writers.cdwriter.MEDIA_CDR_74 CedarBackup2.writers.cdwriter-module.html#MEDIA_CDR_74 CedarBackup2.writers.dvdwriter CedarBackup2.writers.dvdwriter-module.html CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSR CedarBackup2.writers.dvdwriter-module.html#MEDIA_DVDPLUSR CedarBackup2.writers.dvdwriter.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.writers.dvdwriter.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.writers.dvdwriter.__package__ CedarBackup2.writers.dvdwriter-module.html#__package__ CedarBackup2.writers.dvdwriter.logger CedarBackup2.writers.dvdwriter-module.html#logger CedarBackup2.writers.dvdwriter.displayBytes CedarBackup2.util-module.html#displayBytes CedarBackup2.writers.dvdwriter.EJECT_COMMAND CedarBackup2.writers.dvdwriter-module.html#EJECT_COMMAND CedarBackup2.writers.dvdwriter.MEDIA_DVDPLUSRW CedarBackup2.writers.dvdwriter-module.html#MEDIA_DVDPLUSRW CedarBackup2.writers.dvdwriter.validateDevice CedarBackup2.writers.util-module.html#validateDevice CedarBackup2.writers.dvdwriter.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.writers.dvdwriter.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.writers.dvdwriter.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.writers.dvdwriter.GROWISOFS_COMMAND CedarBackup2.writers.dvdwriter-module.html#GROWISOFS_COMMAND CedarBackup2.writers.util CedarBackup2.writers.util-module.html 
CedarBackup2.writers.util.validateDevice CedarBackup2.writers.util-module.html#validateDevice CedarBackup2.writers.util.convertSize CedarBackup2.util-module.html#convertSize CedarBackup2.writers.util.executeCommand CedarBackup2.util-module.html#executeCommand CedarBackup2.writers.util.VOLNAME_COMMAND CedarBackup2.writers.util-module.html#VOLNAME_COMMAND CedarBackup2.writers.util.validateScsiId CedarBackup2.writers.util-module.html#validateScsiId CedarBackup2.writers.util.__package__ CedarBackup2.writers.util-module.html#__package__ CedarBackup2.writers.util.readMediaLabel CedarBackup2.writers.util-module.html#readMediaLabel CedarBackup2.writers.util.resolveCommand CedarBackup2.util-module.html#resolveCommand CedarBackup2.writers.util.encodePath CedarBackup2.util-module.html#encodePath CedarBackup2.writers.util.logger CedarBackup2.writers.util-module.html#logger CedarBackup2.writers.util.MKISOFS_COMMAND CedarBackup2.writers.util-module.html#MKISOFS_COMMAND CedarBackup2.writers.util.validateDriveSpeed CedarBackup2.writers.util-module.html#validateDriveSpeed CedarBackup2.xmlutil CedarBackup2.xmlutil-module.html CedarBackup2.xmlutil.readFloat CedarBackup2.xmlutil-module.html#readFloat CedarBackup2.xmlutil.addLongNode CedarBackup2.xmlutil-module.html#addLongNode CedarBackup2.xmlutil.readFirstChild CedarBackup2.xmlutil-module.html#readFirstChild CedarBackup2.xmlutil._translateCDATAAttr CedarBackup2.xmlutil-module.html#_translateCDATAAttr CedarBackup2.xmlutil.TRUE_BOOLEAN_VALUES CedarBackup2.xmlutil-module.html#TRUE_BOOLEAN_VALUES CedarBackup2.xmlutil.readStringList CedarBackup2.xmlutil-module.html#readStringList CedarBackup2.xmlutil.addStringNode CedarBackup2.xmlutil-module.html#addStringNode CedarBackup2.xmlutil.serializeDom CedarBackup2.xmlutil-module.html#serializeDom CedarBackup2.xmlutil.readInteger CedarBackup2.xmlutil-module.html#readInteger CedarBackup2.xmlutil.VALID_BOOLEAN_VALUES CedarBackup2.xmlutil-module.html#VALID_BOOLEAN_VALUES 
CedarBackup2.xmlutil.readBoolean CedarBackup2.xmlutil-module.html#readBoolean CedarBackup2.xmlutil.addContainerNode CedarBackup2.xmlutil-module.html#addContainerNode CedarBackup2.xmlutil.__package__ CedarBackup2.xmlutil-module.html#__package__ CedarBackup2.xmlutil.createInputDom CedarBackup2.xmlutil-module.html#createInputDom CedarBackup2.xmlutil.isElement CedarBackup2.xmlutil-module.html#isElement CedarBackup2.xmlutil.logger CedarBackup2.xmlutil-module.html#logger CedarBackup2.xmlutil._encodeText CedarBackup2.xmlutil-module.html#_encodeText CedarBackup2.xmlutil.readChildren CedarBackup2.xmlutil-module.html#readChildren CedarBackup2.xmlutil.FALSE_BOOLEAN_VALUES CedarBackup2.xmlutil-module.html#FALSE_BOOLEAN_VALUES CedarBackup2.xmlutil.readString CedarBackup2.xmlutil-module.html#readString CedarBackup2.xmlutil.createOutputDom CedarBackup2.xmlutil-module.html#createOutputDom CedarBackup2.xmlutil.addBooleanNode CedarBackup2.xmlutil-module.html#addBooleanNode CedarBackup2.xmlutil.readLong CedarBackup2.xmlutil-module.html#readLong CedarBackup2.xmlutil.addIntegerNode CedarBackup2.xmlutil-module.html#addIntegerNode CedarBackup2.xmlutil._translateCDATA CedarBackup2.xmlutil-module.html#_translateCDATA CedarBackup2.cli.Options CedarBackup2.cli.Options-class.html CedarBackup2.cli.Options._getMode CedarBackup2.cli.Options-class.html#_getMode CedarBackup2.cli.Options.stacktrace CedarBackup2.cli.Options-class.html#stacktrace CedarBackup2.cli.Options.managed CedarBackup2.cli.Options-class.html#managed CedarBackup2.cli.Options.help CedarBackup2.cli.Options-class.html#help CedarBackup2.cli.Options._getFull CedarBackup2.cli.Options-class.html#_getFull CedarBackup2.cli.Options.__str__ CedarBackup2.cli.Options-class.html#__str__ CedarBackup2.cli.Options._setStacktrace CedarBackup2.cli.Options-class.html#_setStacktrace CedarBackup2.cli.Options.actions CedarBackup2.cli.Options-class.html#actions CedarBackup2.cli.Options.owner CedarBackup2.cli.Options-class.html#owner 
CedarBackup2.cli.Options._setQuiet CedarBackup2.cli.Options-class.html#_setQuiet CedarBackup2.cli.Options._setVersion CedarBackup2.cli.Options-class.html#_setVersion CedarBackup2.cli.Options._getVerbose CedarBackup2.cli.Options-class.html#_getVerbose CedarBackup2.cli.Options.verbose CedarBackup2.cli.Options-class.html#verbose CedarBackup2.cli.Options._setHelp CedarBackup2.cli.Options-class.html#_setHelp CedarBackup2.cli.Options._getDebug CedarBackup2.cli.Options-class.html#_getDebug CedarBackup2.cli.Options.debug CedarBackup2.cli.Options-class.html#debug CedarBackup2.cli.Options._parseArgumentList CedarBackup2.cli.Options-class.html#_parseArgumentList CedarBackup2.cli.Options.buildArgumentList CedarBackup2.cli.Options-class.html#buildArgumentList CedarBackup2.cli.Options._getManagedOnly CedarBackup2.cli.Options-class.html#_getManagedOnly CedarBackup2.cli.Options.__cmp__ CedarBackup2.cli.Options-class.html#__cmp__ CedarBackup2.cli.Options._getStacktrace CedarBackup2.cli.Options-class.html#_getStacktrace CedarBackup2.cli.Options._setOwner CedarBackup2.cli.Options-class.html#_setOwner CedarBackup2.cli.Options._setMode CedarBackup2.cli.Options-class.html#_setMode CedarBackup2.cli.Options.__init__ CedarBackup2.cli.Options-class.html#__init__ CedarBackup2.cli.Options._getQuiet CedarBackup2.cli.Options-class.html#_getQuiet CedarBackup2.cli.Options.managedOnly CedarBackup2.cli.Options-class.html#managedOnly CedarBackup2.cli.Options._setDebug CedarBackup2.cli.Options-class.html#_setDebug CedarBackup2.cli.Options.config CedarBackup2.cli.Options-class.html#config CedarBackup2.cli.Options.mode CedarBackup2.cli.Options-class.html#mode CedarBackup2.cli.Options._getVersion CedarBackup2.cli.Options-class.html#_getVersion CedarBackup2.cli.Options._getLogfile CedarBackup2.cli.Options-class.html#_getLogfile CedarBackup2.cli.Options.full CedarBackup2.cli.Options-class.html#full CedarBackup2.cli.Options._getConfig CedarBackup2.cli.Options-class.html#_getConfig 
CedarBackup2.cli.Options._setOutput CedarBackup2.cli.Options-class.html#_setOutput CedarBackup2.cli.Options._setFull CedarBackup2.cli.Options-class.html#_setFull CedarBackup2.cli.Options.version CedarBackup2.cli.Options-class.html#version CedarBackup2.cli.Options._setManagedOnly CedarBackup2.cli.Options-class.html#_setManagedOnly CedarBackup2.cli.Options._setDiagnostics CedarBackup2.cli.Options-class.html#_setDiagnostics CedarBackup2.cli.Options.output CedarBackup2.cli.Options-class.html#output CedarBackup2.cli.Options.validate CedarBackup2.cli.Options-class.html#validate CedarBackup2.cli.Options.logfile CedarBackup2.cli.Options-class.html#logfile CedarBackup2.cli.Options.buildArgumentString CedarBackup2.cli.Options-class.html#buildArgumentString CedarBackup2.cli.Options._getManaged CedarBackup2.cli.Options-class.html#_getManaged CedarBackup2.cli.Options._setManaged CedarBackup2.cli.Options-class.html#_setManaged CedarBackup2.cli.Options._setActions CedarBackup2.cli.Options-class.html#_setActions CedarBackup2.cli.Options._getOutput CedarBackup2.cli.Options-class.html#_getOutput CedarBackup2.cli.Options._getOwner CedarBackup2.cli.Options-class.html#_getOwner CedarBackup2.cli.Options._setLogfile CedarBackup2.cli.Options-class.html#_setLogfile CedarBackup2.cli.Options.quiet CedarBackup2.cli.Options-class.html#quiet CedarBackup2.cli.Options.__repr__ CedarBackup2.cli.Options-class.html#__repr__ CedarBackup2.cli.Options.diagnostics CedarBackup2.cli.Options-class.html#diagnostics CedarBackup2.cli.Options._getDiagnostics CedarBackup2.cli.Options-class.html#_getDiagnostics CedarBackup2.cli.Options._setConfig CedarBackup2.cli.Options-class.html#_setConfig CedarBackup2.cli.Options._setVerbose CedarBackup2.cli.Options-class.html#_setVerbose CedarBackup2.cli.Options._getHelp CedarBackup2.cli.Options-class.html#_getHelp CedarBackup2.cli.Options._getActions CedarBackup2.cli.Options-class.html#_getActions CedarBackup2.cli._ActionItem CedarBackup2.cli._ActionItem-class.html 
CedarBackup2.cli._ActionItem.executeAction CedarBackup2.cli._ActionItem-class.html#executeAction CedarBackup2.cli._ActionItem.__cmp__ CedarBackup2.cli._ActionItem-class.html#__cmp__ CedarBackup2.cli._ActionItem._executeAction CedarBackup2.cli._ActionItem-class.html#_executeAction CedarBackup2.cli._ActionItem.SORT_ORDER CedarBackup2.cli._ActionItem-class.html#SORT_ORDER CedarBackup2.cli._ActionItem._executeHook CedarBackup2.cli._ActionItem-class.html#_executeHook CedarBackup2.cli._ActionItem.__init__ CedarBackup2.cli._ActionItem-class.html#__init__ CedarBackup2.cli._ActionSet CedarBackup2.cli._ActionSet-class.html CedarBackup2.cli._ActionSet._validateActions CedarBackup2.cli._ActionSet-class.html#_validateActions CedarBackup2.cli._ActionSet._deriveHooks CedarBackup2.cli._ActionSet-class.html#_deriveHooks CedarBackup2.cli._ActionSet.__init__ CedarBackup2.cli._ActionSet-class.html#__init__ CedarBackup2.cli._ActionSet._getCbackCommand CedarBackup2.cli._ActionSet-class.html#_getCbackCommand CedarBackup2.cli._ActionSet.executeActions CedarBackup2.cli._ActionSet-class.html#executeActions CedarBackup2.cli._ActionSet._buildIndexMap CedarBackup2.cli._ActionSet-class.html#_buildIndexMap CedarBackup2.cli._ActionSet._buildHookMaps CedarBackup2.cli._ActionSet-class.html#_buildHookMaps CedarBackup2.cli._ActionSet._buildActionMap CedarBackup2.cli._ActionSet-class.html#_buildActionMap CedarBackup2.cli._ActionSet._buildFunctionMap CedarBackup2.cli._ActionSet-class.html#_buildFunctionMap CedarBackup2.cli._ActionSet._buildPeerMap CedarBackup2.cli._ActionSet-class.html#_buildPeerMap CedarBackup2.cli._ActionSet._getManagedActions CedarBackup2.cli._ActionSet-class.html#_getManagedActions CedarBackup2.cli._ActionSet._getRemoteUser CedarBackup2.cli._ActionSet-class.html#_getRemoteUser CedarBackup2.cli._ActionSet._deriveExtensionNames CedarBackup2.cli._ActionSet-class.html#_deriveExtensionNames CedarBackup2.cli._ActionSet._getRshCommand CedarBackup2.cli._ActionSet-class.html#_getRshCommand 
CedarBackup2.cli._ActionSet._buildActionSet CedarBackup2.cli._ActionSet-class.html#_buildActionSet CedarBackup2.cli._ManagedActionItem CedarBackup2.cli._ManagedActionItem-class.html CedarBackup2.cli._ManagedActionItem.executeAction CedarBackup2.cli._ManagedActionItem-class.html#executeAction CedarBackup2.cli._ManagedActionItem.__cmp__ CedarBackup2.cli._ManagedActionItem-class.html#__cmp__ CedarBackup2.cli._ManagedActionItem.SORT_ORDER CedarBackup2.cli._ManagedActionItem-class.html#SORT_ORDER CedarBackup2.cli._ManagedActionItem.__init__ CedarBackup2.cli._ManagedActionItem-class.html#__init__ CedarBackup2.config.ActionDependencies CedarBackup2.config.ActionDependencies-class.html CedarBackup2.config.ActionDependencies._setAfterList CedarBackup2.config.ActionDependencies-class.html#_setAfterList CedarBackup2.config.ActionDependencies._getAfterList CedarBackup2.config.ActionDependencies-class.html#_getAfterList CedarBackup2.config.ActionDependencies.__str__ CedarBackup2.config.ActionDependencies-class.html#__str__ CedarBackup2.config.ActionDependencies.beforeList CedarBackup2.config.ActionDependencies-class.html#beforeList CedarBackup2.config.ActionDependencies.__cmp__ CedarBackup2.config.ActionDependencies-class.html#__cmp__ CedarBackup2.config.ActionDependencies.__repr__ CedarBackup2.config.ActionDependencies-class.html#__repr__ CedarBackup2.config.ActionDependencies._getBeforeList CedarBackup2.config.ActionDependencies-class.html#_getBeforeList CedarBackup2.config.ActionDependencies._setBeforeList CedarBackup2.config.ActionDependencies-class.html#_setBeforeList CedarBackup2.config.ActionDependencies.afterList CedarBackup2.config.ActionDependencies-class.html#afterList CedarBackup2.config.ActionDependencies.__init__ CedarBackup2.config.ActionDependencies-class.html#__init__ CedarBackup2.config.ActionHook CedarBackup2.config.ActionHook-class.html CedarBackup2.config.ActionHook.__str__ CedarBackup2.config.ActionHook-class.html#__str__ 
CedarBackup2.config.ActionHook._getAction CedarBackup2.config.ActionHook-class.html#_getAction CedarBackup2.config.ActionHook.__init__ CedarBackup2.config.ActionHook-class.html#__init__ CedarBackup2.config.ActionHook._getCommand CedarBackup2.config.ActionHook-class.html#_getCommand CedarBackup2.config.ActionHook._getBefore CedarBackup2.config.ActionHook-class.html#_getBefore CedarBackup2.config.ActionHook._setAction CedarBackup2.config.ActionHook-class.html#_setAction CedarBackup2.config.ActionHook.__cmp__ CedarBackup2.config.ActionHook-class.html#__cmp__ CedarBackup2.config.ActionHook._getAfter CedarBackup2.config.ActionHook-class.html#_getAfter CedarBackup2.config.ActionHook.before CedarBackup2.config.ActionHook-class.html#before CedarBackup2.config.ActionHook.after CedarBackup2.config.ActionHook-class.html#after CedarBackup2.config.ActionHook._setCommand CedarBackup2.config.ActionHook-class.html#_setCommand CedarBackup2.config.ActionHook.command CedarBackup2.config.ActionHook-class.html#command CedarBackup2.config.ActionHook.__repr__ CedarBackup2.config.ActionHook-class.html#__repr__ CedarBackup2.config.ActionHook.action CedarBackup2.config.ActionHook-class.html#action CedarBackup2.config.BlankBehavior CedarBackup2.config.BlankBehavior-class.html CedarBackup2.config.BlankBehavior._setBlankFactor CedarBackup2.config.BlankBehavior-class.html#_setBlankFactor CedarBackup2.config.BlankBehavior.__str__ CedarBackup2.config.BlankBehavior-class.html#__str__ CedarBackup2.config.BlankBehavior._getBlankFactor CedarBackup2.config.BlankBehavior-class.html#_getBlankFactor CedarBackup2.config.BlankBehavior._setBlankMode CedarBackup2.config.BlankBehavior-class.html#_setBlankMode CedarBackup2.config.BlankBehavior.__cmp__ CedarBackup2.config.BlankBehavior-class.html#__cmp__ CedarBackup2.config.BlankBehavior.blankFactor CedarBackup2.config.BlankBehavior-class.html#blankFactor CedarBackup2.config.BlankBehavior.__repr__ CedarBackup2.config.BlankBehavior-class.html#__repr__ 
CedarBackup2.config.BlankBehavior.blankMode CedarBackup2.config.BlankBehavior-class.html#blankMode CedarBackup2.config.BlankBehavior._getBlankMode CedarBackup2.config.BlankBehavior-class.html#_getBlankMode CedarBackup2.config.BlankBehavior.__init__ CedarBackup2.config.BlankBehavior-class.html#__init__ CedarBackup2.config.ByteQuantity CedarBackup2.config.ByteQuantity-class.html CedarBackup2.config.ByteQuantity._setQuantity CedarBackup2.config.ByteQuantity-class.html#_setQuantity CedarBackup2.config.ByteQuantity._getBytes CedarBackup2.config.ByteQuantity-class.html#_getBytes CedarBackup2.config.ByteQuantity.__str__ CedarBackup2.config.ByteQuantity-class.html#__str__ CedarBackup2.config.ByteQuantity.__init__ CedarBackup2.config.ByteQuantity-class.html#__init__ CedarBackup2.config.ByteQuantity.__cmp__ CedarBackup2.config.ByteQuantity-class.html#__cmp__ CedarBackup2.config.ByteQuantity._getQuantity CedarBackup2.config.ByteQuantity-class.html#_getQuantity CedarBackup2.config.ByteQuantity.units CedarBackup2.config.ByteQuantity-class.html#units CedarBackup2.config.ByteQuantity._getUnits CedarBackup2.config.ByteQuantity-class.html#_getUnits CedarBackup2.config.ByteQuantity._setUnits CedarBackup2.config.ByteQuantity-class.html#_setUnits CedarBackup2.config.ByteQuantity.bytes CedarBackup2.config.ByteQuantity-class.html#bytes CedarBackup2.config.ByteQuantity.__repr__ CedarBackup2.config.ByteQuantity-class.html#__repr__ CedarBackup2.config.ByteQuantity.quantity CedarBackup2.config.ByteQuantity-class.html#quantity CedarBackup2.config.CollectConfig CedarBackup2.config.CollectConfig-class.html CedarBackup2.config.CollectConfig._getCollectMode CedarBackup2.config.CollectConfig-class.html#_getCollectMode CedarBackup2.config.CollectConfig._getArchiveMode CedarBackup2.config.CollectConfig-class.html#_getArchiveMode CedarBackup2.config.CollectConfig.__str__ CedarBackup2.config.CollectConfig-class.html#__str__ CedarBackup2.config.CollectConfig._setArchiveMode 
CedarBackup2.config.CollectConfig-class.html#_setArchiveMode CedarBackup2.config.CollectConfig._setExcludePatterns CedarBackup2.config.CollectConfig-class.html#_setExcludePatterns CedarBackup2.config.CollectConfig.collectDirs CedarBackup2.config.CollectConfig-class.html#collectDirs CedarBackup2.config.CollectConfig._getCollectFiles CedarBackup2.config.CollectConfig-class.html#_getCollectFiles CedarBackup2.config.CollectConfig.collectFiles CedarBackup2.config.CollectConfig-class.html#collectFiles CedarBackup2.config.CollectConfig.__init__ CedarBackup2.config.CollectConfig-class.html#__init__ CedarBackup2.config.CollectConfig._setCollectMode CedarBackup2.config.CollectConfig-class.html#_setCollectMode CedarBackup2.config.CollectConfig.archiveMode CedarBackup2.config.CollectConfig-class.html#archiveMode CedarBackup2.config.CollectConfig._getTargetDir CedarBackup2.config.CollectConfig-class.html#_getTargetDir CedarBackup2.config.CollectConfig.__cmp__ CedarBackup2.config.CollectConfig-class.html#__cmp__ CedarBackup2.config.CollectConfig._setIgnoreFile CedarBackup2.config.CollectConfig-class.html#_setIgnoreFile CedarBackup2.config.CollectConfig.absoluteExcludePaths CedarBackup2.config.CollectConfig-class.html#absoluteExcludePaths CedarBackup2.config.CollectConfig._getCollectDirs CedarBackup2.config.CollectConfig-class.html#_getCollectDirs CedarBackup2.config.CollectConfig.ignoreFile CedarBackup2.config.CollectConfig-class.html#ignoreFile CedarBackup2.config.CollectConfig._setCollectFiles CedarBackup2.config.CollectConfig-class.html#_setCollectFiles CedarBackup2.config.CollectConfig._setAbsoluteExcludePaths CedarBackup2.config.CollectConfig-class.html#_setAbsoluteExcludePaths CedarBackup2.config.CollectConfig._setCollectDirs CedarBackup2.config.CollectConfig-class.html#_setCollectDirs CedarBackup2.config.CollectConfig._getIgnoreFile CedarBackup2.config.CollectConfig-class.html#_getIgnoreFile CedarBackup2.config.CollectConfig._getAbsoluteExcludePaths 
CedarBackup2.config.CollectConfig-class.html#_getAbsoluteExcludePaths CedarBackup2.config.CollectConfig.collectMode CedarBackup2.config.CollectConfig-class.html#collectMode CedarBackup2.config.CollectConfig._getExcludePatterns CedarBackup2.config.CollectConfig-class.html#_getExcludePatterns CedarBackup2.config.CollectConfig.excludePatterns CedarBackup2.config.CollectConfig-class.html#excludePatterns CedarBackup2.config.CollectConfig.targetDir CedarBackup2.config.CollectConfig-class.html#targetDir CedarBackup2.config.CollectConfig.__repr__ CedarBackup2.config.CollectConfig-class.html#__repr__ CedarBackup2.config.CollectConfig._setTargetDir CedarBackup2.config.CollectConfig-class.html#_setTargetDir CedarBackup2.config.CollectDir CedarBackup2.config.CollectDir-class.html CedarBackup2.config.CollectDir._getCollectMode CedarBackup2.config.CollectDir-class.html#_getCollectMode CedarBackup2.config.CollectDir._getArchiveMode CedarBackup2.config.CollectDir-class.html#_getArchiveMode CedarBackup2.config.CollectDir.archiveMode CedarBackup2.config.CollectDir-class.html#archiveMode CedarBackup2.config.CollectDir.__str__ CedarBackup2.config.CollectDir-class.html#__str__ CedarBackup2.config.CollectDir._getAbsolutePath CedarBackup2.config.CollectDir-class.html#_getAbsolutePath CedarBackup2.config.CollectDir._setExcludePatterns CedarBackup2.config.CollectDir-class.html#_setExcludePatterns CedarBackup2.config.CollectDir.__init__ CedarBackup2.config.CollectDir-class.html#__init__ CedarBackup2.config.CollectDir._setCollectMode CedarBackup2.config.CollectDir-class.html#_setCollectMode CedarBackup2.config.CollectDir._setLinkDepth CedarBackup2.config.CollectDir-class.html#_setLinkDepth CedarBackup2.config.CollectDir.recursionLevel CedarBackup2.config.CollectDir-class.html#recursionLevel CedarBackup2.config.CollectDir.absolutePath CedarBackup2.config.CollectDir-class.html#absolutePath CedarBackup2.config.CollectDir.__cmp__ CedarBackup2.config.CollectDir-class.html#__cmp__ 
CedarBackup2.config.CollectDir._setIgnoreFile CedarBackup2.config.CollectDir-class.html#_setIgnoreFile CedarBackup2.config.CollectDir.absoluteExcludePaths CedarBackup2.config.CollectDir-class.html#absoluteExcludePaths CedarBackup2.config.CollectDir.relativeExcludePaths CedarBackup2.config.CollectDir-class.html#relativeExcludePaths CedarBackup2.config.CollectDir._setArchiveMode CedarBackup2.config.CollectDir-class.html#_setArchiveMode CedarBackup2.config.CollectDir._getDereference CedarBackup2.config.CollectDir-class.html#_getDereference CedarBackup2.config.CollectDir.ignoreFile CedarBackup2.config.CollectDir-class.html#ignoreFile CedarBackup2.config.CollectDir._getLinkDepth CedarBackup2.config.CollectDir-class.html#_getLinkDepth CedarBackup2.config.CollectDir.dereference CedarBackup2.config.CollectDir-class.html#dereference CedarBackup2.config.CollectDir._setAbsoluteExcludePaths CedarBackup2.config.CollectDir-class.html#_setAbsoluteExcludePaths CedarBackup2.config.CollectDir.linkDepth CedarBackup2.config.CollectDir-class.html#linkDepth CedarBackup2.config.CollectDir._getRelativeExcludePaths CedarBackup2.config.CollectDir-class.html#_getRelativeExcludePaths CedarBackup2.config.CollectDir._setRecursionLevel CedarBackup2.config.CollectDir-class.html#_setRecursionLevel CedarBackup2.config.CollectDir._getRecursionLevel CedarBackup2.config.CollectDir-class.html#_getRecursionLevel CedarBackup2.config.CollectDir._setDereference CedarBackup2.config.CollectDir-class.html#_setDereference CedarBackup2.config.CollectDir._getIgnoreFile CedarBackup2.config.CollectDir-class.html#_getIgnoreFile CedarBackup2.config.CollectDir._getAbsoluteExcludePaths CedarBackup2.config.CollectDir-class.html#_getAbsoluteExcludePaths CedarBackup2.config.CollectDir.collectMode CedarBackup2.config.CollectDir-class.html#collectMode CedarBackup2.config.CollectDir._setRelativeExcludePaths CedarBackup2.config.CollectDir-class.html#_setRelativeExcludePaths CedarBackup2.config.CollectDir.excludePatterns 
CedarBackup2.config.CollectDir-class.html#excludePatterns CedarBackup2.config.CollectDir._setAbsolutePath CedarBackup2.config.CollectDir-class.html#_setAbsolutePath CedarBackup2.config.CollectDir._getExcludePatterns CedarBackup2.config.CollectDir-class.html#_getExcludePatterns CedarBackup2.config.CollectDir.__repr__ CedarBackup2.config.CollectDir-class.html#__repr__ CedarBackup2.config.CollectFile CedarBackup2.config.CollectFile-class.html CedarBackup2.config.CollectFile._getCollectMode CedarBackup2.config.CollectFile-class.html#_getCollectMode CedarBackup2.config.CollectFile._getArchiveMode CedarBackup2.config.CollectFile-class.html#_getArchiveMode CedarBackup2.config.CollectFile.__str__ CedarBackup2.config.CollectFile-class.html#__str__ CedarBackup2.config.CollectFile._setArchiveMode CedarBackup2.config.CollectFile-class.html#_setArchiveMode CedarBackup2.config.CollectFile.__init__ CedarBackup2.config.CollectFile-class.html#__init__ CedarBackup2.config.CollectFile._setCollectMode CedarBackup2.config.CollectFile-class.html#_setCollectMode CedarBackup2.config.CollectFile.archiveMode CedarBackup2.config.CollectFile-class.html#archiveMode CedarBackup2.config.CollectFile.absolutePath CedarBackup2.config.CollectFile-class.html#absolutePath CedarBackup2.config.CollectFile.__cmp__ CedarBackup2.config.CollectFile-class.html#__cmp__ CedarBackup2.config.CollectFile._getAbsolutePath CedarBackup2.config.CollectFile-class.html#_getAbsolutePath CedarBackup2.config.CollectFile.collectMode CedarBackup2.config.CollectFile-class.html#collectMode CedarBackup2.config.CollectFile._setAbsolutePath CedarBackup2.config.CollectFile-class.html#_setAbsolutePath CedarBackup2.config.CollectFile.__repr__ CedarBackup2.config.CollectFile-class.html#__repr__ CedarBackup2.config.CommandOverride CedarBackup2.config.CommandOverride-class.html CedarBackup2.config.CommandOverride.__str__ CedarBackup2.config.CommandOverride-class.html#__str__ CedarBackup2.config.CommandOverride._getAbsolutePath 
CedarBackup2.config.CommandOverride-class.html#_getAbsolutePath CedarBackup2.config.CommandOverride.absolutePath CedarBackup2.config.CommandOverride-class.html#absolutePath CedarBackup2.config.CommandOverride.__cmp__ CedarBackup2.config.CommandOverride-class.html#__cmp__ CedarBackup2.config.CommandOverride._setCommand CedarBackup2.config.CommandOverride-class.html#_setCommand CedarBackup2.config.CommandOverride.command CedarBackup2.config.CommandOverride-class.html#command CedarBackup2.config.CommandOverride.__repr__ CedarBackup2.config.CommandOverride-class.html#__repr__ CedarBackup2.config.CommandOverride._setAbsolutePath CedarBackup2.config.CommandOverride-class.html#_setAbsolutePath CedarBackup2.config.CommandOverride.__init__ CedarBackup2.config.CommandOverride-class.html#__init__ CedarBackup2.config.CommandOverride._getCommand CedarBackup2.config.CommandOverride-class.html#_getCommand CedarBackup2.config.Config CedarBackup2.config.Config-class.html CedarBackup2.config.Config._addCollect CedarBackup2.config.Config-class.html#_addCollect CedarBackup2.config.Config.extractXml CedarBackup2.config.Config-class.html#extractXml CedarBackup2.config.Config._addStage CedarBackup2.config.Config-class.html#_addStage CedarBackup2.config.Config._getReference CedarBackup2.config.Config-class.html#_getReference CedarBackup2.config.Config.__str__ CedarBackup2.config.Config-class.html#__str__ CedarBackup2.config.Config._validateStage CedarBackup2.config.Config-class.html#_validateStage CedarBackup2.config.Config._addOptions CedarBackup2.config.Config-class.html#_addOptions CedarBackup2.config.Config._validatePurge CedarBackup2.config.Config-class.html#_validatePurge CedarBackup2.config.Config._parseXmlData CedarBackup2.config.Config-class.html#_parseXmlData CedarBackup2.config.Config._parseOverrides CedarBackup2.config.Config-class.html#_parseOverrides CedarBackup2.config.Config._setStore CedarBackup2.config.Config-class.html#_setStore CedarBackup2.config.Config._addReference 
CedarBackup2.config.Config-class.html#_addReference CedarBackup2.config.Config.__cmp__ CedarBackup2.config.Config-class.html#__cmp__ CedarBackup2.config.Config._validateStore CedarBackup2.config.Config-class.html#_validateStore CedarBackup2.config.Config._setPurge CedarBackup2.config.Config-class.html#_setPurge CedarBackup2.config.Config._validateExtensions CedarBackup2.config.Config-class.html#_validateExtensions CedarBackup2.config.Config._addExtendedAction CedarBackup2.config.Config-class.html#_addExtendedAction CedarBackup2.config.Config.collect CedarBackup2.config.Config-class.html#collect CedarBackup2.config.Config._validateContents CedarBackup2.config.Config-class.html#_validateContents CedarBackup2.config.Config.reference CedarBackup2.config.Config-class.html#reference CedarBackup2.config.Config._validateReference CedarBackup2.config.Config-class.html#_validateReference CedarBackup2.config.Config._addPeers CedarBackup2.config.Config-class.html#_addPeers CedarBackup2.config.Config._getOptions CedarBackup2.config.Config-class.html#_getOptions CedarBackup2.config.Config._validateOptions CedarBackup2.config.Config-class.html#_validateOptions CedarBackup2.config.Config._parseBlankBehavior CedarBackup2.config.Config-class.html#_parseBlankBehavior CedarBackup2.config.Config._getStage CedarBackup2.config.Config-class.html#_getStage CedarBackup2.config.Config._setCollect CedarBackup2.config.Config-class.html#_setCollect CedarBackup2.config.Config._parseReference CedarBackup2.config.Config-class.html#_parseReference CedarBackup2.config.Config._addLocalPeer CedarBackup2.config.Config-class.html#_addLocalPeer CedarBackup2.config.Config._parseExtensions CedarBackup2.config.Config-class.html#_parseExtensions CedarBackup2.config.Config._validatePeers CedarBackup2.config.Config-class.html#_validatePeers CedarBackup2.config.Config.stage CedarBackup2.config.Config-class.html#stage CedarBackup2.config.Config._getExtensions CedarBackup2.config.Config-class.html#_getExtensions 
CedarBackup2.config.Config._parseExclusions CedarBackup2.config.Config-class.html#_parseExclusions CedarBackup2.config.Config._parseStage CedarBackup2.config.Config-class.html#_parseStage CedarBackup2.config.Config._parseCollectDirs CedarBackup2.config.Config-class.html#_parseCollectDirs CedarBackup2.config.Config.extensions CedarBackup2.config.Config-class.html#extensions CedarBackup2.config.Config._addBlankBehavior CedarBackup2.config.Config-class.html#_addBlankBehavior CedarBackup2.config.Config._parseDependencies CedarBackup2.config.Config-class.html#_parseDependencies CedarBackup2.config.Config.options CedarBackup2.config.Config-class.html#options CedarBackup2.config.Config.__repr__ CedarBackup2.config.Config-class.html#__repr__ CedarBackup2.config.Config._parsePeers CedarBackup2.config.Config-class.html#_parsePeers CedarBackup2.config.Config._addCollectFile CedarBackup2.config.Config-class.html#_addCollectFile CedarBackup2.config.Config._parsePeerList CedarBackup2.config.Config-class.html#_parsePeerList CedarBackup2.config.Config._extractXml CedarBackup2.config.Config-class.html#_extractXml CedarBackup2.config.Config._validatePeerList CedarBackup2.config.Config-class.html#_validatePeerList CedarBackup2.config.Config._buildCommaSeparatedString CedarBackup2.config.Config-class.html#_buildCommaSeparatedString CedarBackup2.config.Config._addHook CedarBackup2.config.Config-class.html#_addHook CedarBackup2.config.Config._getCollect CedarBackup2.config.Config-class.html#_getCollect CedarBackup2.config.Config._parseHooks CedarBackup2.config.Config-class.html#_parseHooks CedarBackup2.config.Config._parseStore CedarBackup2.config.Config-class.html#_parseStore CedarBackup2.config.Config._setPeers CedarBackup2.config.Config-class.html#_setPeers CedarBackup2.config.Config._parseOptions CedarBackup2.config.Config-class.html#_parseOptions CedarBackup2.config.Config._getPeers CedarBackup2.config.Config-class.html#_getPeers CedarBackup2.config.Config._addStore 
CedarBackup2.config.Config-class.html#_addStore CedarBackup2.config.Config._addExtensions CedarBackup2.config.Config-class.html#_addExtensions CedarBackup2.config.Config.purge CedarBackup2.config.Config-class.html#purge CedarBackup2.config.Config.store CedarBackup2.config.Config-class.html#store CedarBackup2.config.Config._addOverride CedarBackup2.config.Config-class.html#_addOverride CedarBackup2.config.Config._addPurgeDir CedarBackup2.config.Config-class.html#_addPurgeDir CedarBackup2.config.Config._addDependencies CedarBackup2.config.Config-class.html#_addDependencies CedarBackup2.config.Config._addCollectDir CedarBackup2.config.Config-class.html#_addCollectDir CedarBackup2.config.Config._parsePurge CedarBackup2.config.Config-class.html#_parsePurge CedarBackup2.config.Config._addRemotePeer CedarBackup2.config.Config-class.html#_addRemotePeer CedarBackup2.config.Config.__init__ CedarBackup2.config.Config-class.html#__init__ CedarBackup2.config.Config._addPurge CedarBackup2.config.Config-class.html#_addPurge CedarBackup2.config.Config._setExtensions CedarBackup2.config.Config-class.html#_setExtensions CedarBackup2.config.Config._parsePurgeDirs CedarBackup2.config.Config-class.html#_parsePurgeDirs CedarBackup2.config.Config._parseCollect CedarBackup2.config.Config-class.html#_parseCollect CedarBackup2.config.Config._getStore CedarBackup2.config.Config-class.html#_getStore CedarBackup2.config.Config._setStage CedarBackup2.config.Config-class.html#_setStage CedarBackup2.config.Config._validateCollect CedarBackup2.config.Config-class.html#_validateCollect CedarBackup2.config.Config._getPurge CedarBackup2.config.Config-class.html#_getPurge CedarBackup2.config.Config.validate CedarBackup2.config.Config-class.html#validate CedarBackup2.config.Config._parseExtendedActions CedarBackup2.config.Config-class.html#_parseExtendedActions CedarBackup2.config.Config.peers CedarBackup2.config.Config-class.html#peers CedarBackup2.config.Config._parseCollectFiles 
CedarBackup2.config.Config-class.html#_parseCollectFiles CedarBackup2.config.Config._setOptions CedarBackup2.config.Config-class.html#_setOptions CedarBackup2.config.Config._setReference CedarBackup2.config.Config-class.html#_setReference CedarBackup2.config.ExtendedAction CedarBackup2.config.ExtendedAction-class.html CedarBackup2.config.ExtendedAction._getModule CedarBackup2.config.ExtendedAction-class.html#_getModule CedarBackup2.config.ExtendedAction.__str__ CedarBackup2.config.ExtendedAction-class.html#__str__ CedarBackup2.config.ExtendedAction.module CedarBackup2.config.ExtendedAction-class.html#module CedarBackup2.config.ExtendedAction._getName CedarBackup2.config.ExtendedAction-class.html#_getName CedarBackup2.config.ExtendedAction.__init__ CedarBackup2.config.ExtendedAction-class.html#__init__ CedarBackup2.config.ExtendedAction.index CedarBackup2.config.ExtendedAction-class.html#index CedarBackup2.config.ExtendedAction.__cmp__ CedarBackup2.config.ExtendedAction-class.html#__cmp__ CedarBackup2.config.ExtendedAction._getDependencies CedarBackup2.config.ExtendedAction-class.html#_getDependencies CedarBackup2.config.ExtendedAction.function CedarBackup2.config.ExtendedAction-class.html#function CedarBackup2.config.ExtendedAction._setIndex CedarBackup2.config.ExtendedAction-class.html#_setIndex CedarBackup2.config.ExtendedAction._getFunction CedarBackup2.config.ExtendedAction-class.html#_getFunction CedarBackup2.config.ExtendedAction._setDependencies CedarBackup2.config.ExtendedAction-class.html#_setDependencies CedarBackup2.config.ExtendedAction.dependencies CedarBackup2.config.ExtendedAction-class.html#dependencies CedarBackup2.config.ExtendedAction._setModule CedarBackup2.config.ExtendedAction-class.html#_setModule CedarBackup2.config.ExtendedAction._getIndex CedarBackup2.config.ExtendedAction-class.html#_getIndex CedarBackup2.config.ExtendedAction._setFunction CedarBackup2.config.ExtendedAction-class.html#_setFunction CedarBackup2.config.ExtendedAction.name 
CedarBackup2.config.ExtendedAction-class.html#name CedarBackup2.config.ExtendedAction.__repr__ CedarBackup2.config.ExtendedAction-class.html#__repr__ CedarBackup2.config.ExtendedAction._setName CedarBackup2.config.ExtendedAction-class.html#_setName CedarBackup2.config.ExtensionsConfig CedarBackup2.config.ExtensionsConfig-class.html CedarBackup2.config.ExtensionsConfig.orderMode CedarBackup2.config.ExtensionsConfig-class.html#orderMode CedarBackup2.config.ExtensionsConfig.__str__ CedarBackup2.config.ExtensionsConfig-class.html#__str__ CedarBackup2.config.ExtensionsConfig.actions CedarBackup2.config.ExtensionsConfig-class.html#actions CedarBackup2.config.ExtensionsConfig.__cmp__ CedarBackup2.config.ExtensionsConfig-class.html#__cmp__ CedarBackup2.config.ExtensionsConfig._setActions CedarBackup2.config.ExtensionsConfig-class.html#_setActions CedarBackup2.config.ExtensionsConfig._setOrderMode CedarBackup2.config.ExtensionsConfig-class.html#_setOrderMode CedarBackup2.config.ExtensionsConfig.__repr__ CedarBackup2.config.ExtensionsConfig-class.html#__repr__ CedarBackup2.config.ExtensionsConfig._getOrderMode CedarBackup2.config.ExtensionsConfig-class.html#_getOrderMode CedarBackup2.config.ExtensionsConfig._getActions CedarBackup2.config.ExtensionsConfig-class.html#_getActions CedarBackup2.config.ExtensionsConfig.__init__ CedarBackup2.config.ExtensionsConfig-class.html#__init__ CedarBackup2.config.LocalPeer CedarBackup2.config.LocalPeer-class.html CedarBackup2.config.LocalPeer.__str__ CedarBackup2.config.LocalPeer-class.html#__str__ CedarBackup2.config.LocalPeer._setIgnoreFailureMode CedarBackup2.config.LocalPeer-class.html#_setIgnoreFailureMode CedarBackup2.config.LocalPeer._getName CedarBackup2.config.LocalPeer-class.html#_getName CedarBackup2.config.LocalPeer.__init__ CedarBackup2.config.LocalPeer-class.html#__init__ CedarBackup2.config.LocalPeer.__cmp__ CedarBackup2.config.LocalPeer-class.html#__cmp__ CedarBackup2.config.LocalPeer._getIgnoreFailureMode 
CedarBackup2.config.LocalPeer-class.html#_getIgnoreFailureMode CedarBackup2.config.LocalPeer.ignoreFailureMode CedarBackup2.config.LocalPeer-class.html#ignoreFailureMode CedarBackup2.config.LocalPeer._getCollectDir CedarBackup2.config.LocalPeer-class.html#_getCollectDir CedarBackup2.config.LocalPeer.name CedarBackup2.config.LocalPeer-class.html#name CedarBackup2.config.LocalPeer.collectDir CedarBackup2.config.LocalPeer-class.html#collectDir CedarBackup2.config.LocalPeer._setCollectDir CedarBackup2.config.LocalPeer-class.html#_setCollectDir CedarBackup2.config.LocalPeer.__repr__ CedarBackup2.config.LocalPeer-class.html#__repr__ CedarBackup2.config.LocalPeer._setName CedarBackup2.config.LocalPeer-class.html#_setName CedarBackup2.config.OptionsConfig CedarBackup2.config.OptionsConfig-class.html CedarBackup2.config.OptionsConfig._getRcpCommand CedarBackup2.config.OptionsConfig-class.html#_getRcpCommand CedarBackup2.config.OptionsConfig._getWorkingDir CedarBackup2.config.OptionsConfig-class.html#_getWorkingDir CedarBackup2.config.OptionsConfig._setBackupUser CedarBackup2.config.OptionsConfig-class.html#_setBackupUser CedarBackup2.config.OptionsConfig.__str__ CedarBackup2.config.OptionsConfig-class.html#__str__ CedarBackup2.config.OptionsConfig.backupUser CedarBackup2.config.OptionsConfig-class.html#backupUser CedarBackup2.config.OptionsConfig._getStartingDay CedarBackup2.config.OptionsConfig-class.html#_getStartingDay CedarBackup2.config.OptionsConfig.managedActions CedarBackup2.config.OptionsConfig-class.html#managedActions CedarBackup2.config.OptionsConfig.replaceOverride CedarBackup2.config.OptionsConfig-class.html#replaceOverride CedarBackup2.config.OptionsConfig._getBackupUser CedarBackup2.config.OptionsConfig-class.html#_getBackupUser CedarBackup2.config.OptionsConfig.__init__ CedarBackup2.config.OptionsConfig-class.html#__init__ CedarBackup2.config.OptionsConfig._setBackupGroup CedarBackup2.config.OptionsConfig-class.html#_setBackupGroup 
CedarBackup2.config.OptionsConfig._setCbackCommand CedarBackup2.config.OptionsConfig-class.html#_setCbackCommand CedarBackup2.config.OptionsConfig._getCbackCommand CedarBackup2.config.OptionsConfig-class.html#_getCbackCommand CedarBackup2.config.OptionsConfig.workingDir CedarBackup2.config.OptionsConfig-class.html#workingDir CedarBackup2.config.OptionsConfig.__cmp__ CedarBackup2.config.OptionsConfig-class.html#__cmp__ CedarBackup2.config.OptionsConfig.hooks CedarBackup2.config.OptionsConfig-class.html#hooks CedarBackup2.config.OptionsConfig.backupGroup CedarBackup2.config.OptionsConfig-class.html#backupGroup CedarBackup2.config.OptionsConfig.startingDay CedarBackup2.config.OptionsConfig-class.html#startingDay CedarBackup2.config.OptionsConfig._getHooks CedarBackup2.config.OptionsConfig-class.html#_getHooks CedarBackup2.config.OptionsConfig._setWorkingDir CedarBackup2.config.OptionsConfig-class.html#_setWorkingDir CedarBackup2.config.OptionsConfig._getBackupGroup CedarBackup2.config.OptionsConfig-class.html#_getBackupGroup CedarBackup2.config.OptionsConfig.rshCommand CedarBackup2.config.OptionsConfig-class.html#rshCommand CedarBackup2.config.OptionsConfig.addOverride CedarBackup2.config.OptionsConfig-class.html#addOverride CedarBackup2.config.OptionsConfig._setManagedActions CedarBackup2.config.OptionsConfig-class.html#_setManagedActions CedarBackup2.config.OptionsConfig.rcpCommand CedarBackup2.config.OptionsConfig-class.html#rcpCommand CedarBackup2.config.OptionsConfig._setRcpCommand CedarBackup2.config.OptionsConfig-class.html#_setRcpCommand CedarBackup2.config.OptionsConfig.cbackCommand CedarBackup2.config.OptionsConfig-class.html#cbackCommand CedarBackup2.config.OptionsConfig.overrides CedarBackup2.config.OptionsConfig-class.html#overrides CedarBackup2.config.OptionsConfig._setOverrides CedarBackup2.config.OptionsConfig-class.html#_setOverrides CedarBackup2.config.OptionsConfig._setHooks CedarBackup2.config.OptionsConfig-class.html#_setHooks 
CedarBackup2.config.OptionsConfig._getManagedActions CedarBackup2.config.OptionsConfig-class.html#_getManagedActions CedarBackup2.config.OptionsConfig._getOverrides CedarBackup2.config.OptionsConfig-class.html#_getOverrides CedarBackup2.config.OptionsConfig.__repr__ CedarBackup2.config.OptionsConfig-class.html#__repr__ CedarBackup2.config.OptionsConfig._getRshCommand CedarBackup2.config.OptionsConfig-class.html#_getRshCommand CedarBackup2.config.OptionsConfig._setRshCommand CedarBackup2.config.OptionsConfig-class.html#_setRshCommand CedarBackup2.config.OptionsConfig._setStartingDay CedarBackup2.config.OptionsConfig-class.html#_setStartingDay CedarBackup2.config.PeersConfig CedarBackup2.config.PeersConfig-class.html CedarBackup2.config.PeersConfig.__str__ CedarBackup2.config.PeersConfig-class.html#__str__ CedarBackup2.config.PeersConfig._getRemotePeers CedarBackup2.config.PeersConfig-class.html#_getRemotePeers CedarBackup2.config.PeersConfig.localPeers CedarBackup2.config.PeersConfig-class.html#localPeers CedarBackup2.config.PeersConfig.__init__ CedarBackup2.config.PeersConfig-class.html#__init__ CedarBackup2.config.PeersConfig.hasPeers CedarBackup2.config.PeersConfig-class.html#hasPeers CedarBackup2.config.PeersConfig._setRemotePeers CedarBackup2.config.PeersConfig-class.html#_setRemotePeers CedarBackup2.config.PeersConfig.__cmp__ CedarBackup2.config.PeersConfig-class.html#__cmp__ CedarBackup2.config.PeersConfig._getLocalPeers CedarBackup2.config.PeersConfig-class.html#_getLocalPeers CedarBackup2.config.PeersConfig._setLocalPeers CedarBackup2.config.PeersConfig-class.html#_setLocalPeers CedarBackup2.config.PeersConfig.remotePeers CedarBackup2.config.PeersConfig-class.html#remotePeers CedarBackup2.config.PeersConfig.__repr__ CedarBackup2.config.PeersConfig-class.html#__repr__ CedarBackup2.config.PostActionHook CedarBackup2.config.PostActionHook-class.html CedarBackup2.config.ActionHook.__str__ CedarBackup2.config.ActionHook-class.html#__str__ 
CedarBackup2.config.ActionHook._getAction CedarBackup2.config.ActionHook-class.html#_getAction CedarBackup2.config.PostActionHook.__init__ CedarBackup2.config.PostActionHook-class.html#__init__ CedarBackup2.config.ActionHook.before CedarBackup2.config.ActionHook-class.html#before CedarBackup2.config.ActionHook._getBefore CedarBackup2.config.ActionHook-class.html#_getBefore CedarBackup2.config.ActionHook._setAction CedarBackup2.config.ActionHook-class.html#_setAction CedarBackup2.config.ActionHook.__cmp__ CedarBackup2.config.ActionHook-class.html#__cmp__ CedarBackup2.config.ActionHook._getAfter CedarBackup2.config.ActionHook-class.html#_getAfter CedarBackup2.config.ActionHook._getCommand CedarBackup2.config.ActionHook-class.html#_getCommand CedarBackup2.config.ActionHook.after CedarBackup2.config.ActionHook-class.html#after CedarBackup2.config.ActionHook._setCommand CedarBackup2.config.ActionHook-class.html#_setCommand CedarBackup2.config.ActionHook.command CedarBackup2.config.ActionHook-class.html#command CedarBackup2.config.PostActionHook.__repr__ CedarBackup2.config.PostActionHook-class.html#__repr__ CedarBackup2.config.ActionHook.action CedarBackup2.config.ActionHook-class.html#action CedarBackup2.config.PreActionHook CedarBackup2.config.PreActionHook-class.html CedarBackup2.config.ActionHook.__str__ CedarBackup2.config.ActionHook-class.html#__str__ CedarBackup2.config.ActionHook._getAction CedarBackup2.config.ActionHook-class.html#_getAction CedarBackup2.config.PreActionHook.__init__ CedarBackup2.config.PreActionHook-class.html#__init__ CedarBackup2.config.ActionHook.before CedarBackup2.config.ActionHook-class.html#before CedarBackup2.config.ActionHook._getBefore CedarBackup2.config.ActionHook-class.html#_getBefore CedarBackup2.config.ActionHook._setAction CedarBackup2.config.ActionHook-class.html#_setAction CedarBackup2.config.ActionHook.__cmp__ CedarBackup2.config.ActionHook-class.html#__cmp__ CedarBackup2.config.ActionHook._getAfter 
CedarBackup2.config.ActionHook-class.html#_getAfter CedarBackup2.config.ActionHook._getCommand CedarBackup2.config.ActionHook-class.html#_getCommand CedarBackup2.config.ActionHook.after CedarBackup2.config.ActionHook-class.html#after CedarBackup2.config.ActionHook._setCommand CedarBackup2.config.ActionHook-class.html#_setCommand CedarBackup2.config.ActionHook.command CedarBackup2.config.ActionHook-class.html#command CedarBackup2.config.PreActionHook.__repr__ CedarBackup2.config.PreActionHook-class.html#__repr__ CedarBackup2.config.ActionHook.action CedarBackup2.config.ActionHook-class.html#action CedarBackup2.config.PurgeConfig CedarBackup2.config.PurgeConfig-class.html CedarBackup2.config.PurgeConfig.__str__ CedarBackup2.config.PurgeConfig-class.html#__str__ CedarBackup2.config.PurgeConfig.__cmp__ CedarBackup2.config.PurgeConfig-class.html#__cmp__ CedarBackup2.config.PurgeConfig._setPurgeDirs CedarBackup2.config.PurgeConfig-class.html#_setPurgeDirs CedarBackup2.config.PurgeConfig.purgeDirs CedarBackup2.config.PurgeConfig-class.html#purgeDirs CedarBackup2.config.PurgeConfig.__repr__ CedarBackup2.config.PurgeConfig-class.html#__repr__ CedarBackup2.config.PurgeConfig.__init__ CedarBackup2.config.PurgeConfig-class.html#__init__ CedarBackup2.config.PurgeConfig._getPurgeDirs CedarBackup2.config.PurgeConfig-class.html#_getPurgeDirs CedarBackup2.config.PurgeDir CedarBackup2.config.PurgeDir-class.html CedarBackup2.config.PurgeDir._getRetainDays CedarBackup2.config.PurgeDir-class.html#_getRetainDays CedarBackup2.config.PurgeDir.__str__ CedarBackup2.config.PurgeDir-class.html#__str__ CedarBackup2.config.PurgeDir._getAbsolutePath CedarBackup2.config.PurgeDir-class.html#_getAbsolutePath CedarBackup2.config.PurgeDir.retainDays CedarBackup2.config.PurgeDir-class.html#retainDays CedarBackup2.config.PurgeDir._setRetainDays CedarBackup2.config.PurgeDir-class.html#_setRetainDays CedarBackup2.config.PurgeDir.absolutePath CedarBackup2.config.PurgeDir-class.html#absolutePath 
CedarBackup2.config.PurgeDir.__cmp__ CedarBackup2.config.PurgeDir-class.html#__cmp__ CedarBackup2.config.PurgeDir.__repr__ CedarBackup2.config.PurgeDir-class.html#__repr__ CedarBackup2.config.PurgeDir._setAbsolutePath CedarBackup2.config.PurgeDir-class.html#_setAbsolutePath CedarBackup2.config.PurgeDir.__init__ CedarBackup2.config.PurgeDir-class.html#__init__ CedarBackup2.config.ReferenceConfig CedarBackup2.config.ReferenceConfig-class.html CedarBackup2.config.ReferenceConfig._setAuthor CedarBackup2.config.ReferenceConfig-class.html#_setAuthor CedarBackup2.config.ReferenceConfig.__str__ CedarBackup2.config.ReferenceConfig-class.html#__str__ CedarBackup2.config.ReferenceConfig.__init__ CedarBackup2.config.ReferenceConfig-class.html#__init__ CedarBackup2.config.ReferenceConfig.generator CedarBackup2.config.ReferenceConfig-class.html#generator CedarBackup2.config.ReferenceConfig.author CedarBackup2.config.ReferenceConfig-class.html#author CedarBackup2.config.ReferenceConfig._getGenerator CedarBackup2.config.ReferenceConfig-class.html#_getGenerator CedarBackup2.config.ReferenceConfig.__cmp__ CedarBackup2.config.ReferenceConfig-class.html#__cmp__ CedarBackup2.config.ReferenceConfig.revision CedarBackup2.config.ReferenceConfig-class.html#revision CedarBackup2.config.ReferenceConfig.description CedarBackup2.config.ReferenceConfig-class.html#description CedarBackup2.config.ReferenceConfig._setGenerator CedarBackup2.config.ReferenceConfig-class.html#_setGenerator CedarBackup2.config.ReferenceConfig._setDescription CedarBackup2.config.ReferenceConfig-class.html#_setDescription CedarBackup2.config.ReferenceConfig._setRevision CedarBackup2.config.ReferenceConfig-class.html#_setRevision CedarBackup2.config.ReferenceConfig._getRevision CedarBackup2.config.ReferenceConfig-class.html#_getRevision CedarBackup2.config.ReferenceConfig._getAuthor CedarBackup2.config.ReferenceConfig-class.html#_getAuthor CedarBackup2.config.ReferenceConfig._getDescription 
CedarBackup2.config.ReferenceConfig-class.html#_getDescription CedarBackup2.config.ReferenceConfig.__repr__ CedarBackup2.config.ReferenceConfig-class.html#__repr__ CedarBackup2.config.RemotePeer CedarBackup2.config.RemotePeer-class.html CedarBackup2.config.RemotePeer._getRcpCommand CedarBackup2.config.RemotePeer-class.html#_getRcpCommand CedarBackup2.config.RemotePeer.managed CedarBackup2.config.RemotePeer-class.html#managed CedarBackup2.config.RemotePeer.__str__ CedarBackup2.config.RemotePeer-class.html#__str__ CedarBackup2.config.RemotePeer.cbackCommand CedarBackup2.config.RemotePeer-class.html#cbackCommand CedarBackup2.config.RemotePeer._setIgnoreFailureMode CedarBackup2.config.RemotePeer-class.html#_setIgnoreFailureMode CedarBackup2.config.RemotePeer.managedActions CedarBackup2.config.RemotePeer-class.html#managedActions CedarBackup2.config.RemotePeer._getName CedarBackup2.config.RemotePeer-class.html#_getName CedarBackup2.config.RemotePeer.__init__ CedarBackup2.config.RemotePeer-class.html#__init__ CedarBackup2.config.RemotePeer._setCbackCommand CedarBackup2.config.RemotePeer-class.html#_setCbackCommand CedarBackup2.config.RemotePeer._getCbackCommand CedarBackup2.config.RemotePeer-class.html#_getCbackCommand CedarBackup2.config.RemotePeer.remoteUser CedarBackup2.config.RemotePeer-class.html#remoteUser CedarBackup2.config.RemotePeer.__cmp__ CedarBackup2.config.RemotePeer-class.html#__cmp__ CedarBackup2.config.RemotePeer._getIgnoreFailureMode CedarBackup2.config.RemotePeer-class.html#_getIgnoreFailureMode CedarBackup2.config.RemotePeer.name CedarBackup2.config.RemotePeer-class.html#name CedarBackup2.config.RemotePeer.ignoreFailureMode CedarBackup2.config.RemotePeer-class.html#ignoreFailureMode CedarBackup2.config.RemotePeer._setManaged CedarBackup2.config.RemotePeer-class.html#_setManaged CedarBackup2.config.RemotePeer._setRemoteUser CedarBackup2.config.RemotePeer-class.html#_setRemoteUser CedarBackup2.config.RemotePeer.rshCommand 
CedarBackup2.config.RemotePeer-class.html#rshCommand CedarBackup2.config.RemotePeer._getManaged CedarBackup2.config.RemotePeer-class.html#_getManaged CedarBackup2.config.RemotePeer._getCollectDir CedarBackup2.config.RemotePeer-class.html#_getCollectDir CedarBackup2.config.RemotePeer._setManagedActions CedarBackup2.config.RemotePeer-class.html#_setManagedActions CedarBackup2.config.RemotePeer.rcpCommand CedarBackup2.config.RemotePeer-class.html#rcpCommand CedarBackup2.config.RemotePeer._setRcpCommand CedarBackup2.config.RemotePeer-class.html#_setRcpCommand CedarBackup2.config.RemotePeer.collectDir CedarBackup2.config.RemotePeer-class.html#collectDir CedarBackup2.config.RemotePeer._setCollectDir CedarBackup2.config.RemotePeer-class.html#_setCollectDir CedarBackup2.config.RemotePeer._getManagedActions CedarBackup2.config.RemotePeer-class.html#_getManagedActions CedarBackup2.config.RemotePeer._getRemoteUser CedarBackup2.config.RemotePeer-class.html#_getRemoteUser CedarBackup2.config.RemotePeer.__repr__ CedarBackup2.config.RemotePeer-class.html#__repr__ CedarBackup2.config.RemotePeer._setName CedarBackup2.config.RemotePeer-class.html#_setName CedarBackup2.config.RemotePeer._getRshCommand CedarBackup2.config.RemotePeer-class.html#_getRshCommand CedarBackup2.config.RemotePeer._setRshCommand CedarBackup2.config.RemotePeer-class.html#_setRshCommand CedarBackup2.config.StageConfig CedarBackup2.config.StageConfig-class.html CedarBackup2.config.StageConfig.__str__ CedarBackup2.config.StageConfig-class.html#__str__ CedarBackup2.config.StageConfig._getRemotePeers CedarBackup2.config.StageConfig-class.html#_getRemotePeers CedarBackup2.config.StageConfig.localPeers CedarBackup2.config.StageConfig-class.html#localPeers CedarBackup2.config.StageConfig.__init__ CedarBackup2.config.StageConfig-class.html#__init__ CedarBackup2.config.StageConfig.hasPeers CedarBackup2.config.StageConfig-class.html#hasPeers CedarBackup2.config.StageConfig._setRemotePeers 
CedarBackup2.config.StageConfig-class.html#_setRemotePeers CedarBackup2.config.StageConfig._getTargetDir CedarBackup2.config.StageConfig-class.html#_getTargetDir CedarBackup2.config.StageConfig.__cmp__ CedarBackup2.config.StageConfig-class.html#__cmp__ CedarBackup2.config.StageConfig._getLocalPeers CedarBackup2.config.StageConfig-class.html#_getLocalPeers CedarBackup2.config.StageConfig._setLocalPeers CedarBackup2.config.StageConfig-class.html#_setLocalPeers CedarBackup2.config.StageConfig.remotePeers CedarBackup2.config.StageConfig-class.html#remotePeers CedarBackup2.config.StageConfig.targetDir CedarBackup2.config.StageConfig-class.html#targetDir CedarBackup2.config.StageConfig.__repr__ CedarBackup2.config.StageConfig-class.html#__repr__ CedarBackup2.config.StageConfig._setTargetDir CedarBackup2.config.StageConfig-class.html#_setTargetDir CedarBackup2.config.StoreConfig CedarBackup2.config.StoreConfig-class.html CedarBackup2.config.StoreConfig.__str__ CedarBackup2.config.StoreConfig-class.html#__str__ CedarBackup2.config.StoreConfig._setEjectDelay CedarBackup2.config.StoreConfig-class.html#_setEjectDelay CedarBackup2.config.StoreConfig._getDevicePath CedarBackup2.config.StoreConfig-class.html#_getDevicePath CedarBackup2.config.StoreConfig._setDeviceScsiId CedarBackup2.config.StoreConfig-class.html#_setDeviceScsiId CedarBackup2.config.StoreConfig._setDevicePath CedarBackup2.config.StoreConfig-class.html#_setDevicePath CedarBackup2.config.StoreConfig._getDeviceScsiId CedarBackup2.config.StoreConfig-class.html#_getDeviceScsiId CedarBackup2.config.StoreConfig._setSourceDir CedarBackup2.config.StoreConfig-class.html#_setSourceDir CedarBackup2.config.StoreConfig.__init__ CedarBackup2.config.StoreConfig-class.html#__init__ CedarBackup2.config.StoreConfig.refreshMediaDelay CedarBackup2.config.StoreConfig-class.html#refreshMediaDelay CedarBackup2.config.StoreConfig.sourceDir CedarBackup2.config.StoreConfig-class.html#sourceDir 
CedarBackup2.config.StoreConfig._getCheckMedia CedarBackup2.config.StoreConfig-class.html#_getCheckMedia CedarBackup2.config.StoreConfig.mediaType CedarBackup2.config.StoreConfig-class.html#mediaType CedarBackup2.config.StoreConfig.__cmp__ CedarBackup2.config.StoreConfig-class.html#__cmp__ CedarBackup2.config.StoreConfig._setNoEject CedarBackup2.config.StoreConfig-class.html#_setNoEject CedarBackup2.config.StoreConfig.warnMidnite CedarBackup2.config.StoreConfig-class.html#warnMidnite CedarBackup2.config.StoreConfig._setWarnMidnite CedarBackup2.config.StoreConfig-class.html#_setWarnMidnite CedarBackup2.config.StoreConfig.deviceType CedarBackup2.config.StoreConfig-class.html#deviceType CedarBackup2.config.StoreConfig.driveSpeed CedarBackup2.config.StoreConfig-class.html#driveSpeed CedarBackup2.config.StoreConfig._getMediaType CedarBackup2.config.StoreConfig-class.html#_getMediaType CedarBackup2.config.StoreConfig._getDeviceType CedarBackup2.config.StoreConfig-class.html#_getDeviceType CedarBackup2.config.StoreConfig.noEject CedarBackup2.config.StoreConfig-class.html#noEject CedarBackup2.config.StoreConfig._getBlankBehavior CedarBackup2.config.StoreConfig-class.html#_getBlankBehavior CedarBackup2.config.StoreConfig._getWarnMidnite CedarBackup2.config.StoreConfig-class.html#_getWarnMidnite CedarBackup2.config.StoreConfig._setMediaType CedarBackup2.config.StoreConfig-class.html#_setMediaType CedarBackup2.config.StoreConfig.deviceScsiId CedarBackup2.config.StoreConfig-class.html#deviceScsiId CedarBackup2.config.StoreConfig.blankBehavior CedarBackup2.config.StoreConfig-class.html#blankBehavior CedarBackup2.config.StoreConfig._getDriveSpeed CedarBackup2.config.StoreConfig-class.html#_getDriveSpeed CedarBackup2.config.StoreConfig._setCheckData CedarBackup2.config.StoreConfig-class.html#_setCheckData CedarBackup2.config.StoreConfig._setRefreshMediaDelay CedarBackup2.config.StoreConfig-class.html#_setRefreshMediaDelay CedarBackup2.config.StoreConfig.devicePath 
CedarBackup2.config.StoreConfig-class.html#devicePath CedarBackup2.config.StoreConfig.checkData CedarBackup2.config.StoreConfig-class.html#checkData CedarBackup2.config.StoreConfig._setDriveSpeed CedarBackup2.config.StoreConfig-class.html#_setDriveSpeed CedarBackup2.config.StoreConfig._setDeviceType CedarBackup2.config.StoreConfig-class.html#_setDeviceType CedarBackup2.config.StoreConfig.checkMedia CedarBackup2.config.StoreConfig-class.html#checkMedia CedarBackup2.config.StoreConfig._getEjectDelay CedarBackup2.config.StoreConfig-class.html#_getEjectDelay CedarBackup2.config.StoreConfig._getRefreshMediaDelay CedarBackup2.config.StoreConfig-class.html#_getRefreshMediaDelay CedarBackup2.config.StoreConfig._getNoEject CedarBackup2.config.StoreConfig-class.html#_getNoEject CedarBackup2.config.StoreConfig._getSourceDir CedarBackup2.config.StoreConfig-class.html#_getSourceDir CedarBackup2.config.StoreConfig._setCheckMedia CedarBackup2.config.StoreConfig-class.html#_setCheckMedia CedarBackup2.config.StoreConfig.__repr__ CedarBackup2.config.StoreConfig-class.html#__repr__ CedarBackup2.config.StoreConfig.ejectDelay CedarBackup2.config.StoreConfig-class.html#ejectDelay CedarBackup2.config.StoreConfig._setBlankBehavior CedarBackup2.config.StoreConfig-class.html#_setBlankBehavior CedarBackup2.config.StoreConfig._getCheckData CedarBackup2.config.StoreConfig-class.html#_getCheckData CedarBackup2.extend.amazons3.AmazonS3Config CedarBackup2.extend.amazons3.AmazonS3Config-class.html CedarBackup2.extend.amazons3.AmazonS3Config.__str__ CedarBackup2.extend.amazons3.AmazonS3Config-class.html#__str__ CedarBackup2.extend.amazons3.AmazonS3Config.encryptCommand CedarBackup2.extend.amazons3.AmazonS3Config-class.html#encryptCommand CedarBackup2.extend.amazons3.AmazonS3Config._getS3Bucket CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_getS3Bucket CedarBackup2.extend.amazons3.AmazonS3Config._setIncrementalBackupSizeLimit 
CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_setIncrementalBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config._getFullBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_getFullBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config.__init__ CedarBackup2.extend.amazons3.AmazonS3Config-class.html#__init__ CedarBackup2.extend.amazons3.AmazonS3Config._getEncryptCommand CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_getEncryptCommand CedarBackup2.extend.amazons3.AmazonS3Config.__cmp__ CedarBackup2.extend.amazons3.AmazonS3Config-class.html#__cmp__ CedarBackup2.extend.amazons3.AmazonS3Config.s3Bucket CedarBackup2.extend.amazons3.AmazonS3Config-class.html#s3Bucket CedarBackup2.extend.amazons3.AmazonS3Config.warnMidnite CedarBackup2.extend.amazons3.AmazonS3Config-class.html#warnMidnite CedarBackup2.extend.amazons3.AmazonS3Config._setWarnMidnite CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_setWarnMidnite CedarBackup2.extend.amazons3.AmazonS3Config._getWarnMidnite CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_getWarnMidnite CedarBackup2.extend.amazons3.AmazonS3Config._getIncrementalBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_getIncrementalBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config._setEncryptCommand CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_setEncryptCommand CedarBackup2.extend.amazons3.AmazonS3Config._setS3Bucket CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_setS3Bucket CedarBackup2.extend.amazons3.AmazonS3Config.incrementalBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config-class.html#incrementalBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config.fullBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config-class.html#fullBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config.__repr__ CedarBackup2.extend.amazons3.AmazonS3Config-class.html#__repr__ 
CedarBackup2.extend.amazons3.AmazonS3Config._setFullBackupSizeLimit CedarBackup2.extend.amazons3.AmazonS3Config-class.html#_setFullBackupSizeLimit CedarBackup2.extend.amazons3.LocalConfig CedarBackup2.extend.amazons3.LocalConfig-class.html CedarBackup2.extend.amazons3.LocalConfig.__str__ CedarBackup2.extend.amazons3.LocalConfig-class.html#__str__ CedarBackup2.extend.amazons3.LocalConfig._parseXmlData CedarBackup2.extend.amazons3.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.amazons3.LocalConfig.__init__ CedarBackup2.extend.amazons3.LocalConfig-class.html#__init__ CedarBackup2.extend.amazons3.LocalConfig.__cmp__ CedarBackup2.extend.amazons3.LocalConfig-class.html#__cmp__ CedarBackup2.extend.amazons3.LocalConfig._getAmazonS3 CedarBackup2.extend.amazons3.LocalConfig-class.html#_getAmazonS3 CedarBackup2.extend.amazons3.LocalConfig._parseAmazonS3 CedarBackup2.extend.amazons3.LocalConfig-class.html#_parseAmazonS3 CedarBackup2.extend.amazons3.LocalConfig.addConfig CedarBackup2.extend.amazons3.LocalConfig-class.html#addConfig CedarBackup2.extend.amazons3.LocalConfig.amazons3 CedarBackup2.extend.amazons3.LocalConfig-class.html#amazons3 CedarBackup2.extend.amazons3.LocalConfig.validate CedarBackup2.extend.amazons3.LocalConfig-class.html#validate CedarBackup2.extend.amazons3.LocalConfig._setAmazonS3 CedarBackup2.extend.amazons3.LocalConfig-class.html#_setAmazonS3 CedarBackup2.extend.amazons3.LocalConfig.__repr__ CedarBackup2.extend.amazons3.LocalConfig-class.html#__repr__ CedarBackup2.extend.capacity.CapacityConfig CedarBackup2.extend.capacity.CapacityConfig-class.html CedarBackup2.extend.capacity.CapacityConfig._setMaxPercentage CedarBackup2.extend.capacity.CapacityConfig-class.html#_setMaxPercentage CedarBackup2.extend.capacity.CapacityConfig.__str__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__str__ CedarBackup2.extend.capacity.CapacityConfig.__cmp__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__cmp__ 
CedarBackup2.extend.capacity.CapacityConfig._getMaxPercentage CedarBackup2.extend.capacity.CapacityConfig-class.html#_getMaxPercentage CedarBackup2.extend.capacity.CapacityConfig.__repr__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__repr__ CedarBackup2.extend.capacity.CapacityConfig.maxPercentage CedarBackup2.extend.capacity.CapacityConfig-class.html#maxPercentage CedarBackup2.extend.capacity.CapacityConfig._setMinBytes CedarBackup2.extend.capacity.CapacityConfig-class.html#_setMinBytes CedarBackup2.extend.capacity.CapacityConfig._getMinBytes CedarBackup2.extend.capacity.CapacityConfig-class.html#_getMinBytes CedarBackup2.extend.capacity.CapacityConfig.minBytes CedarBackup2.extend.capacity.CapacityConfig-class.html#minBytes CedarBackup2.extend.capacity.CapacityConfig.__init__ CedarBackup2.extend.capacity.CapacityConfig-class.html#__init__ CedarBackup2.extend.capacity.LocalConfig CedarBackup2.extend.capacity.LocalConfig-class.html CedarBackup2.extend.capacity.LocalConfig.__str__ CedarBackup2.extend.capacity.LocalConfig-class.html#__str__ CedarBackup2.extend.capacity.LocalConfig._addPercentageQuantity CedarBackup2.extend.capacity.LocalConfig-class.html#_addPercentageQuantity CedarBackup2.extend.capacity.LocalConfig._parseXmlData CedarBackup2.extend.capacity.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.capacity.LocalConfig.__init__ CedarBackup2.extend.capacity.LocalConfig-class.html#__init__ CedarBackup2.extend.capacity.LocalConfig.capacity CedarBackup2.extend.capacity.LocalConfig-class.html#capacity CedarBackup2.extend.capacity.LocalConfig.__cmp__ CedarBackup2.extend.capacity.LocalConfig-class.html#__cmp__ CedarBackup2.extend.capacity.LocalConfig._readPercentageQuantity CedarBackup2.extend.capacity.LocalConfig-class.html#_readPercentageQuantity CedarBackup2.extend.capacity.LocalConfig._parseCapacity CedarBackup2.extend.capacity.LocalConfig-class.html#_parseCapacity CedarBackup2.extend.capacity.LocalConfig._getCapacity 
CedarBackup2.extend.capacity.LocalConfig-class.html#_getCapacity CedarBackup2.extend.capacity.LocalConfig.addConfig CedarBackup2.extend.capacity.LocalConfig-class.html#addConfig CedarBackup2.extend.capacity.LocalConfig.validate CedarBackup2.extend.capacity.LocalConfig-class.html#validate CedarBackup2.extend.capacity.LocalConfig.__repr__ CedarBackup2.extend.capacity.LocalConfig-class.html#__repr__ CedarBackup2.extend.capacity.LocalConfig._setCapacity CedarBackup2.extend.capacity.LocalConfig-class.html#_setCapacity CedarBackup2.extend.capacity.PercentageQuantity CedarBackup2.extend.capacity.PercentageQuantity-class.html CedarBackup2.extend.capacity.PercentageQuantity._setQuantity CedarBackup2.extend.capacity.PercentageQuantity-class.html#_setQuantity CedarBackup2.extend.capacity.PercentageQuantity._getPercentage CedarBackup2.extend.capacity.PercentageQuantity-class.html#_getPercentage CedarBackup2.extend.capacity.PercentageQuantity.__str__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__str__ CedarBackup2.extend.capacity.PercentageQuantity.__cmp__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__cmp__ CedarBackup2.extend.capacity.PercentageQuantity.__repr__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__repr__ CedarBackup2.extend.capacity.PercentageQuantity._getQuantity CedarBackup2.extend.capacity.PercentageQuantity-class.html#_getQuantity CedarBackup2.extend.capacity.PercentageQuantity.percentage CedarBackup2.extend.capacity.PercentageQuantity-class.html#percentage CedarBackup2.extend.capacity.PercentageQuantity.__init__ CedarBackup2.extend.capacity.PercentageQuantity-class.html#__init__ CedarBackup2.extend.capacity.PercentageQuantity.quantity CedarBackup2.extend.capacity.PercentageQuantity-class.html#quantity CedarBackup2.extend.encrypt.EncryptConfig CedarBackup2.extend.encrypt.EncryptConfig-class.html CedarBackup2.extend.encrypt.EncryptConfig._getEncryptMode 
CedarBackup2.extend.encrypt.EncryptConfig-class.html#_getEncryptMode CedarBackup2.extend.encrypt.EncryptConfig.encryptMode CedarBackup2.extend.encrypt.EncryptConfig-class.html#encryptMode CedarBackup2.extend.encrypt.EncryptConfig.__str__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__str__ CedarBackup2.extend.encrypt.EncryptConfig.__cmp__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__cmp__ CedarBackup2.extend.encrypt.EncryptConfig._setEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig-class.html#_setEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig.__init__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__init__ CedarBackup2.extend.encrypt.EncryptConfig.encryptTarget CedarBackup2.extend.encrypt.EncryptConfig-class.html#encryptTarget CedarBackup2.extend.encrypt.EncryptConfig._setEncryptMode CedarBackup2.extend.encrypt.EncryptConfig-class.html#_setEncryptMode CedarBackup2.extend.encrypt.EncryptConfig._getEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig-class.html#_getEncryptTarget CedarBackup2.extend.encrypt.EncryptConfig.__repr__ CedarBackup2.extend.encrypt.EncryptConfig-class.html#__repr__ CedarBackup2.extend.encrypt.LocalConfig CedarBackup2.extend.encrypt.LocalConfig-class.html CedarBackup2.extend.encrypt.LocalConfig.__str__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__str__ CedarBackup2.extend.encrypt.LocalConfig._parseXmlData CedarBackup2.extend.encrypt.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.encrypt.LocalConfig.__init__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__init__ CedarBackup2.extend.encrypt.LocalConfig._parseEncrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#_parseEncrypt CedarBackup2.extend.encrypt.LocalConfig.encrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#encrypt CedarBackup2.extend.encrypt.LocalConfig._getEncrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#_getEncrypt CedarBackup2.extend.encrypt.LocalConfig.__cmp__ 
CedarBackup2.extend.encrypt.LocalConfig-class.html#__cmp__ CedarBackup2.extend.encrypt.LocalConfig.addConfig CedarBackup2.extend.encrypt.LocalConfig-class.html#addConfig CedarBackup2.extend.encrypt.LocalConfig.validate CedarBackup2.extend.encrypt.LocalConfig-class.html#validate CedarBackup2.extend.encrypt.LocalConfig._setEncrypt CedarBackup2.extend.encrypt.LocalConfig-class.html#_setEncrypt CedarBackup2.extend.encrypt.LocalConfig.__repr__ CedarBackup2.extend.encrypt.LocalConfig-class.html#__repr__ CedarBackup2.extend.mbox.LocalConfig CedarBackup2.extend.mbox.LocalConfig-class.html CedarBackup2.extend.mbox.LocalConfig.__str__ CedarBackup2.extend.mbox.LocalConfig-class.html#__str__ CedarBackup2.extend.mbox.LocalConfig._parseXmlData CedarBackup2.extend.mbox.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.mbox.LocalConfig.__init__ CedarBackup2.extend.mbox.LocalConfig-class.html#__init__ CedarBackup2.extend.mbox.LocalConfig.__cmp__ CedarBackup2.extend.mbox.LocalConfig-class.html#__cmp__ CedarBackup2.extend.mbox.LocalConfig.addConfig CedarBackup2.extend.mbox.LocalConfig-class.html#addConfig CedarBackup2.extend.mbox.LocalConfig.validate CedarBackup2.extend.mbox.LocalConfig-class.html#validate CedarBackup2.extend.mbox.LocalConfig._addMboxDir CedarBackup2.extend.mbox.LocalConfig-class.html#_addMboxDir CedarBackup2.extend.mbox.LocalConfig._parseMboxFiles CedarBackup2.extend.mbox.LocalConfig-class.html#_parseMboxFiles CedarBackup2.extend.mbox.LocalConfig._getMbox CedarBackup2.extend.mbox.LocalConfig-class.html#_getMbox CedarBackup2.extend.mbox.LocalConfig._addMboxFile CedarBackup2.extend.mbox.LocalConfig-class.html#_addMboxFile CedarBackup2.extend.mbox.LocalConfig._parseExclusions CedarBackup2.extend.mbox.LocalConfig-class.html#_parseExclusions CedarBackup2.extend.mbox.LocalConfig._setMbox CedarBackup2.extend.mbox.LocalConfig-class.html#_setMbox CedarBackup2.extend.mbox.LocalConfig._parseMbox CedarBackup2.extend.mbox.LocalConfig-class.html#_parseMbox 
CedarBackup2.extend.mbox.LocalConfig.__repr__ CedarBackup2.extend.mbox.LocalConfig-class.html#__repr__ CedarBackup2.extend.mbox.LocalConfig.mbox CedarBackup2.extend.mbox.LocalConfig-class.html#mbox CedarBackup2.extend.mbox.LocalConfig._parseMboxDirs CedarBackup2.extend.mbox.LocalConfig-class.html#_parseMboxDirs CedarBackup2.extend.mbox.MboxConfig CedarBackup2.extend.mbox.MboxConfig-class.html CedarBackup2.extend.mbox.MboxConfig._getCollectMode CedarBackup2.extend.mbox.MboxConfig-class.html#_getCollectMode CedarBackup2.extend.mbox.MboxConfig.mboxFiles CedarBackup2.extend.mbox.MboxConfig-class.html#mboxFiles CedarBackup2.extend.mbox.MboxConfig.__str__ CedarBackup2.extend.mbox.MboxConfig-class.html#__str__ CedarBackup2.extend.mbox.MboxConfig.__init__ CedarBackup2.extend.mbox.MboxConfig-class.html#__init__ CedarBackup2.extend.mbox.MboxConfig._setCollectMode CedarBackup2.extend.mbox.MboxConfig-class.html#_setCollectMode CedarBackup2.extend.mbox.MboxConfig._getMboxFiles CedarBackup2.extend.mbox.MboxConfig-class.html#_getMboxFiles CedarBackup2.extend.mbox.MboxConfig.__cmp__ CedarBackup2.extend.mbox.MboxConfig-class.html#__cmp__ CedarBackup2.extend.mbox.MboxConfig._setMboxFiles CedarBackup2.extend.mbox.MboxConfig-class.html#_setMboxFiles CedarBackup2.extend.mbox.MboxConfig.compressMode CedarBackup2.extend.mbox.MboxConfig-class.html#compressMode CedarBackup2.extend.mbox.MboxConfig._getMboxDirs CedarBackup2.extend.mbox.MboxConfig-class.html#_getMboxDirs CedarBackup2.extend.mbox.MboxConfig._setCompressMode CedarBackup2.extend.mbox.MboxConfig-class.html#_setCompressMode CedarBackup2.extend.mbox.MboxConfig._setMboxDirs CedarBackup2.extend.mbox.MboxConfig-class.html#_setMboxDirs CedarBackup2.extend.mbox.MboxConfig.mboxDirs CedarBackup2.extend.mbox.MboxConfig-class.html#mboxDirs CedarBackup2.extend.mbox.MboxConfig.collectMode CedarBackup2.extend.mbox.MboxConfig-class.html#collectMode CedarBackup2.extend.mbox.MboxConfig._getCompressMode 
CedarBackup2.extend.mbox.MboxConfig-class.html#_getCompressMode CedarBackup2.extend.mbox.MboxConfig.__repr__ CedarBackup2.extend.mbox.MboxConfig-class.html#__repr__ CedarBackup2.extend.mbox.MboxDir CedarBackup2.extend.mbox.MboxDir-class.html CedarBackup2.extend.mbox.MboxDir._getCollectMode CedarBackup2.extend.mbox.MboxDir-class.html#_getCollectMode CedarBackup2.extend.mbox.MboxDir._getCompressMode CedarBackup2.extend.mbox.MboxDir-class.html#_getCompressMode CedarBackup2.extend.mbox.MboxDir.__str__ CedarBackup2.extend.mbox.MboxDir-class.html#__str__ CedarBackup2.extend.mbox.MboxDir._getAbsolutePath CedarBackup2.extend.mbox.MboxDir-class.html#_getAbsolutePath CedarBackup2.extend.mbox.MboxDir._setExcludePatterns CedarBackup2.extend.mbox.MboxDir-class.html#_setExcludePatterns CedarBackup2.extend.mbox.MboxDir.__init__ CedarBackup2.extend.mbox.MboxDir-class.html#__init__ CedarBackup2.extend.mbox.MboxDir._setCollectMode CedarBackup2.extend.mbox.MboxDir-class.html#_setCollectMode CedarBackup2.extend.mbox.MboxDir.absolutePath CedarBackup2.extend.mbox.MboxDir-class.html#absolutePath CedarBackup2.extend.mbox.MboxDir.__cmp__ CedarBackup2.extend.mbox.MboxDir-class.html#__cmp__ CedarBackup2.extend.mbox.MboxDir.relativeExcludePaths CedarBackup2.extend.mbox.MboxDir-class.html#relativeExcludePaths CedarBackup2.extend.mbox.MboxDir.compressMode CedarBackup2.extend.mbox.MboxDir-class.html#compressMode CedarBackup2.extend.mbox.MboxDir._getRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir-class.html#_getRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir._setCompressMode CedarBackup2.extend.mbox.MboxDir-class.html#_setCompressMode CedarBackup2.extend.mbox.MboxDir._setRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir-class.html#_setRelativeExcludePaths CedarBackup2.extend.mbox.MboxDir.collectMode CedarBackup2.extend.mbox.MboxDir-class.html#collectMode CedarBackup2.extend.mbox.MboxDir._getExcludePatterns CedarBackup2.extend.mbox.MboxDir-class.html#_getExcludePatterns 
CedarBackup2.extend.mbox.MboxDir.excludePatterns CedarBackup2.extend.mbox.MboxDir-class.html#excludePatterns CedarBackup2.extend.mbox.MboxDir._setAbsolutePath CedarBackup2.extend.mbox.MboxDir-class.html#_setAbsolutePath CedarBackup2.extend.mbox.MboxDir.__repr__ CedarBackup2.extend.mbox.MboxDir-class.html#__repr__ CedarBackup2.extend.mbox.MboxFile CedarBackup2.extend.mbox.MboxFile-class.html CedarBackup2.extend.mbox.MboxFile._getCollectMode CedarBackup2.extend.mbox.MboxFile-class.html#_getCollectMode CedarBackup2.extend.mbox.MboxFile.__str__ CedarBackup2.extend.mbox.MboxFile-class.html#__str__ CedarBackup2.extend.mbox.MboxFile._getAbsolutePath CedarBackup2.extend.mbox.MboxFile-class.html#_getAbsolutePath CedarBackup2.extend.mbox.MboxFile.__init__ CedarBackup2.extend.mbox.MboxFile-class.html#__init__ CedarBackup2.extend.mbox.MboxFile._setCollectMode CedarBackup2.extend.mbox.MboxFile-class.html#_setCollectMode CedarBackup2.extend.mbox.MboxFile.absolutePath CedarBackup2.extend.mbox.MboxFile-class.html#absolutePath CedarBackup2.extend.mbox.MboxFile.__cmp__ CedarBackup2.extend.mbox.MboxFile-class.html#__cmp__ CedarBackup2.extend.mbox.MboxFile.compressMode CedarBackup2.extend.mbox.MboxFile-class.html#compressMode CedarBackup2.extend.mbox.MboxFile._setCompressMode CedarBackup2.extend.mbox.MboxFile-class.html#_setCompressMode CedarBackup2.extend.mbox.MboxFile.collectMode CedarBackup2.extend.mbox.MboxFile-class.html#collectMode CedarBackup2.extend.mbox.MboxFile._getCompressMode CedarBackup2.extend.mbox.MboxFile-class.html#_getCompressMode CedarBackup2.extend.mbox.MboxFile._setAbsolutePath CedarBackup2.extend.mbox.MboxFile-class.html#_setAbsolutePath CedarBackup2.extend.mbox.MboxFile.__repr__ CedarBackup2.extend.mbox.MboxFile-class.html#__repr__ CedarBackup2.extend.mysql.LocalConfig CedarBackup2.extend.mysql.LocalConfig-class.html CedarBackup2.extend.mysql.LocalConfig.__str__ CedarBackup2.extend.mysql.LocalConfig-class.html#__str__ CedarBackup2.extend.mysql.LocalConfig.mysql 
CedarBackup2.extend.mysql.LocalConfig-class.html#mysql CedarBackup2.extend.mysql.LocalConfig._parseMysql CedarBackup2.extend.mysql.LocalConfig-class.html#_parseMysql CedarBackup2.extend.mysql.LocalConfig.__init__ CedarBackup2.extend.mysql.LocalConfig-class.html#__init__ CedarBackup2.extend.mysql.LocalConfig.__cmp__ CedarBackup2.extend.mysql.LocalConfig-class.html#__cmp__ CedarBackup2.extend.mysql.LocalConfig._setMysql CedarBackup2.extend.mysql.LocalConfig-class.html#_setMysql CedarBackup2.extend.mysql.LocalConfig._parseXmlData CedarBackup2.extend.mysql.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.mysql.LocalConfig._getMysql CedarBackup2.extend.mysql.LocalConfig-class.html#_getMysql CedarBackup2.extend.mysql.LocalConfig.addConfig CedarBackup2.extend.mysql.LocalConfig-class.html#addConfig CedarBackup2.extend.mysql.LocalConfig.validate CedarBackup2.extend.mysql.LocalConfig-class.html#validate CedarBackup2.extend.mysql.LocalConfig.__repr__ CedarBackup2.extend.mysql.LocalConfig-class.html#__repr__ CedarBackup2.extend.mysql.MysqlConfig CedarBackup2.extend.mysql.MysqlConfig-class.html CedarBackup2.extend.mysql.MysqlConfig.all CedarBackup2.extend.mysql.MysqlConfig-class.html#all CedarBackup2.extend.mysql.MysqlConfig.__str__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__str__ CedarBackup2.extend.mysql.MysqlConfig._setAll CedarBackup2.extend.mysql.MysqlConfig-class.html#_setAll CedarBackup2.extend.mysql.MysqlConfig.__init__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__init__ CedarBackup2.extend.mysql.MysqlConfig._setDatabases CedarBackup2.extend.mysql.MysqlConfig-class.html#_setDatabases CedarBackup2.extend.mysql.MysqlConfig._getAll CedarBackup2.extend.mysql.MysqlConfig-class.html#_getAll CedarBackup2.extend.mysql.MysqlConfig.__cmp__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__cmp__ CedarBackup2.extend.mysql.MysqlConfig._setPassword CedarBackup2.extend.mysql.MysqlConfig-class.html#_setPassword CedarBackup2.extend.mysql.MysqlConfig._getUser 
CedarBackup2.extend.mysql.MysqlConfig-class.html#_getUser CedarBackup2.extend.mysql.MysqlConfig._setUser CedarBackup2.extend.mysql.MysqlConfig-class.html#_setUser CedarBackup2.extend.mysql.MysqlConfig.compressMode CedarBackup2.extend.mysql.MysqlConfig-class.html#compressMode CedarBackup2.extend.mysql.MysqlConfig._getPassword CedarBackup2.extend.mysql.MysqlConfig-class.html#_getPassword CedarBackup2.extend.mysql.MysqlConfig.user CedarBackup2.extend.mysql.MysqlConfig-class.html#user CedarBackup2.extend.mysql.MysqlConfig._setCompressMode CedarBackup2.extend.mysql.MysqlConfig-class.html#_setCompressMode CedarBackup2.extend.mysql.MysqlConfig.password CedarBackup2.extend.mysql.MysqlConfig-class.html#password CedarBackup2.extend.mysql.MysqlConfig._getCompressMode CedarBackup2.extend.mysql.MysqlConfig-class.html#_getCompressMode CedarBackup2.extend.mysql.MysqlConfig._getDatabases CedarBackup2.extend.mysql.MysqlConfig-class.html#_getDatabases CedarBackup2.extend.mysql.MysqlConfig.__repr__ CedarBackup2.extend.mysql.MysqlConfig-class.html#__repr__ CedarBackup2.extend.mysql.MysqlConfig.databases CedarBackup2.extend.mysql.MysqlConfig-class.html#databases CedarBackup2.extend.postgresql.LocalConfig CedarBackup2.extend.postgresql.LocalConfig-class.html CedarBackup2.extend.postgresql.LocalConfig.__str__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__str__ CedarBackup2.extend.postgresql.LocalConfig._parseXmlData CedarBackup2.extend.postgresql.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.postgresql.LocalConfig.__init__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__init__ CedarBackup2.extend.postgresql.LocalConfig._setPostgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#_setPostgresql CedarBackup2.extend.postgresql.LocalConfig.__cmp__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__cmp__ CedarBackup2.extend.postgresql.LocalConfig._parsePostgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#_parsePostgresql 
CedarBackup2.extend.postgresql.LocalConfig.addConfig CedarBackup2.extend.postgresql.LocalConfig-class.html#addConfig CedarBackup2.extend.postgresql.LocalConfig.validate CedarBackup2.extend.postgresql.LocalConfig-class.html#validate CedarBackup2.extend.postgresql.LocalConfig.postgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#postgresql CedarBackup2.extend.postgresql.LocalConfig._getPostgresql CedarBackup2.extend.postgresql.LocalConfig-class.html#_getPostgresql CedarBackup2.extend.postgresql.LocalConfig.__repr__ CedarBackup2.extend.postgresql.LocalConfig-class.html#__repr__ CedarBackup2.extend.postgresql.PostgresqlConfig CedarBackup2.extend.postgresql.PostgresqlConfig-class.html CedarBackup2.extend.postgresql.PostgresqlConfig.all CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#all CedarBackup2.extend.postgresql.PostgresqlConfig.__str__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__str__ CedarBackup2.extend.postgresql.PostgresqlConfig._setAll CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setAll CedarBackup2.extend.postgresql.PostgresqlConfig.__init__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__init__ CedarBackup2.extend.postgresql.PostgresqlConfig._setDatabases CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setDatabases CedarBackup2.extend.postgresql.PostgresqlConfig._getAll CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getAll CedarBackup2.extend.postgresql.PostgresqlConfig.__cmp__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__cmp__ CedarBackup2.extend.postgresql.PostgresqlConfig._getUser CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getUser CedarBackup2.extend.postgresql.PostgresqlConfig._setUser CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setUser CedarBackup2.extend.postgresql.PostgresqlConfig.compressMode CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#compressMode 
CedarBackup2.extend.postgresql.PostgresqlConfig.user CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#user CedarBackup2.extend.postgresql.PostgresqlConfig._setCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_setCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig._getCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getCompressMode CedarBackup2.extend.postgresql.PostgresqlConfig._getDatabases CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#_getDatabases CedarBackup2.extend.postgresql.PostgresqlConfig.__repr__ CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#__repr__ CedarBackup2.extend.postgresql.PostgresqlConfig.databases CedarBackup2.extend.postgresql.PostgresqlConfig-class.html#databases CedarBackup2.extend.split.LocalConfig CedarBackup2.extend.split.LocalConfig-class.html CedarBackup2.extend.split.LocalConfig.__str__ CedarBackup2.extend.split.LocalConfig-class.html#__str__ CedarBackup2.extend.split.LocalConfig._getSplit CedarBackup2.extend.split.LocalConfig-class.html#_getSplit CedarBackup2.extend.split.LocalConfig._parseXmlData CedarBackup2.extend.split.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.split.LocalConfig.__init__ CedarBackup2.extend.split.LocalConfig-class.html#__init__ CedarBackup2.extend.split.LocalConfig.__cmp__ CedarBackup2.extend.split.LocalConfig-class.html#__cmp__ CedarBackup2.extend.split.LocalConfig._setSplit CedarBackup2.extend.split.LocalConfig-class.html#_setSplit CedarBackup2.extend.split.LocalConfig.split CedarBackup2.extend.split.LocalConfig-class.html#split CedarBackup2.extend.split.LocalConfig.addConfig CedarBackup2.extend.split.LocalConfig-class.html#addConfig CedarBackup2.extend.split.LocalConfig.validate CedarBackup2.extend.split.LocalConfig-class.html#validate CedarBackup2.extend.split.LocalConfig.__repr__ CedarBackup2.extend.split.LocalConfig-class.html#__repr__ CedarBackup2.extend.split.LocalConfig._parseSplit 
CedarBackup2.extend.split.LocalConfig-class.html#_parseSplit CedarBackup2.extend.split.SplitConfig CedarBackup2.extend.split.SplitConfig-class.html CedarBackup2.extend.split.SplitConfig.splitSize CedarBackup2.extend.split.SplitConfig-class.html#splitSize CedarBackup2.extend.split.SplitConfig.__str__ CedarBackup2.extend.split.SplitConfig-class.html#__str__ CedarBackup2.extend.split.SplitConfig._setSplitSize CedarBackup2.extend.split.SplitConfig-class.html#_setSplitSize CedarBackup2.extend.split.SplitConfig._setSizeLimit CedarBackup2.extend.split.SplitConfig-class.html#_setSizeLimit CedarBackup2.extend.split.SplitConfig.__cmp__ CedarBackup2.extend.split.SplitConfig-class.html#__cmp__ CedarBackup2.extend.split.SplitConfig._getSplitSize CedarBackup2.extend.split.SplitConfig-class.html#_getSplitSize CedarBackup2.extend.split.SplitConfig.__repr__ CedarBackup2.extend.split.SplitConfig-class.html#__repr__ CedarBackup2.extend.split.SplitConfig.sizeLimit CedarBackup2.extend.split.SplitConfig-class.html#sizeLimit CedarBackup2.extend.split.SplitConfig._getSizeLimit CedarBackup2.extend.split.SplitConfig-class.html#_getSizeLimit CedarBackup2.extend.split.SplitConfig.__init__ CedarBackup2.extend.split.SplitConfig-class.html#__init__ CedarBackup2.extend.subversion.BDBRepository CedarBackup2.extend.subversion.BDBRepository-class.html CedarBackup2.extend.subversion.Repository._getCollectMode CedarBackup2.extend.subversion.Repository-class.html#_getCollectMode CedarBackup2.extend.subversion.Repository.__str__ CedarBackup2.extend.subversion.Repository-class.html#__str__ CedarBackup2.extend.subversion.BDBRepository.__init__ CedarBackup2.extend.subversion.BDBRepository-class.html#__init__ CedarBackup2.extend.subversion.Repository._setCollectMode CedarBackup2.extend.subversion.Repository-class.html#_setCollectMode CedarBackup2.extend.subversion.Repository.__cmp__ CedarBackup2.extend.subversion.Repository-class.html#__cmp__ CedarBackup2.extend.subversion.Repository._setRepositoryType 
CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup2.extend.subversion.Repository.repositoryType CedarBackup2.extend.subversion.Repository-class.html#repositoryType CedarBackup2.extend.subversion.Repository.compressMode CedarBackup2.extend.subversion.Repository-class.html#compressMode CedarBackup2.extend.subversion.Repository._setRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup2.extend.subversion.Repository._getRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup2.extend.subversion.Repository._setCompressMode CedarBackup2.extend.subversion.Repository-class.html#_setCompressMode CedarBackup2.extend.subversion.Repository.collectMode CedarBackup2.extend.subversion.Repository-class.html#collectMode CedarBackup2.extend.subversion.Repository._getCompressMode CedarBackup2.extend.subversion.Repository-class.html#_getCompressMode CedarBackup2.extend.subversion.Repository.repositoryPath CedarBackup2.extend.subversion.Repository-class.html#repositoryPath CedarBackup2.extend.subversion.BDBRepository.__repr__ CedarBackup2.extend.subversion.BDBRepository-class.html#__repr__ CedarBackup2.extend.subversion.Repository._getRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup2.extend.subversion.FSFSRepository CedarBackup2.extend.subversion.FSFSRepository-class.html CedarBackup2.extend.subversion.Repository._getCollectMode CedarBackup2.extend.subversion.Repository-class.html#_getCollectMode CedarBackup2.extend.subversion.Repository.__str__ CedarBackup2.extend.subversion.Repository-class.html#__str__ CedarBackup2.extend.subversion.FSFSRepository.__init__ CedarBackup2.extend.subversion.FSFSRepository-class.html#__init__ CedarBackup2.extend.subversion.Repository._setCollectMode CedarBackup2.extend.subversion.Repository-class.html#_setCollectMode CedarBackup2.extend.subversion.Repository.__cmp__ 
CedarBackup2.extend.subversion.Repository-class.html#__cmp__ CedarBackup2.extend.subversion.Repository._setRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup2.extend.subversion.Repository.repositoryType CedarBackup2.extend.subversion.Repository-class.html#repositoryType CedarBackup2.extend.subversion.Repository.compressMode CedarBackup2.extend.subversion.Repository-class.html#compressMode CedarBackup2.extend.subversion.Repository._setRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup2.extend.subversion.Repository._getRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup2.extend.subversion.Repository._setCompressMode CedarBackup2.extend.subversion.Repository-class.html#_setCompressMode CedarBackup2.extend.subversion.Repository.collectMode CedarBackup2.extend.subversion.Repository-class.html#collectMode CedarBackup2.extend.subversion.Repository._getCompressMode CedarBackup2.extend.subversion.Repository-class.html#_getCompressMode CedarBackup2.extend.subversion.Repository.repositoryPath CedarBackup2.extend.subversion.Repository-class.html#repositoryPath CedarBackup2.extend.subversion.FSFSRepository.__repr__ CedarBackup2.extend.subversion.FSFSRepository-class.html#__repr__ CedarBackup2.extend.subversion.Repository._getRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup2.extend.subversion.LocalConfig CedarBackup2.extend.subversion.LocalConfig-class.html CedarBackup2.extend.subversion.LocalConfig._getSubversion CedarBackup2.extend.subversion.LocalConfig-class.html#_getSubversion CedarBackup2.extend.subversion.LocalConfig.__str__ CedarBackup2.extend.subversion.LocalConfig-class.html#__str__ CedarBackup2.extend.subversion.LocalConfig._parseXmlData CedarBackup2.extend.subversion.LocalConfig-class.html#_parseXmlData CedarBackup2.extend.subversion.LocalConfig.__init__ 
CedarBackup2.extend.subversion.LocalConfig-class.html#__init__ CedarBackup2.extend.subversion.LocalConfig.__cmp__ CedarBackup2.extend.subversion.LocalConfig-class.html#__cmp__ CedarBackup2.extend.subversion.LocalConfig.subversion CedarBackup2.extend.subversion.LocalConfig-class.html#subversion CedarBackup2.extend.subversion.LocalConfig._parseRepositories CedarBackup2.extend.subversion.LocalConfig-class.html#_parseRepositories CedarBackup2.extend.subversion.LocalConfig._setSubversion CedarBackup2.extend.subversion.LocalConfig-class.html#_setSubversion CedarBackup2.extend.subversion.LocalConfig._parseSubversion CedarBackup2.extend.subversion.LocalConfig-class.html#_parseSubversion CedarBackup2.extend.subversion.LocalConfig.addConfig CedarBackup2.extend.subversion.LocalConfig-class.html#addConfig CedarBackup2.extend.subversion.LocalConfig.validate CedarBackup2.extend.subversion.LocalConfig-class.html#validate CedarBackup2.extend.subversion.LocalConfig._addRepository CedarBackup2.extend.subversion.LocalConfig-class.html#_addRepository CedarBackup2.extend.subversion.LocalConfig._parseExclusions CedarBackup2.extend.subversion.LocalConfig-class.html#_parseExclusions CedarBackup2.extend.subversion.LocalConfig.__repr__ CedarBackup2.extend.subversion.LocalConfig-class.html#__repr__ CedarBackup2.extend.subversion.LocalConfig._parseRepositoryDirs CedarBackup2.extend.subversion.LocalConfig-class.html#_parseRepositoryDirs CedarBackup2.extend.subversion.LocalConfig._addRepositoryDir CedarBackup2.extend.subversion.LocalConfig-class.html#_addRepositoryDir CedarBackup2.extend.subversion.Repository CedarBackup2.extend.subversion.Repository-class.html CedarBackup2.extend.subversion.Repository._getCollectMode CedarBackup2.extend.subversion.Repository-class.html#_getCollectMode CedarBackup2.extend.subversion.Repository.__str__ CedarBackup2.extend.subversion.Repository-class.html#__str__ CedarBackup2.extend.subversion.Repository.__init__ 
CedarBackup2.extend.subversion.Repository-class.html#__init__ CedarBackup2.extend.subversion.Repository._setRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryType CedarBackup2.extend.subversion.Repository.__cmp__ CedarBackup2.extend.subversion.Repository-class.html#__cmp__ CedarBackup2.extend.subversion.Repository._setCollectMode CedarBackup2.extend.subversion.Repository-class.html#_setCollectMode CedarBackup2.extend.subversion.Repository.repositoryType CedarBackup2.extend.subversion.Repository-class.html#repositoryType CedarBackup2.extend.subversion.Repository.compressMode CedarBackup2.extend.subversion.Repository-class.html#compressMode CedarBackup2.extend.subversion.Repository._setRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_setRepositoryPath CedarBackup2.extend.subversion.Repository._getRepositoryType CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryType CedarBackup2.extend.subversion.Repository._setCompressMode CedarBackup2.extend.subversion.Repository-class.html#_setCompressMode CedarBackup2.extend.subversion.Repository.collectMode CedarBackup2.extend.subversion.Repository-class.html#collectMode CedarBackup2.extend.subversion.Repository._getCompressMode CedarBackup2.extend.subversion.Repository-class.html#_getCompressMode CedarBackup2.extend.subversion.Repository.repositoryPath CedarBackup2.extend.subversion.Repository-class.html#repositoryPath CedarBackup2.extend.subversion.Repository.__repr__ CedarBackup2.extend.subversion.Repository-class.html#__repr__ CedarBackup2.extend.subversion.Repository._getRepositoryPath CedarBackup2.extend.subversion.Repository-class.html#_getRepositoryPath CedarBackup2.extend.subversion.RepositoryDir CedarBackup2.extend.subversion.RepositoryDir-class.html CedarBackup2.extend.subversion.RepositoryDir.directoryPath CedarBackup2.extend.subversion.RepositoryDir-class.html#directoryPath CedarBackup2.extend.subversion.RepositoryDir._getCollectMode 
CedarBackup2.extend.subversion.RepositoryDir-class.html#_getCollectMode CedarBackup2.extend.subversion.RepositoryDir._getCompressMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_getCompressMode CedarBackup2.extend.subversion.RepositoryDir.repositoryType CedarBackup2.extend.subversion.RepositoryDir-class.html#repositoryType CedarBackup2.extend.subversion.RepositoryDir._setExcludePatterns CedarBackup2.extend.subversion.RepositoryDir-class.html#_setExcludePatterns CedarBackup2.extend.subversion.RepositoryDir.__init__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__init__ CedarBackup2.extend.subversion.RepositoryDir._setRepositoryType CedarBackup2.extend.subversion.RepositoryDir-class.html#_setRepositoryType CedarBackup2.extend.subversion.RepositoryDir.__cmp__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__cmp__ CedarBackup2.extend.subversion.RepositoryDir._setCollectMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_setCollectMode CedarBackup2.extend.subversion.RepositoryDir.__str__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__str__ CedarBackup2.extend.subversion.RepositoryDir.relativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir-class.html#relativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir.compressMode CedarBackup2.extend.subversion.RepositoryDir-class.html#compressMode CedarBackup2.extend.subversion.RepositoryDir._getRepositoryType CedarBackup2.extend.subversion.RepositoryDir-class.html#_getRepositoryType CedarBackup2.extend.subversion.RepositoryDir._getRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir-class.html#_getRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir._setDirectoryPath CedarBackup2.extend.subversion.RepositoryDir-class.html#_setDirectoryPath CedarBackup2.extend.subversion.RepositoryDir._setCompressMode CedarBackup2.extend.subversion.RepositoryDir-class.html#_setCompressMode 
CedarBackup2.extend.subversion.RepositoryDir._setRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir-class.html#_setRelativeExcludePaths CedarBackup2.extend.subversion.RepositoryDir.collectMode CedarBackup2.extend.subversion.RepositoryDir-class.html#collectMode CedarBackup2.extend.subversion.RepositoryDir._getExcludePatterns CedarBackup2.extend.subversion.RepositoryDir-class.html#_getExcludePatterns CedarBackup2.extend.subversion.RepositoryDir.excludePatterns CedarBackup2.extend.subversion.RepositoryDir-class.html#excludePatterns CedarBackup2.extend.subversion.RepositoryDir.__repr__ CedarBackup2.extend.subversion.RepositoryDir-class.html#__repr__ CedarBackup2.extend.subversion.RepositoryDir._getDirectoryPath CedarBackup2.extend.subversion.RepositoryDir-class.html#_getDirectoryPath CedarBackup2.extend.subversion.SubversionConfig CedarBackup2.extend.subversion.SubversionConfig-class.html CedarBackup2.extend.subversion.SubversionConfig._getCollectMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_getCollectMode CedarBackup2.extend.subversion.SubversionConfig._getCompressMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_getCompressMode CedarBackup2.extend.subversion.SubversionConfig.__str__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__str__ CedarBackup2.extend.subversion.SubversionConfig._getRepositories CedarBackup2.extend.subversion.SubversionConfig-class.html#_getRepositories CedarBackup2.extend.subversion.SubversionConfig.__init__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__init__ CedarBackup2.extend.subversion.SubversionConfig._setCollectMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_setCollectMode CedarBackup2.extend.subversion.SubversionConfig.__cmp__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__cmp__ CedarBackup2.extend.subversion.SubversionConfig.repositoryDirs CedarBackup2.extend.subversion.SubversionConfig-class.html#repositoryDirs 
CedarBackup2.extend.subversion.SubversionConfig.compressMode CedarBackup2.extend.subversion.SubversionConfig-class.html#compressMode CedarBackup2.extend.subversion.SubversionConfig._setCompressMode CedarBackup2.extend.subversion.SubversionConfig-class.html#_setCompressMode CedarBackup2.extend.subversion.SubversionConfig._getRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig-class.html#_getRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig.collectMode CedarBackup2.extend.subversion.SubversionConfig-class.html#collectMode CedarBackup2.extend.subversion.SubversionConfig.repositories CedarBackup2.extend.subversion.SubversionConfig-class.html#repositories CedarBackup2.extend.subversion.SubversionConfig._setRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig-class.html#_setRepositoryDirs CedarBackup2.extend.subversion.SubversionConfig.__repr__ CedarBackup2.extend.subversion.SubversionConfig-class.html#__repr__ CedarBackup2.extend.subversion.SubversionConfig._setRepositories CedarBackup2.extend.subversion.SubversionConfig-class.html#_setRepositories CedarBackup2.filesystem.BackupFileList CedarBackup2.filesystem.BackupFileList-class.html CedarBackup2.filesystem.FilesystemList._addDirContentsInternal CedarBackup2.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup2.filesystem.BackupFileList.removeUnchanged CedarBackup2.filesystem.BackupFileList-class.html#removeUnchanged CedarBackup2.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup2.filesystem.BackupFileList.generateFitted CedarBackup2.filesystem.BackupFileList-class.html#generateFitted CedarBackup2.filesystem.FilesystemList.addDirContents CedarBackup2.filesystem.FilesystemList-class.html#addDirContents CedarBackup2.filesystem.FilesystemList._getExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePatterns 
CedarBackup2.filesystem.FilesystemList.excludePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludePatterns CedarBackup2.filesystem.FilesystemList._setExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup2.filesystem.BackupFileList.generateSizeMap CedarBackup2.filesystem.BackupFileList-class.html#generateSizeMap CedarBackup2.filesystem.FilesystemList.ignoreFile CedarBackup2.filesystem.FilesystemList-class.html#ignoreFile CedarBackup2.filesystem.BackupFileList.totalSize CedarBackup2.filesystem.BackupFileList-class.html#totalSize CedarBackup2.filesystem.BackupFileList.addDir CedarBackup2.filesystem.BackupFileList-class.html#addDir CedarBackup2.filesystem.FilesystemList.removeFiles CedarBackup2.filesystem.FilesystemList-class.html#removeFiles CedarBackup2.filesystem.FilesystemList.removeLinks CedarBackup2.filesystem.FilesystemList-class.html#removeLinks CedarBackup2.filesystem.BackupFileList.generateTarfile CedarBackup2.filesystem.BackupFileList-class.html#generateTarfile CedarBackup2.filesystem.FilesystemList.removeMatch CedarBackup2.filesystem.FilesystemList-class.html#removeMatch CedarBackup2.filesystem.FilesystemList.excludeLinks CedarBackup2.filesystem.FilesystemList-class.html#excludeLinks CedarBackup2.filesystem.FilesystemList._getExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup2.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup2.filesystem.BackupFileList._getKnapsackFunction CedarBackup2.filesystem.BackupFileList-class.html#_getKnapsackFunction CedarBackup2.filesystem.FilesystemList._setIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup2.filesystem.FilesystemList._getIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup2.filesystem.FilesystemList.addFile CedarBackup2.filesystem.FilesystemList-class.html#addFile 
CedarBackup2.filesystem.BackupFileList.generateDigestMap CedarBackup2.filesystem.BackupFileList-class.html#generateDigestMap CedarBackup2.filesystem.FilesystemList.removeInvalid CedarBackup2.filesystem.FilesystemList-class.html#removeInvalid CedarBackup2.filesystem.FilesystemList._setExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup2.filesystem.FilesystemList.removeDirs CedarBackup2.filesystem.FilesystemList-class.html#removeDirs CedarBackup2.filesystem.BackupFileList.__init__ CedarBackup2.filesystem.BackupFileList-class.html#__init__ CedarBackup2.filesystem.FilesystemList.normalize CedarBackup2.filesystem.FilesystemList-class.html#normalize CedarBackup2.filesystem.FilesystemList.excludeFiles CedarBackup2.filesystem.FilesystemList-class.html#excludeFiles CedarBackup2.filesystem.FilesystemList._getExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup2.filesystem.FilesystemList.verify CedarBackup2.filesystem.FilesystemList-class.html#verify CedarBackup2.filesystem.FilesystemList.excludeDirs CedarBackup2.filesystem.FilesystemList-class.html#excludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup2.filesystem.BackupFileList.generateSpan CedarBackup2.filesystem.BackupFileList-class.html#generateSpan CedarBackup2.filesystem.FilesystemList._getExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup2.filesystem.FilesystemList._setExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup2.filesystem.BackupFileList._getKnapsackTable CedarBackup2.filesystem.BackupFileList-class.html#_getKnapsackTable CedarBackup2.filesystem.FilesystemList._setExcludeLinks 
CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup2.filesystem.FilesystemList.excludePaths CedarBackup2.filesystem.FilesystemList-class.html#excludePaths CedarBackup2.filesystem.BackupFileList._generateDigest CedarBackup2.filesystem.BackupFileList-class.html#_generateDigest CedarBackup2.filesystem.FilesystemList._getExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup2.filesystem.FilesystemList CedarBackup2.filesystem.FilesystemList-class.html CedarBackup2.filesystem.FilesystemList._setExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup2.filesystem.FilesystemList._addDirContentsInternal CedarBackup2.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup2.filesystem.FilesystemList.removeInvalid CedarBackup2.filesystem.FilesystemList-class.html#removeInvalid CedarBackup2.filesystem.FilesystemList.excludeLinks CedarBackup2.filesystem.FilesystemList-class.html#excludeLinks CedarBackup2.filesystem.FilesystemList._getExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup2.filesystem.FilesystemList._setExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup2.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeDirs CedarBackup2.filesystem.FilesystemList-class.html#removeDirs CedarBackup2.filesystem.FilesystemList.__init__ CedarBackup2.filesystem.FilesystemList-class.html#__init__ CedarBackup2.filesystem.FilesystemList.normalize CedarBackup2.filesystem.FilesystemList-class.html#normalize CedarBackup2.filesystem.FilesystemList.excludeFiles CedarBackup2.filesystem.FilesystemList-class.html#excludeFiles CedarBackup2.filesystem.FilesystemList._getExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeLinks 
CedarBackup2.filesystem.FilesystemList.verify CedarBackup2.filesystem.FilesystemList-class.html#verify CedarBackup2.filesystem.FilesystemList.addDir CedarBackup2.filesystem.FilesystemList-class.html#addDir CedarBackup2.filesystem.FilesystemList._setIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup2.filesystem.FilesystemList.removeFiles CedarBackup2.filesystem.FilesystemList-class.html#removeFiles CedarBackup2.filesystem.FilesystemList.excludeDirs CedarBackup2.filesystem.FilesystemList-class.html#excludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup2.filesystem.FilesystemList.ignoreFile CedarBackup2.filesystem.FilesystemList-class.html#ignoreFile CedarBackup2.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeLinks CedarBackup2.filesystem.FilesystemList-class.html#removeLinks CedarBackup2.filesystem.FilesystemList._getExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup2.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList._setExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup2.filesystem.FilesystemList._getIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup2.filesystem.FilesystemList._setExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup2.filesystem.FilesystemList.addDirContents CedarBackup2.filesystem.FilesystemList-class.html#addDirContents CedarBackup2.filesystem.FilesystemList.excludePaths CedarBackup2.filesystem.FilesystemList-class.html#excludePaths CedarBackup2.filesystem.FilesystemList.addFile 
CedarBackup2.filesystem.FilesystemList-class.html#addFile CedarBackup2.filesystem.FilesystemList._getExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup2.filesystem.FilesystemList.excludePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludePatterns CedarBackup2.filesystem.FilesystemList.removeMatch CedarBackup2.filesystem.FilesystemList-class.html#removeMatch CedarBackup2.filesystem.FilesystemList._getExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup2.filesystem.PurgeItemList CedarBackup2.filesystem.PurgeItemList-class.html CedarBackup2.filesystem.FilesystemList._setExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeFiles CedarBackup2.filesystem.FilesystemList._addDirContentsInternal CedarBackup2.filesystem.FilesystemList-class.html#_addDirContentsInternal CedarBackup2.filesystem.FilesystemList.removeInvalid CedarBackup2.filesystem.FilesystemList-class.html#removeInvalid CedarBackup2.filesystem.FilesystemList.excludeLinks CedarBackup2.filesystem.FilesystemList-class.html#excludeLinks CedarBackup2.filesystem.FilesystemList._getExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeDirs CedarBackup2.filesystem.FilesystemList._setExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePatterns CedarBackup2.filesystem.FilesystemList.excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeDirs CedarBackup2.filesystem.FilesystemList-class.html#removeDirs CedarBackup2.filesystem.PurgeItemList.__init__ CedarBackup2.filesystem.PurgeItemList-class.html#__init__ CedarBackup2.filesystem.FilesystemList.normalize CedarBackup2.filesystem.FilesystemList-class.html#normalize CedarBackup2.filesystem.FilesystemList.excludeFiles CedarBackup2.filesystem.FilesystemList-class.html#excludeFiles 
CedarBackup2.filesystem.FilesystemList._getExcludeLinks CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeLinks CedarBackup2.filesystem.FilesystemList.verify CedarBackup2.filesystem.FilesystemList-class.html#verify CedarBackup2.filesystem.FilesystemList.addDir CedarBackup2.filesystem.FilesystemList-class.html#addDir CedarBackup2.filesystem.FilesystemList._setIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_setIgnoreFile CedarBackup2.filesystem.FilesystemList.removeFiles CedarBackup2.filesystem.FilesystemList-class.html#removeFiles CedarBackup2.filesystem.FilesystemList.excludeDirs CedarBackup2.filesystem.FilesystemList-class.html#excludeDirs CedarBackup2.filesystem.FilesystemList._setExcludeDirs CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeDirs CedarBackup2.filesystem.PurgeItemList.removeYoungFiles CedarBackup2.filesystem.PurgeItemList-class.html#removeYoungFiles CedarBackup2.filesystem.FilesystemList.ignoreFile CedarBackup2.filesystem.FilesystemList-class.html#ignoreFile CedarBackup2.filesystem.FilesystemList._setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList.removeLinks CedarBackup2.filesystem.FilesystemList-class.html#removeLinks CedarBackup2.filesystem.PurgeItemList.purgeItems CedarBackup2.filesystem.PurgeItemList-class.html#purgeItems CedarBackup2.filesystem.FilesystemList._getExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePaths CedarBackup2.filesystem.FilesystemList._getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeBasenamePatterns CedarBackup2.filesystem.FilesystemList._setExcludePaths CedarBackup2.filesystem.FilesystemList-class.html#_setExcludePaths CedarBackup2.filesystem.FilesystemList._getIgnoreFile CedarBackup2.filesystem.FilesystemList-class.html#_getIgnoreFile CedarBackup2.filesystem.FilesystemList._setExcludeLinks 
CedarBackup2.filesystem.FilesystemList-class.html#_setExcludeLinks CedarBackup2.filesystem.FilesystemList.excludePaths CedarBackup2.filesystem.FilesystemList-class.html#excludePaths CedarBackup2.filesystem.PurgeItemList.addDirContents CedarBackup2.filesystem.PurgeItemList-class.html#addDirContents CedarBackup2.filesystem.FilesystemList.addFile CedarBackup2.filesystem.FilesystemList-class.html#addFile CedarBackup2.filesystem.FilesystemList._getExcludePatterns CedarBackup2.filesystem.FilesystemList-class.html#_getExcludePatterns CedarBackup2.filesystem.FilesystemList.excludePatterns CedarBackup2.filesystem.FilesystemList-class.html#excludePatterns CedarBackup2.filesystem.FilesystemList.removeMatch CedarBackup2.filesystem.FilesystemList-class.html#removeMatch CedarBackup2.filesystem.FilesystemList._getExcludeFiles CedarBackup2.filesystem.FilesystemList-class.html#_getExcludeFiles CedarBackup2.filesystem.SpanItem CedarBackup2.filesystem.SpanItem-class.html CedarBackup2.filesystem.SpanItem.__init__ CedarBackup2.filesystem.SpanItem-class.html#__init__ CedarBackup2.peer.LocalPeer CedarBackup2.peer.LocalPeer-class.html CedarBackup2.peer.LocalPeer._copyLocalFile CedarBackup2.peer.LocalPeer-class.html#_copyLocalFile CedarBackup2.peer.LocalPeer._setIgnoreFailureMode CedarBackup2.peer.LocalPeer-class.html#_setIgnoreFailureMode CedarBackup2.peer.LocalPeer._getName CedarBackup2.peer.LocalPeer-class.html#_getName CedarBackup2.peer.LocalPeer.__init__ CedarBackup2.peer.LocalPeer-class.html#__init__ CedarBackup2.peer.LocalPeer.checkCollectIndicator CedarBackup2.peer.LocalPeer-class.html#checkCollectIndicator CedarBackup2.peer.LocalPeer.writeStageIndicator CedarBackup2.peer.LocalPeer-class.html#writeStageIndicator CedarBackup2.peer.LocalPeer._getIgnoreFailureMode CedarBackup2.peer.LocalPeer-class.html#_getIgnoreFailureMode CedarBackup2.peer.LocalPeer._copyLocalDir CedarBackup2.peer.LocalPeer-class.html#_copyLocalDir CedarBackup2.peer.LocalPeer.ignoreFailureMode 
CedarBackup2.peer.LocalPeer-class.html#ignoreFailureMode CedarBackup2.peer.LocalPeer._getCollectDir CedarBackup2.peer.LocalPeer-class.html#_getCollectDir CedarBackup2.peer.LocalPeer.name CedarBackup2.peer.LocalPeer-class.html#name CedarBackup2.peer.LocalPeer.collectDir CedarBackup2.peer.LocalPeer-class.html#collectDir CedarBackup2.peer.LocalPeer._setCollectDir CedarBackup2.peer.LocalPeer-class.html#_setCollectDir CedarBackup2.peer.LocalPeer.stagePeer CedarBackup2.peer.LocalPeer-class.html#stagePeer CedarBackup2.peer.LocalPeer._setName CedarBackup2.peer.LocalPeer-class.html#_setName CedarBackup2.peer.RemotePeer CedarBackup2.peer.RemotePeer-class.html CedarBackup2.peer.RemotePeer._getWorkingDir CedarBackup2.peer.RemotePeer-class.html#_getWorkingDir CedarBackup2.peer.RemotePeer._setLocalUser CedarBackup2.peer.RemotePeer-class.html#_setLocalUser CedarBackup2.peer.RemotePeer._getLocalUser CedarBackup2.peer.RemotePeer-class.html#_getLocalUser CedarBackup2.peer.RemotePeer._getRcpCommand CedarBackup2.peer.RemotePeer-class.html#_getRcpCommand CedarBackup2.peer.RemotePeer._copyRemoteFile CedarBackup2.peer.RemotePeer-class.html#_copyRemoteFile CedarBackup2.peer.RemotePeer._buildCbackCommand CedarBackup2.peer.RemotePeer-class.html#_buildCbackCommand CedarBackup2.peer.RemotePeer.cbackCommand CedarBackup2.peer.RemotePeer-class.html#cbackCommand CedarBackup2.peer.RemotePeer._setIgnoreFailureMode CedarBackup2.peer.RemotePeer-class.html#_setIgnoreFailureMode CedarBackup2.peer.RemotePeer.localUser CedarBackup2.peer.RemotePeer-class.html#localUser CedarBackup2.peer.RemotePeer.executeRemoteCommand CedarBackup2.peer.RemotePeer-class.html#executeRemoteCommand CedarBackup2.peer.RemotePeer._getName CedarBackup2.peer.RemotePeer-class.html#_getName CedarBackup2.peer.RemotePeer.__init__ CedarBackup2.peer.RemotePeer-class.html#__init__ CedarBackup2.peer.RemotePeer.writeStageIndicator CedarBackup2.peer.RemotePeer-class.html#writeStageIndicator CedarBackup2.peer.RemotePeer._setCbackCommand 
CedarBackup2.peer.RemotePeer-class.html#_setCbackCommand CedarBackup2.peer.RemotePeer._getCbackCommand CedarBackup2.peer.RemotePeer-class.html#_getCbackCommand CedarBackup2.peer.RemotePeer.remoteUser CedarBackup2.peer.RemotePeer-class.html#remoteUser CedarBackup2.peer.RemotePeer.workingDir CedarBackup2.peer.RemotePeer-class.html#workingDir CedarBackup2.peer.RemotePeer.checkCollectIndicator CedarBackup2.peer.RemotePeer-class.html#checkCollectIndicator CedarBackup2.peer.RemotePeer._getDirContents CedarBackup2.peer.RemotePeer-class.html#_getDirContents CedarBackup2.peer.RemotePeer._copyRemoteDir CedarBackup2.peer.RemotePeer-class.html#_copyRemoteDir CedarBackup2.peer.RemotePeer.executeManagedAction CedarBackup2.peer.RemotePeer-class.html#executeManagedAction CedarBackup2.peer.RemotePeer._getIgnoreFailureMode CedarBackup2.peer.RemotePeer-class.html#_getIgnoreFailureMode CedarBackup2.peer.RemotePeer.ignoreFailureMode CedarBackup2.peer.RemotePeer-class.html#ignoreFailureMode CedarBackup2.peer.RemotePeer._setWorkingDir CedarBackup2.peer.RemotePeer-class.html#_setWorkingDir CedarBackup2.peer.RemotePeer.rcpCommand CedarBackup2.peer.RemotePeer-class.html#rcpCommand CedarBackup2.peer.RemotePeer.rshCommand CedarBackup2.peer.RemotePeer-class.html#rshCommand CedarBackup2.peer.RemotePeer.name CedarBackup2.peer.RemotePeer-class.html#name CedarBackup2.peer.RemotePeer._getCollectDir CedarBackup2.peer.RemotePeer-class.html#_getCollectDir CedarBackup2.peer.RemotePeer._setRemoteUser CedarBackup2.peer.RemotePeer-class.html#_setRemoteUser CedarBackup2.peer.RemotePeer._setRcpCommand CedarBackup2.peer.RemotePeer-class.html#_setRcpCommand CedarBackup2.peer.RemotePeer._executeRemoteCommand CedarBackup2.peer.RemotePeer-class.html#_executeRemoteCommand CedarBackup2.peer.RemotePeer.collectDir CedarBackup2.peer.RemotePeer-class.html#collectDir CedarBackup2.peer.RemotePeer._setCollectDir CedarBackup2.peer.RemotePeer-class.html#_setCollectDir CedarBackup2.peer.RemotePeer._getRemoteUser 
CedarBackup2.peer.RemotePeer-class.html#_getRemoteUser CedarBackup2.peer.RemotePeer.stagePeer CedarBackup2.peer.RemotePeer-class.html#stagePeer CedarBackup2.peer.RemotePeer._pushLocalFile CedarBackup2.peer.RemotePeer-class.html#_pushLocalFile CedarBackup2.peer.RemotePeer._setName CedarBackup2.peer.RemotePeer-class.html#_setName CedarBackup2.peer.RemotePeer._getRshCommand CedarBackup2.peer.RemotePeer-class.html#_getRshCommand CedarBackup2.peer.RemotePeer._setRshCommand CedarBackup2.peer.RemotePeer-class.html#_setRshCommand CedarBackup2.tools.amazons3.Options CedarBackup2.tools.amazons3.Options-class.html CedarBackup2.tools.amazons3.Options._getMode CedarBackup2.tools.amazons3.Options-class.html#_getMode CedarBackup2.tools.amazons3.Options.stacktrace CedarBackup2.tools.amazons3.Options-class.html#stacktrace CedarBackup2.tools.amazons3.Options.help CedarBackup2.tools.amazons3.Options-class.html#help CedarBackup2.tools.amazons3.Options.__str__ CedarBackup2.tools.amazons3.Options-class.html#__str__ CedarBackup2.tools.amazons3.Options._setS3BucketUrl CedarBackup2.tools.amazons3.Options-class.html#_setS3BucketUrl CedarBackup2.tools.amazons3.Options._setStacktrace CedarBackup2.tools.amazons3.Options-class.html#_setStacktrace CedarBackup2.tools.amazons3.Options.verifyOnly CedarBackup2.tools.amazons3.Options-class.html#verifyOnly CedarBackup2.tools.amazons3.Options.owner CedarBackup2.tools.amazons3.Options-class.html#owner CedarBackup2.tools.amazons3.Options._setQuiet CedarBackup2.tools.amazons3.Options-class.html#_setQuiet CedarBackup2.tools.amazons3.Options._setVersion CedarBackup2.tools.amazons3.Options-class.html#_setVersion CedarBackup2.tools.amazons3.Options._setSourceDir CedarBackup2.tools.amazons3.Options-class.html#_setSourceDir CedarBackup2.tools.amazons3.Options._getVerbose CedarBackup2.tools.amazons3.Options-class.html#_getVerbose CedarBackup2.tools.amazons3.Options.verbose CedarBackup2.tools.amazons3.Options-class.html#verbose 
CedarBackup2.tools.amazons3.Options._setHelp CedarBackup2.tools.amazons3.Options-class.html#_setHelp CedarBackup2.tools.amazons3.Options._getVerifyOnly CedarBackup2.tools.amazons3.Options-class.html#_getVerifyOnly CedarBackup2.tools.amazons3.Options._getDebug CedarBackup2.tools.amazons3.Options-class.html#_getDebug CedarBackup2.tools.amazons3.Options.sourceDir CedarBackup2.tools.amazons3.Options-class.html#sourceDir CedarBackup2.tools.amazons3.Options._parseArgumentList CedarBackup2.tools.amazons3.Options-class.html#_parseArgumentList CedarBackup2.tools.amazons3.Options.buildArgumentList CedarBackup2.tools.amazons3.Options-class.html#buildArgumentList CedarBackup2.tools.amazons3.Options.__cmp__ CedarBackup2.tools.amazons3.Options-class.html#__cmp__ CedarBackup2.tools.amazons3.Options._getStacktrace CedarBackup2.tools.amazons3.Options-class.html#_getStacktrace CedarBackup2.tools.amazons3.Options._setOwner CedarBackup2.tools.amazons3.Options-class.html#_setOwner CedarBackup2.tools.amazons3.Options._setMode CedarBackup2.tools.amazons3.Options-class.html#_setMode CedarBackup2.tools.amazons3.Options.__init__ CedarBackup2.tools.amazons3.Options-class.html#__init__ CedarBackup2.tools.amazons3.Options._getQuiet CedarBackup2.tools.amazons3.Options-class.html#_getQuiet CedarBackup2.tools.amazons3.Options.mode CedarBackup2.tools.amazons3.Options-class.html#mode CedarBackup2.tools.amazons3.Options._getVersion CedarBackup2.tools.amazons3.Options-class.html#_getVersion CedarBackup2.tools.amazons3.Options._getLogfile CedarBackup2.tools.amazons3.Options-class.html#_getLogfile CedarBackup2.tools.amazons3.Options._setOutput CedarBackup2.tools.amazons3.Options-class.html#_setOutput CedarBackup2.tools.amazons3.Options.version CedarBackup2.tools.amazons3.Options-class.html#version CedarBackup2.tools.amazons3.Options._setVerifyOnly CedarBackup2.tools.amazons3.Options-class.html#_setVerifyOnly CedarBackup2.tools.amazons3.Options.debug CedarBackup2.tools.amazons3.Options-class.html#debug 
CedarBackup2.tools.amazons3.Options.ignoreWarnings CedarBackup2.tools.amazons3.Options-class.html#ignoreWarnings CedarBackup2.tools.amazons3.Options._setDiagnostics CedarBackup2.tools.amazons3.Options-class.html#_setDiagnostics CedarBackup2.tools.amazons3.Options.validate CedarBackup2.tools.amazons3.Options-class.html#validate CedarBackup2.tools.amazons3.Options.logfile CedarBackup2.tools.amazons3.Options-class.html#logfile CedarBackup2.tools.amazons3.Options.buildArgumentString CedarBackup2.tools.amazons3.Options-class.html#buildArgumentString CedarBackup2.tools.amazons3.Options._setDebug CedarBackup2.tools.amazons3.Options-class.html#_setDebug CedarBackup2.tools.amazons3.Options._setIgnoreWarnings CedarBackup2.tools.amazons3.Options-class.html#_setIgnoreWarnings CedarBackup2.tools.amazons3.Options._getSourceDir CedarBackup2.tools.amazons3.Options-class.html#_getSourceDir CedarBackup2.tools.amazons3.Options._getOwner CedarBackup2.tools.amazons3.Options-class.html#_getOwner CedarBackup2.tools.amazons3.Options.s3BucketUrl CedarBackup2.tools.amazons3.Options-class.html#s3BucketUrl CedarBackup2.tools.amazons3.Options._getHelp CedarBackup2.tools.amazons3.Options-class.html#_getHelp CedarBackup2.tools.amazons3.Options._setLogfile CedarBackup2.tools.amazons3.Options-class.html#_setLogfile CedarBackup2.tools.amazons3.Options.quiet CedarBackup2.tools.amazons3.Options-class.html#quiet CedarBackup2.tools.amazons3.Options.__repr__ CedarBackup2.tools.amazons3.Options-class.html#__repr__ CedarBackup2.tools.amazons3.Options.diagnostics CedarBackup2.tools.amazons3.Options-class.html#diagnostics CedarBackup2.tools.amazons3.Options._getDiagnostics CedarBackup2.tools.amazons3.Options-class.html#_getDiagnostics CedarBackup2.tools.amazons3.Options.output CedarBackup2.tools.amazons3.Options-class.html#output CedarBackup2.tools.amazons3.Options._setVerbose CedarBackup2.tools.amazons3.Options-class.html#_setVerbose CedarBackup2.tools.amazons3.Options._getOutput 
CedarBackup2.tools.amazons3.Options-class.html#_getOutput CedarBackup2.tools.amazons3.Options._getIgnoreWarnings CedarBackup2.tools.amazons3.Options-class.html#_getIgnoreWarnings CedarBackup2.tools.amazons3.Options._getS3BucketUrl CedarBackup2.tools.amazons3.Options-class.html#_getS3BucketUrl CedarBackup2.tools.span.SpanOptions CedarBackup2.tools.span.SpanOptions-class.html CedarBackup2.cli.Options._getMode CedarBackup2.cli.Options-class.html#_getMode CedarBackup2.cli.Options.stacktrace CedarBackup2.cli.Options-class.html#stacktrace CedarBackup2.cli.Options.managed CedarBackup2.cli.Options-class.html#managed CedarBackup2.cli.Options.help CedarBackup2.cli.Options-class.html#help CedarBackup2.cli.Options._getFull CedarBackup2.cli.Options-class.html#_getFull CedarBackup2.cli.Options.__str__ CedarBackup2.cli.Options-class.html#__str__ CedarBackup2.cli.Options._setStacktrace CedarBackup2.cli.Options-class.html#_setStacktrace CedarBackup2.cli.Options.actions CedarBackup2.cli.Options-class.html#actions CedarBackup2.cli.Options.owner CedarBackup2.cli.Options-class.html#owner CedarBackup2.cli.Options._setQuiet CedarBackup2.cli.Options-class.html#_setQuiet CedarBackup2.cli.Options._setVersion CedarBackup2.cli.Options-class.html#_setVersion CedarBackup2.cli.Options._getVerbose CedarBackup2.cli.Options-class.html#_getVerbose CedarBackup2.cli.Options.verbose CedarBackup2.cli.Options-class.html#verbose CedarBackup2.cli.Options._setHelp CedarBackup2.cli.Options-class.html#_setHelp CedarBackup2.cli.Options._getDiagnostics CedarBackup2.cli.Options-class.html#_getDiagnostics CedarBackup2.cli.Options._getDebug CedarBackup2.cli.Options-class.html#_getDebug CedarBackup2.cli.Options._parseArgumentList CedarBackup2.cli.Options-class.html#_parseArgumentList CedarBackup2.cli.Options.buildArgumentList CedarBackup2.cli.Options-class.html#buildArgumentList CedarBackup2.cli.Options._getManagedOnly CedarBackup2.cli.Options-class.html#_getManagedOnly CedarBackup2.cli.Options.__cmp__ 
CedarBackup2.cli.Options-class.html#__cmp__ CedarBackup2.cli.Options._setOutput CedarBackup2.cli.Options-class.html#_setOutput CedarBackup2.cli.Options._setOwner CedarBackup2.cli.Options-class.html#_setOwner CedarBackup2.cli.Options._setMode CedarBackup2.cli.Options-class.html#_setMode CedarBackup2.cli.Options.__init__ CedarBackup2.cli.Options-class.html#__init__ CedarBackup2.cli.Options._getQuiet CedarBackup2.cli.Options-class.html#_getQuiet CedarBackup2.cli.Options.managedOnly CedarBackup2.cli.Options-class.html#managedOnly CedarBackup2.cli.Options._getManaged CedarBackup2.cli.Options-class.html#_getManaged CedarBackup2.cli.Options.config CedarBackup2.cli.Options-class.html#config CedarBackup2.cli.Options.__repr__ CedarBackup2.cli.Options-class.html#__repr__ CedarBackup2.cli.Options._getVersion CedarBackup2.cli.Options-class.html#_getVersion CedarBackup2.cli.Options._getLogfile CedarBackup2.cli.Options-class.html#_getLogfile CedarBackup2.cli.Options.full CedarBackup2.cli.Options-class.html#full CedarBackup2.cli.Options._getConfig CedarBackup2.cli.Options-class.html#_getConfig CedarBackup2.cli.Options._getStacktrace CedarBackup2.cli.Options-class.html#_getStacktrace CedarBackup2.cli.Options._setFull CedarBackup2.cli.Options-class.html#_setFull CedarBackup2.cli.Options.version CedarBackup2.cli.Options-class.html#version CedarBackup2.cli.Options._setManagedOnly CedarBackup2.cli.Options-class.html#_setManagedOnly CedarBackup2.cli.Options._setDiagnostics CedarBackup2.cli.Options-class.html#_setDiagnostics CedarBackup2.cli.Options._setConfig CedarBackup2.cli.Options-class.html#_setConfig CedarBackup2.tools.span.SpanOptions.validate CedarBackup2.tools.span.SpanOptions-class.html#validate CedarBackup2.cli.Options.logfile CedarBackup2.cli.Options-class.html#logfile CedarBackup2.cli.Options.buildArgumentString CedarBackup2.cli.Options-class.html#buildArgumentString CedarBackup2.cli.Options._setDebug CedarBackup2.cli.Options-class.html#_setDebug 
CedarBackup2.cli.Options._setManaged CedarBackup2.cli.Options-class.html#_setManaged CedarBackup2.cli.Options._setActions CedarBackup2.cli.Options-class.html#_setActions CedarBackup2.cli.Options._getHelp CedarBackup2.cli.Options-class.html#_getHelp CedarBackup2.cli.Options._getOwner CedarBackup2.cli.Options-class.html#_getOwner CedarBackup2.cli.Options._setLogfile CedarBackup2.cli.Options-class.html#_setLogfile CedarBackup2.cli.Options.quiet CedarBackup2.cli.Options-class.html#quiet CedarBackup2.cli.Options.mode CedarBackup2.cli.Options-class.html#mode CedarBackup2.cli.Options.diagnostics CedarBackup2.cli.Options-class.html#diagnostics CedarBackup2.cli.Options.debug CedarBackup2.cli.Options-class.html#debug CedarBackup2.cli.Options.output CedarBackup2.cli.Options-class.html#output CedarBackup2.cli.Options._setVerbose CedarBackup2.cli.Options-class.html#_setVerbose CedarBackup2.cli.Options._getOutput CedarBackup2.cli.Options-class.html#_getOutput CedarBackup2.cli.Options._getActions CedarBackup2.cli.Options-class.html#_getActions CedarBackup2.util.AbsolutePathList CedarBackup2.util.AbsolutePathList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.AbsolutePathList.append CedarBackup2.util.AbsolutePathList-class.html#append CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.AbsolutePathList.extend CedarBackup2.util.AbsolutePathList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ CedarBackup2.util.UnorderedList-class.html#__eq__ CedarBackup2.util.AbsolutePathList.insert CedarBackup2.util.AbsolutePathList-class.html#insert CedarBackup2.util.UnorderedList.__le__ CedarBackup2.util.UnorderedList-class.html#__le__ CedarBackup2.util.UnorderedList.__ge__ CedarBackup2.util.UnorderedList-class.html#__ge__ CedarBackup2.util.Diagnostics 
CedarBackup2.util.Diagnostics-class.html CedarBackup2.util.Diagnostics._getEncoding CedarBackup2.util.Diagnostics-class.html#_getEncoding CedarBackup2.util.Diagnostics.encoding CedarBackup2.util.Diagnostics-class.html#encoding CedarBackup2.util.Diagnostics.locale CedarBackup2.util.Diagnostics-class.html#locale CedarBackup2.util.Diagnostics.__str__ CedarBackup2.util.Diagnostics-class.html#__str__ CedarBackup2.util.Diagnostics.getValues CedarBackup2.util.Diagnostics-class.html#getValues CedarBackup2.util.Diagnostics.interpreter CedarBackup2.util.Diagnostics-class.html#interpreter CedarBackup2.util.Diagnostics.__init__ CedarBackup2.util.Diagnostics-class.html#__init__ CedarBackup2.util.Diagnostics.platform CedarBackup2.util.Diagnostics-class.html#platform CedarBackup2.util.Diagnostics.version CedarBackup2.util.Diagnostics-class.html#version CedarBackup2.util.Diagnostics.printDiagnostics CedarBackup2.util.Diagnostics-class.html#printDiagnostics CedarBackup2.util.Diagnostics._getVersion CedarBackup2.util.Diagnostics-class.html#_getVersion CedarBackup2.util.Diagnostics._getTimestamp CedarBackup2.util.Diagnostics-class.html#_getTimestamp CedarBackup2.util.Diagnostics.timestamp CedarBackup2.util.Diagnostics-class.html#timestamp CedarBackup2.util.Diagnostics._getPlatform CedarBackup2.util.Diagnostics-class.html#_getPlatform CedarBackup2.util.Diagnostics.logDiagnostics CedarBackup2.util.Diagnostics-class.html#logDiagnostics CedarBackup2.util.Diagnostics._buildDiagnosticLines CedarBackup2.util.Diagnostics-class.html#_buildDiagnosticLines CedarBackup2.util.Diagnostics._getInterpreter CedarBackup2.util.Diagnostics-class.html#_getInterpreter CedarBackup2.util.Diagnostics._getMaxLength CedarBackup2.util.Diagnostics-class.html#_getMaxLength CedarBackup2.util.Diagnostics._getLocale CedarBackup2.util.Diagnostics-class.html#_getLocale CedarBackup2.util.Diagnostics.__repr__ CedarBackup2.util.Diagnostics-class.html#__repr__ CedarBackup2.util.DirectedGraph 
CedarBackup2.util.DirectedGraph-class.html CedarBackup2.util.DirectedGraph._DISCOVERED CedarBackup2.util.DirectedGraph-class.html#_DISCOVERED CedarBackup2.util.DirectedGraph.__str__ CedarBackup2.util.DirectedGraph-class.html#__str__ CedarBackup2.util.DirectedGraph.topologicalSort CedarBackup2.util.DirectedGraph-class.html#topologicalSort CedarBackup2.util.DirectedGraph._EXPLORED CedarBackup2.util.DirectedGraph-class.html#_EXPLORED CedarBackup2.util.DirectedGraph._getName CedarBackup2.util.DirectedGraph-class.html#_getName CedarBackup2.util.DirectedGraph.__init__ CedarBackup2.util.DirectedGraph-class.html#__init__ CedarBackup2.util.DirectedGraph.__cmp__ CedarBackup2.util.DirectedGraph-class.html#__cmp__ CedarBackup2.util.DirectedGraph._UNDISCOVERED CedarBackup2.util.DirectedGraph-class.html#_UNDISCOVERED CedarBackup2.util.DirectedGraph.createVertex CedarBackup2.util.DirectedGraph-class.html#createVertex CedarBackup2.util.DirectedGraph._topologicalSort CedarBackup2.util.DirectedGraph-class.html#_topologicalSort CedarBackup2.util.DirectedGraph.createEdge CedarBackup2.util.DirectedGraph-class.html#createEdge CedarBackup2.util.DirectedGraph.name CedarBackup2.util.DirectedGraph-class.html#name CedarBackup2.util.DirectedGraph.__repr__ CedarBackup2.util.DirectedGraph-class.html#__repr__ CedarBackup2.util.ObjectTypeList CedarBackup2.util.ObjectTypeList-class.html CedarBackup2.util.UnorderedList.__lt__ CedarBackup2.util.UnorderedList-class.html#__lt__ CedarBackup2.util.ObjectTypeList.append CedarBackup2.util.ObjectTypeList-class.html#append CedarBackup2.util.ObjectTypeList.__init__ CedarBackup2.util.ObjectTypeList-class.html#__init__ CedarBackup2.util.UnorderedList.__ne__ CedarBackup2.util.UnorderedList-class.html#__ne__ CedarBackup2.util.ObjectTypeList.extend CedarBackup2.util.ObjectTypeList-class.html#extend CedarBackup2.util.UnorderedList.__gt__ CedarBackup2.util.UnorderedList-class.html#__gt__ CedarBackup2.util.UnorderedList.__eq__ 
CedarBackup2.writers.util
    Package CedarBackup2 :: Package writers :: Module util

    Module util


    Provides utilities related to image writers.


    Author: Kenneth J. Pronovici <pronovic@ieee.org>

    Classes
      IsoImage
    Represents an ISO filesystem image.
    Functions

    validateDevice(device, unittest=False)
    Validates a configured device.

    validateScsiId(scsiId)
    Validates a SCSI id string.

    validateDriveSpeed(driveSpeed)
    Validates a drive speed value.

    readMediaLabel(devicePath)
    Reads the media label (volume name) from the indicated device.

    Variables
      logger = logging.getLogger("CedarBackup2.log.writers.util")
      MKISOFS_COMMAND = ['mkisofs']
      VOLNAME_COMMAND = ['volname']
      __package__ = 'CedarBackup2.writers'
    Function Details

    validateDevice(device, unittest=False)


    Validates a configured device. The device must be an absolute path, must exist, and must be writable. The unittest flag turns off validation of the device on disk.

    Parameters:
    • device - Filesystem device path.
    • unittest - Indicates whether we're unit testing.
    Returns:
    Device as a string, for instance "/dev/cdrw"
    Raises:
    • ValueError - If the device value is invalid.
    • ValueError - If some path cannot be encoded properly.
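    The checks described above can be sketched as follows. This is an illustrative reimplementation based on the documented behavior, not the actual CedarBackup2 source; the exact error messages are assumptions.

    ```python
    import os

    def validateDevice(device, unittest=False):
        """Validate a configured device per the documented rules (illustrative sketch)."""
        if device is None:
            raise ValueError("Device must be filled in.")
        if not os.path.isabs(device):
            raise ValueError("Device must be an absolute path.")
        if not unittest:
            # The unittest flag turns off validation of the device on disk.
            if not os.path.exists(device):
                raise ValueError("Device must exist on disk.")
            if not os.access(device, os.W_OK):
                raise ValueError("Device must be writable.")
        return device
    ```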

    validateScsiId(scsiId)


    Validates a SCSI id string. SCSI id must be a string in the form [<method>:]scsibus,target,lun. For Mac OS X (Darwin), we also accept the form IO.*Services[/N].

    Parameters:
    • scsiId - SCSI id for the device.
    Returns:
    SCSI id as a string, for instance "ATA:1,0,0"
    Raises:
    • ValueError - If the SCSI id string is invalid.

    Note: For consistency, if None is passed in, None will be returned.
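    A minimal sketch of this validation is shown below; the regular expressions are assumptions derived from the forms described above, not the patterns used by the real implementation.

    ```python
    import re

    _SCSI_RE = re.compile(r"^([^:]+:)?\d+,\d+,\d+$")   # [<method>:]scsibus,target,lun
    _DARWIN_RE = re.compile(r"^IO.*Services(/\d+)?$")  # Mac OS X (Darwin) form

    def validateScsiId(scsiId):
        """Validate a SCSI id string (illustrative sketch of the documented rules)."""
        if scsiId is None:
            return None  # for consistency, None passes through
        if _SCSI_RE.match(scsiId) or _DARWIN_RE.match(scsiId):
            return scsiId
        raise ValueError("SCSI id is not in a valid form.")
    ```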

    validateDriveSpeed(driveSpeed)


    Validates a drive speed value. Drive speed must be an integer which is >= 1.

    Parameters:
    • driveSpeed - Speed at which the drive writes.
    Returns:
    Drive speed as an integer
    Raises:
    • ValueError - If the drive speed value is invalid.

    Note: For consistency, if None is passed in, None will be returned.
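    The documented rules (integer >= 1, None passed through) amount to a sketch like this; the error messages are assumptions, not the real implementation's wording.

    ```python
    def validateDriveSpeed(driveSpeed):
        """Validate a drive speed value (illustrative sketch of the documented rules)."""
        if driveSpeed is None:
            return None  # for consistency, None passes through
        try:
            speed = int(driveSpeed)
        except (TypeError, ValueError):
            raise ValueError("Drive speed must be an integer.")
        if speed < 1:
            raise ValueError("Drive speed must be >= 1.")
        return speed
    ```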

    readMediaLabel(devicePath)


    Reads the media label (volume name) from the indicated device. The volume name is read using the volname command.

    Parameters:
    • devicePath - Device path to read from
    Returns:
    Media label as a string, or None if there is no name or it could not be read.
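    Reading the label via the volname command might look like the sketch below. The error handling and decoding details are assumptions based on the description above ("None if there is no name or it could not be read"), not the actual source.

    ```python
    import subprocess

    VOLNAME_COMMAND = ["volname"]

    def readMediaLabel(devicePath):
        """Read the media label using volname, returning None on any failure (sketch)."""
        try:
            output = subprocess.check_output(VOLNAME_COMMAND + [devicePath],
                                             stderr=subprocess.DEVNULL)
        except (OSError, subprocess.CalledProcessError):
            return None  # unreadable device, no label, or volname not installed
        label = output.decode("utf-8", "replace").strip()
        return label if label else None
    ```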

    CedarBackup2.config.CollectConfig
    Package CedarBackup2 :: Module config :: Class CollectConfig

    Class CollectConfig


    object --+
             |
            CollectConfig
    

    Class representing a Cedar Backup collect configuration.

    The following restrictions exist on data in this class:

    • The target directory must be an absolute path.
    • The collect mode must be one of the values in VALID_COLLECT_MODES.
    • The archive mode must be one of the values in VALID_ARCHIVE_MODES.
    • The ignore file must be a non-empty string.
    • Each of the paths in absoluteExcludePaths must be an absolute path.
    • The collect file list must be a list of CollectFile objects.
    • The collect directory list must be a list of CollectDir objects.

    For the absoluteExcludePaths list, validation is accomplished through the util.AbsolutePathList list implementation that overrides common list methods and transparently does the absolute path validation for us.

    For the collectFiles and collectDirs list, validation is accomplished through the util.ObjectTypeList list implementation that overrides common list methods and transparently ensures that each element has an appropriate type.


    Note: Lists within this class are "unordered" for equality comparisons.

    Instance Methods

    __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None)
    Constructor for the CollectConfig class.

    __repr__(self)
    Official string representation for class instance.

    __str__(self)
    Informal string representation for class instance.

    __cmp__(self, other)
    Definition of equals operator for this class.

    _setTargetDir(self, value)
    Property target used to set the target directory.

    _getTargetDir(self)
    Property target used to get the target directory.

    _setCollectMode(self, value)
    Property target used to set the collect mode.

    _getCollectMode(self)
    Property target used to get the collect mode.

    _setArchiveMode(self, value)
    Property target used to set the archive mode.

    _getArchiveMode(self)
    Property target used to get the archive mode.

    _setIgnoreFile(self, value)
    Property target used to set the ignore file.

    _getIgnoreFile(self)
    Property target used to get the ignore file.

    _setAbsoluteExcludePaths(self, value)
    Property target used to set the absolute exclude paths list.

    _getAbsoluteExcludePaths(self)
    Property target used to get the absolute exclude paths list.

    _setExcludePatterns(self, value)
    Property target used to set the exclude patterns list.

    _getExcludePatterns(self)
    Property target used to get the exclude patterns list.

    _setCollectFiles(self, value)
    Property target used to set the collect files list.

    _getCollectFiles(self)
    Property target used to get the collect files list.

    _setCollectDirs(self, value)
    Property target used to set the collect dirs list.

    _getCollectDirs(self)
    Property target used to get the collect dirs list.
    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties
      targetDir
    Directory to collect files into.
      collectMode
    Default collect mode.
      archiveMode
    Default archive mode for collect files.
      ignoreFile
    Default ignore file name.
      absoluteExcludePaths
    List of absolute paths to exclude.
      excludePatterns
    List of regular expression patterns to exclude.
      collectFiles
    List of collect files.
      collectDirs
    List of collect directories.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, targetDir=None, collectMode=None, archiveMode=None, ignoreFile=None, absoluteExcludePaths=None, excludePatterns=None, collectFiles=None, collectDirs=None)
    (Constructor)

    source code 

    Constructor for the CollectConfig class.

    Parameters:
    • targetDir - Directory to collect files into.
    • collectMode - Default collect mode.
    • archiveMode - Default archive mode for collect files.
    • ignoreFile - Default ignore file name.
    • absoluteExcludePaths - List of absolute paths to exclude.
    • excludePatterns - List of regular expression patterns to exclude.
    • collectFiles - List of collect files.
    • collectDirs - List of collect directories.
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. Lists within this class are "unordered" for equality comparisons.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.
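    The "unordered" list semantics described above can be illustrated with a small sketch. This mirrors the documented intent, not necessarily the exact implementation in config.py:

    ```python
    # Assumption: "unordered" equality means two lists compare equal when they
    # contain the same elements, regardless of order.
    def unordered_equal(left, right):
        return sorted(left) == sorted(right)

    assert unordered_equal(["/etc", "/home"], ["/home", "/etc"])  # order is ignored
    assert not unordered_equal(["/etc"], ["/etc", "/home"])       # content still matters
    ```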

    _setTargetDir(self, value)

    source code 

    Property target used to set the target directory. The value must be an absolute path if it is not None. It does not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.
    • ValueError - If the value cannot be encoded properly.

    _setCollectMode(self, value)

    source code 

    Property target used to set the collect mode. If not None, the mode must be one of VALID_COLLECT_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setArchiveMode(self, value)

    source code 

    Property target used to set the archive mode. If not None, the mode must be one of VALID_ARCHIVE_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setIgnoreFile(self, value)

    source code 

    Property target used to set the ignore file. The value must be a non-empty string if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value cannot be encoded properly.

    _setAbsoluteExcludePaths(self, value)

    source code 

    Property target used to set the absolute exclude paths list. Either the value must be None or each element must be an absolute path. Elements do not have to exist on disk at the time of assignment.

    Raises:
    • ValueError - If the value is not an absolute path.

    _setCollectFiles(self, value)

    source code 

    Property target used to set the collect files list. Either the value must be None or each element must be a CollectFile.

    Raises:
    • ValueError - If the value is not a CollectFile.

    _setCollectDirs(self, value)

    source code 

    Property target used to set the collect dirs list. Either the value must be None or each element must be a CollectDir.

    Raises:
    • ValueError - If the value is not a CollectDir.

    Property Details [hide private]

    targetDir

    Directory to collect files into.

    Get Method:
    _getTargetDir(self) - Property target used to get the target directory.
    Set Method:
    _setTargetDir(self, value) - Property target used to set the target directory.

    collectMode

    Default collect mode.

    Get Method:
    _getCollectMode(self) - Property target used to get the collect mode.
    Set Method:
    _setCollectMode(self, value) - Property target used to set the collect mode.

    archiveMode

    Default archive mode for collect files.

    Get Method:
    _getArchiveMode(self) - Property target used to get the archive mode.
    Set Method:
    _setArchiveMode(self, value) - Property target used to set the archive mode.

    ignoreFile

    Default ignore file name.

    Get Method:
    _getIgnoreFile(self) - Property target used to get the ignore file.
    Set Method:
    _setIgnoreFile(self, value) - Property target used to set the ignore file.

    absoluteExcludePaths

    List of absolute paths to exclude.

    Get Method:
    _getAbsoluteExcludePaths(self) - Property target used to get the absolute exclude paths list.
    Set Method:
    _setAbsoluteExcludePaths(self, value) - Property target used to set the absolute exclude paths list.

    excludePatterns

    List of regular expression patterns to exclude.

    Get Method:
    _getExcludePatterns(self) - Property target used to get the exclude patterns list.
    Set Method:
    _setExcludePatterns(self, value) - Property target used to set the exclude patterns list.

    collectFiles

    List of collect files.

    Get Method:
    _getCollectFiles(self) - Property target used to get the collect files list.
    Set Method:
    _setCollectFiles(self, value) - Property target used to set the collect files list.

    collectDirs

    List of collect directories.

    Get Method:
    _getCollectDirs(self) - Property target used to get the collect dirs list.
    Set Method:
    _setCollectDirs(self, value) - Property target used to set the collect dirs list.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.ByteQuantity-class.html
    Package CedarBackup2 :: Module config :: Class ByteQuantity

    Class ByteQuantity

    source code

    object --+
             |
            ByteQuantity
    

    Class representing a byte quantity.

    A byte quantity has both a quantity and a byte-related unit. Units are maintained using the constants from util.py. If no units are provided, UNIT_BYTES is assumed.

    The quantity is maintained internally as a string so that issues of precision can be avoided. It really isn't possible to store a floating point number here while being able to losslessly translate back and forth between XML and object representations. (Perhaps the Python 2.4 Decimal class would have been an option, but I originally wanted to stay compatible with Python 2.3.)

    Even though the quantity is maintained as a string, the string must represent a valid positive floating point number. Technically, any floating point string format supported by Python is allowable. However, it does not make sense to have a negative quantity of bytes in this context.
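    The precision argument above can be demonstrated with a minimal sketch. This is not the real ByteQuantity class, and the actual UNIT_* constants in util.py are symbolic identifiers rather than the multipliers assumed here:

    ```python
    # Assumed multipliers; the real code maps UNIT_* constants to factors elsewhere.
    KBYTES, MBYTES, GBYTES = 1024.0, 1024.0 ** 2, 1024.0 ** 3

    class Quantity(object):
        def __init__(self, quantity, units=1.0):
            if float(quantity) < 0:        # must parse as a non-negative float
                raise ValueError("Quantity cannot be negative.")
            self.quantity = str(quantity)  # kept as a string, so XML round-trips losslessly
            self.units = units

        @property
        def bytes(self):
            return float(self.quantity) * self.units

    quantity = Quantity("2.5", GBYTES)
    assert quantity.quantity == "2.5"      # the original text survives unchanged
    assert quantity.bytes == 2.5 * 1024 ** 3
    ```

    Because the stored value is exactly the string the user wrote, serializing it back to XML reproduces the original input, which a float round-trip cannot always guarantee.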

    Instance Methods [hide private]
     
    __init__(self, quantity=None, units=None)
    Constructor for the ByteQuantity class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setQuantity(self, value)
    Property target used to set the quantity. The value must be interpretable as a float if it is not None.
    source code
     
    _getQuantity(self)
    Property target used to get the quantity.
    source code
     
    _setUnits(self, value)
    Property target used to set the units value.
    source code
     
    _getUnits(self)
    Property target used to get the units value.
    source code
     
    _getBytes(self)
    Property target used to return the byte quantity as a floating point number.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      quantity
    Byte quantity, as a string
      units
    Units for byte quantity, for instance UNIT_BYTES
      bytes
    Byte quantity, as a floating point number.

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, quantity=None, units=None)
    (Constructor)

    source code 

    Constructor for the ByteQuantity class.

    Parameters:
    • quantity - Quantity of bytes, something interpretable as a float
    • units - Unit of bytes, one of VALID_BYTE_UNITS
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class. A ByteQuantity may be compared either to another ByteQuantity or to a simple numeric value.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setQuantity(self, value)

    source code 

    Property target used to set the quantity. The value must be interpretable as a float if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number.
    • ValueError - If the value is less than zero.

    _setUnits(self, value)

    source code 

    Property target used to set the units value. If not None, the units value must be one of the values in VALID_BYTE_UNITS.

    Raises:
    • ValueError - If the value is not valid.

    _getBytes(self)

    source code 

    Property target used to return the byte quantity as a floating point number. If there is no quantity set, then a value of 0.0 is returned.


    Property Details [hide private]

    quantity

    Byte quantity, as a string

    Get Method:
    _getQuantity(self) - Property target used to get the quantity.
    Set Method:
    _setQuantity(self, value) - Property target used to set the quantity. The value must be interpretable as a float if it is not None.

    units

    Units for byte quantity, for instance UNIT_BYTES

    Get Method:
    _getUnits(self) - Property target used to get the units value.
    Set Method:
    _setUnits(self, value) - Property target used to set the units value.

    bytes

    Byte quantity, as a floating point number.

    Get Method:
    _getBytes(self) - Property target used to return the byte quantity as a floating point number.

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.util.Pipe-class.html
    Package CedarBackup2 :: Module util :: Class Pipe

    Class Pipe

    source code

          object --+    
                   |    
    subprocess.Popen --+
                       |
                      Pipe
    

    Specialized pipe class for use by executeCommand.

    The executeCommand function needs a specialized way of interacting with a pipe. First, executeCommand only reads from the pipe, and never writes to it. Second, executeCommand needs a way to discard all output written to stderr, as a means of simulating the shell 2>/dev/null construct.
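    The idea can be sketched with a small Popen subclass. This is an illustration of the behavior described above, not the actual Pipe implementation; in particular, folding stderr into stdout in the non-ignore case is an assumption:

    ```python
    import os
    import subprocess
    import sys

    class ReadOnlyPipe(subprocess.Popen):
        """Read-only pipe that can discard stderr, like the shell's 2>/dev/null."""
        def __init__(self, cmd, bufsize=-1, ignoreStderr=False):
            if ignoreStderr:
                stderr = open(os.devnull, "wb")   # discard everything on stderr
            else:
                stderr = subprocess.STDOUT        # assumption: fold stderr into stdout
            subprocess.Popen.__init__(self, cmd, bufsize=bufsize,
                                      stdout=subprocess.PIPE, stderr=stderr)

    # The child writes to both streams; only stdout survives.
    pipe = ReadOnlyPipe([sys.executable, "-c",
                         "import sys; print('out'); sys.stderr.write('noise')"],
                        ignoreStderr=True)
    output = pipe.stdout.read().decode()
    pipe.wait()
    assert output.strip() == "out"    # the stderr "noise" was discarded
    ```

    The caller only ever reads from the pipe, matching the statement that executeCommand never writes to it.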

    Instance Methods [hide private]
     
    __init__(self, cmd, bufsize=-1, ignoreStderr=False)
    Create new Popen instance.
    source code

    Inherited from subprocess.Popen: __del__, communicate, kill, pipe_cloexec, poll, send_signal, terminate, wait

    Inherited from subprocess.Popen (private): _close_fds, _communicate, _communicate_with_poll, _communicate_with_select, _execute_child, _find_w9xpopen, _get_handles, _handle_exitstatus, _internal_poll, _make_inheritable, _readerthread, _set_cloexec_flag, _translate_newlines

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

    Class Variables [hide private]

    Inherited from subprocess.Popen (private): _child_created

    Properties [hide private]

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, cmd, bufsize=-1, ignoreStderr=False)
    (Constructor)

    source code 

    Create new Popen instance.

    Overrides: object.__init__
    (inherited documentation)

    CedarBackup2-2.26.5/doc/interface/CedarBackup2.config.BlankBehavior-class.html
    Package CedarBackup2 :: Module config :: Class BlankBehavior

    Class BlankBehavior

    source code

    object --+
             |
            BlankBehavior
    

    Class representing optimized store-action media blanking behavior.

    The following restrictions exist on data in this class:

    • The blanking mode must be one of the values in VALID_BLANK_MODES.
    • The blanking factor must be a positive floating point number.
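    The restrictions above can be expressed as a standalone check. This is a hypothetical sketch: the real validation lives in the _setBlankMode and _setBlankFactor property targets, and the concrete VALID_BLANK_MODES values shown here are assumed:

    ```python
    VALID_BLANK_MODES = ["daily", "weekly"]   # assumed values, not from the source

    def checkBlankBehavior(blankMode, blankFactor):
        """Raise ValueError if either field violates the documented restrictions."""
        if blankMode is not None and blankMode not in VALID_BLANK_MODES:
            raise ValueError("Blank mode must be one of %s." % VALID_BLANK_MODES)
        if blankFactor is not None:
            if blankFactor == "":
                raise ValueError("Blank factor cannot be an empty string.")
            if float(blankFactor) < 0:    # float() also rejects non-numeric strings
                raise ValueError("Blank factor cannot be negative.")

    checkBlankBehavior("weekly", "1.5")   # valid combination passes silently
    ```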
    Instance Methods [hide private]
     
    __init__(self, blankMode=None, blankFactor=None)
    Constructor for the BlankBehavior class.
    source code
     
    __repr__(self)
    Official string representation for class instance.
    source code
     
    __str__(self)
    Informal string representation for class instance.
    source code
     
    __cmp__(self, other)
    Definition of equals operator for this class.
    source code
     
    _setBlankMode(self, value)
    Property target used to set the blanking mode.
    source code
     
    _getBlankMode(self)
    Property target used to get the blanking mode.
    source code
     
    _setBlankFactor(self, value)
    Property target used to set the blanking factor.
    source code
     
    _getBlankFactor(self)
    Property target used to get the blanking factor.
    source code

    Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Properties [hide private]
      blankMode
    Blanking mode
      blankFactor
    Blanking factor

    Inherited from object: __class__

    Method Details [hide private]

    __init__(self, blankMode=None, blankFactor=None)
    (Constructor)

    source code 

    Constructor for the BlankBehavior class.

    Parameters:
    • blankMode - Blanking mode
    • blankFactor - Blanking factor
    Raises:
    • ValueError - If one of the values is invalid.
    Overrides: object.__init__

    __repr__(self)
    (Representation operator)

    source code 

    Official string representation for class instance.

    Overrides: object.__repr__

    __str__(self)
    (Informal representation operator)

    source code 

    Informal string representation for class instance.

    Overrides: object.__str__

    __cmp__(self, other)
    (Comparison operator)

    source code 

    Definition of equals operator for this class.

    Parameters:
    • other - Other object to compare to.
    Returns:
    -1/0/1 depending on whether self is <, = or > other.

    _setBlankMode(self, value)

    source code 

    Property target used to set the blanking mode. The value must be one of VALID_BLANK_MODES.

    Raises:
    • ValueError - If the value is not valid.

    _setBlankFactor(self, value)

    source code 

    Property target used to set the blanking factor. The value must be a non-empty string representing a positive floating point number if it is not None.

    Raises:
    • ValueError - If the value is an empty string.
    • ValueError - If the value is not a valid floating point number.
    • ValueError - If the value is less than zero.

    Property Details [hide private]

    blankMode

    Blanking mode

    Get Method:
    _getBlankMode(self) - Property target used to get the blanking mode.
    Set Method:
    _setBlankMode(self, value) - Property target used to set the blanking mode.

    blankFactor

    Blanking factor

    Get Method:
    _getBlankFactor(self) - Property target used to get the blanking factor.
    Set Method:
    _setBlankFactor(self, value) - Property target used to set the blanking factor.

    CedarBackup2-2.26.5/doc/release.txt

    I am pleased to announce the release of Cedar Backup v2.0. This release has been more than a year in the works. During this time, the main focus was to clean up the codebase and the documentation, making the whole project easier to read, maintain, debug and enhance. Another major priority was validation, and the new implementation relies heavily on automated regression testing. Existing enhancement requests took a back seat to this cleanup effort, but are planned for future releases.

    The old v1.0 code tree will still be maintained for security support and major bug fixes, but all new development will take place on the v2.0 code tree. The new Debian package is called cedar-backup2 rather than cedar-backup. The old and new packages cannot be installed at the same time, but you can fall back to your existing cedar-backup package if you have problems with the new cedar-backup2 package.

    This should be considered a high-quality beta release. It has been through testing on my personal systems (all running various Debian releases), but could still harbour unknown bugs. If you have time, please report back to the cedar-backup-users mailing list about your experience with this new version, good or bad.

    DOWNLOAD

    Information about how to download Cedar Backup can be found on the Cedar Solutions website: http://cedar-solutions.com/software/cedar-backup

    Cedar Solutions provides binary packages for Debian 'sarge' and 'woody', and source packages for other Linux platforms.
    DOCUMENTATION

    The newly-rewritten Cedar Backup Software Manual can be found on the Cedar Solutions website:

    Single-page HTML: http://cedar-solutions.com/cedar-backup/manual/manual.html
    Multiple-page HTML: http://cedar-solutions.com/cedar-backup/manual/index.html
    Portable Document Format (PDF): http://cedar-solutions.com/cedar-backup/manual/manual.pdf
    Plaintext: http://cedar-solutions.com/cedar-backup/manual/manual.txt

    Most users will want to look at the multiple-page HTML version. Users who wish to print the software manual should use the PDF version.

    MAJOR IMPROVEMENTS IN THIS RELEASE

    The v2.0 release represents a ground-up rewrite of the Cedar Backup codebase using Python 2.3. The following is a partial list of major changes, enhancements and improvements:

    - Code is better structured, with a sensible mix of classes and functions.
    - Documentation has been completely rewritten from scratch in DocBook Lite.
    - Unicode filenames are now natively supported without Python site changes.
    - The runtime 'validate' action now checks for many more config problems.
    - There are no longer any restrictions related to backups spanning midnight.
    - Most lower-level code is intended to be general-purpose "library" code.
    - Configuration is standardized in a common class, so 3rd parties can use it.
    - Collect and stage configuration now support various additional options.
    - Package now supports 3rd-party backup actions via an extension mechanism.
    - Most library code is thoroughly tested via pyunit (1700+ individual tests).
    - Code structure allows for easy addition of other backup types (i.e. DVD).
    - Code now uses Python's integrated logging module, resulting in realtime logs.
    - Collect action uses Python's tar module rather than shelling out to GNU tar.
    - Internal use of pipes should now be more robust and less prone to problems.

    USER-VISIBLE CHANGES IN THIS RELEASE

    Cedar Backup v2.0 requires Python 2.3 or better. Cedar Backup v1.0 only required Python 2.2.
    Cedar Backup configuration files that were valid for the v1.0 release should still be valid for the v2.0 release, with one exception: the tarz (.tar.Z) backup format is no longer supported. This is because the Python tar module does not support this format. If there is sufficient interest, this backup format could be added again by shelling out to an external compress program.

    The Cedar Backup command-line interface has changed slightly, but the changes should not present a problem for most users. In Cedar Backup v1.0, backup actions (collect, stage, store, purge) were specified on the command line with switches, i.e. --collect. This is not considered a good practice, so v2.0 instead accepts actions as plain arguments specified after all switches. For instance, the v1.0 command "cback --full --collect" is converted to "cback --full collect" in v2.0.

    WHAT IS CEDAR BACKUP?

    Cedar Backup is a Python package that supports secure backups of files on local and remote hosts to CD-R or CD-RW media. The package is focused around weekly backups to a single disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, the script can write multisession discs, allowing you to add to a disc in a daily fashion. Directories are backed up using tar and may be compressed using gzip or bzip2.

    CedarBackup2-2.26.5/doc/manual/ch05s04.html

    Setting up a Client Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Note

    See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure the master in your backup pool.

    You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

    To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

    user@machine> cat ~/.ssh/id_rsa.pub
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
    uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
    HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
             

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).

    You should create a collect directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

    CedarBackup2-2.26.5/doc/manual/ch05s02.html

    Configuration File Format

    Cedar Backup is configured through an XML [19] configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

    All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. [20] The extensions section is always optional and can be omitted unless extensions are in use.

    Note

    Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset.

    Sample Configuration File

    Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample.

    This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

    <?xml version="1.0"?>
    <cb_config>
       <reference>
          <author>Kenneth J. Pronovici</author>
          <revision>1.3</revision>
          <description>Sample</description>
       </reference>
       <options>
          <starting_day>tuesday</starting_day>
          <working_dir>/opt/backup/tmp</working_dir>
          <backup_user>backup</backup_user>
          <backup_group>group</backup_group>
          <rcp_command>/usr/bin/scp -B</rcp_command>
       </options>
       <peers>
          <peer>
             <name>debian</name>
             <type>local</type>
             <collect_dir>/opt/backup/collect</collect_dir>
          </peer>
       </peers>
       <collect>
          <collect_dir>/opt/backup/collect</collect_dir>
          <collect_mode>daily</collect_mode>
          <archive_mode>targz</archive_mode>
          <ignore_file>.cbignore</ignore_file>
          <dir>
             <abs_path>/etc</abs_path>
             <collect_mode>incr</collect_mode>
          </dir>
          <file>
             <abs_path>/home/root/.profile</abs_path>
             <collect_mode>weekly</collect_mode>
          </file>
       </collect>
       <stage>
          <staging_dir>/opt/backup/staging</staging_dir>
       </stage>
       <store>
          <source_dir>/opt/backup/staging</source_dir>
          <media_type>cdrw-74</media_type>
          <device_type>cdwriter</device_type>
          <target_device>/dev/cdrw</target_device>
          <target_scsi_id>0,0,0</target_scsi_id>
          <drive_speed>4</drive_speed>
          <check_data>Y</check_data>
          <check_media>Y</check_media>
          <warn_midnite>Y</warn_midnite>
       </store>
       <purge>
          <dir>
          <abs_path>/opt/backup/staging</abs_path>
             <retain_days>7</retain_days>
          </dir>
          <dir>
             <abs_path>/opt/backup/collect</abs_path>
             <retain_days>0</retain_days>
          </dir>
       </purge>
    </cb_config>
             

    Reference Configuration

    The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

    This is an example reference configuration section:

    <reference>
       <author>Kenneth J. Pronovici</author>
       <revision>Revision 1.3</revision>
       <description>Sample</description>
       <generator>Yet to be Written Config Tool (tm)</generator>
    </reference>
             

    The following elements are part of the reference configuration section:

    author

    Author of the configuration file.

    Restrictions: None

    revision

    Revision of the configuration file.

    Restrictions: None

    description

    Description of the configuration file.

    Restrictions: None

    generator

    Tool that generated the configuration file, if any.

    Restrictions: None

    Options Configuration

    The options configuration section contains configuration options that are not specific to any one action.

    This is an example options configuration section:

    <options>
       <starting_day>tuesday</starting_day>
       <working_dir>/opt/backup/tmp</working_dir>
       <backup_user>backup</backup_user>
       <backup_group>backup</backup_group>
       <rcp_command>/usr/bin/scp -B</rcp_command>
       <rsh_command>/usr/bin/ssh</rsh_command>
       <cback_command>/usr/bin/cback</cback_command>
       <managed_actions>collect, purge</managed_actions>
       <override>
          <command>cdrecord</command>
          <abs_path>/opt/local/bin/cdrecord</abs_path>
       </override>
       <override>
          <command>mkisofs</command>
          <abs_path>/opt/local/bin/mkisofs</abs_path>
       </override>
       <pre_action_hook>
          <action>collect</action>
          <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
       </pre_action_hook>
       <post_action_hook>
          <action>collect</action>
          <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
       </post_action_hook>
    </options>
             

    The following elements are part of the options configuration section:

    starting_day

    Day that starts the week.

    Cedar Backup is built around the idea of weekly backups. The starting day of the week is the day on which media will be rebuilt from scratch and on which incremental backup information will be cleared.

    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.

    working_dir

    Working (temporary) directory to use for backups.

    This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups.

    The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).

    Restrictions: Must be an absolute path

    backup_user

    Effective user that backups should run as.

    This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced).

    This value is also used as the default remote backup user for remote peers.

    Restrictions: Must be non-empty

    backup_group

    Effective group that backups should run as.

    This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced).

    Restrictions: Must be non-empty

    rcp_command

    Default rcp-compatible copy command for staging.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.

    Restrictions: Must be non-empty

    rsh_command

    Default rsh-compatible command to use for remote shells.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty

    cback_command

    Default cback-compatible command to use on managed remote clients.

    The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Default set of actions that are managed on remote clients.

    This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.

    override

    Command to override with a customized path.

    This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different from the one found on the $PATH. Most users will only use this section when directed to, in order to fix a problem.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    command

    Name of the command to be overridden, e.g. cdrecord.

    Restrictions: Must be a non-empty string.

    abs_path

    The absolute path where the overridden command can be found.

    Restrictions: Must be an absolute path.

    pre_action_hook

    Hook configuring a command to be executed before an action.

    This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    post_action_hook

    Hook configuring a command to be executed after an action.

    This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.
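    Given the parser limitations described above, any hook that needs pipes, shell variables, or subshells is best wrapped in a script and referenced from the command element. This is a minimal sketch; the script path and log location are illustrative, not part of Cedar Backup:

    ```shell
    #!/bin/sh
    # Hypothetical wrapper script, saved for example as
    # /usr/local/bin/pre-collect.sh and referenced from a
    # <pre_action_hook> <command> element.  Anything the cback
    # command-line parser cannot express (variables, subshells,
    # pipes) is fine inside a script like this.
    NOW=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$NOW] pre-collect hook starting" >> /tmp/pre-collect.log
    exit 0
    ```

    Per the changelog, as of version 2.24.4 a non-zero exit status from a hook command causes the failure to be exposed, so make sure your script exits 0 on success.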

    Peers Configuration

    The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

    This is an example peers configuration section:

    <peers>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <ignore_failures>all</ignore_failures>
       </peer>
       <peer>
          <name>machine3</name>
          <type>remote</type>
          <managed>Y</managed>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <rcp_command>/usr/bin/scp</rcp_command>
          <rsh_command>/usr/bin/ssh</rsh_command>
          <cback_command>/usr/bin/cback</cback_command>
          <managed_actions>collect, purge</managed_actions>
       </peer>
    </peers>
             

    The following elements are part of the peers configuration section:

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer managed by a master.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    managed

    Indicates whether this peer is managed.

    A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".
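    For example, a remote peer that is only up intermittently might be configured like this (the hostname is illustrative; with the "weekly" mode, failures are reported only for start-of-week or full backups):

    ```xml
    <peer>
       <name>laptop</name>
       <type>remote</type>
       <backup_user>backup</backup_user>
       <collect_dir>/opt/backup/collect</collect_dir>
       <ignore_failures>weekly</ignore_failures>
    </peer>
    ```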

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    rsh_command

    The rsh-compatible command for this peer.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.

    Restrictions: Must be non-empty

    cback_command

    The cback-compatible command for this peer.

    The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Set of actions that are managed for this peer.

    This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section.

    Restrictions: Must be non-empty.

    Collect Configuration

    The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

    In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.

    This is an example collect configuration section:

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <exclude>
          <abs_path>/etc</abs_path>
          <pattern>.*\.conf</pattern>
       </exclude>
       <file>
          <abs_path>/home/root/.profile</abs_path>
       </file>
       <dir>
          <abs_path>/etc</abs_path>
       </dir>
       <dir>
          <abs_path>/var/log</abs_path>
          <collect_mode>incr</collect_mode>
       </dir>
       <dir>
          <abs_path>/opt</abs_path>
          <collect_mode>weekly</collect_mode>
          <exclude>
             <abs_path>/opt/large</abs_path>
             <rel_path>backup</rel_path>
             <pattern>.*tmp</pattern>
          </exclude>
       </dir>
    </collect>
             

    The following elements are part of the collect configuration section:

    collect_dir

    Directory to collect files into.

    On a client, this is the directory which tarfiles for individual collect directories are written into. The master then stages files from this directory into its own staging directory.

    This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.

    Restrictions: Must be an absolute path

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Default archive mode for collect files.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Default ignore file name.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be non-empty
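    For example, a user who does not want their own ~/tmp directory backed up could exclude it themselves (assuming the sample configuration's .cbignore name):

    ```shell
    # Create the ignore indicator file; Cedar Backup will then skip
    # ~/tmp entirely, as if it were excluded in configuration.
    mkdir -p "$HOME/tmp"
    touch "$HOME/tmp/.cbignore"
    ```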

    recursion_level

    Recursion level to use when collecting directories.

    This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory.

    Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory.

    The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If, instead, you want one archive file per home directory, you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc.

    Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high.

    This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

    Restrictions: Must be an integer.
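    The /home example above might look like this in configuration (a sketch; it assumes the default collect and archive modes are supplied elsewhere in the collect section):

    ```xml
    <dir>
       <abs_path>/home</abs_path>
       <recursion_level>1</recursion_level>
    </dir>
    ```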

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

    This section is optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    pattern

    A pattern to be recursively excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty
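    The implicit anchoring means a pattern must cover the entire path, not just a substring of it. Cedar Backup itself uses Python regular expressions, but grep's whole-line matching (-x) approximates the behavior well enough for illustration:

    ```shell
    # The pattern .*tmp must match the WHOLE path, as if written ^.*tmp$.
    echo "/opt/scratch/tmp" | grep -Eqx '.*tmp' && echo "excluded"
    echo "/opt/tmp/data"    | grep -Eqx '.*tmp' || echo "not excluded"
    ```

    The first path ends in tmp, so the anchored pattern matches; the second contains tmp but does not end with it, so it would not be excluded.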

    file

    A file to be collected.

    This is a subsection which contains information about a specific file to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect file subsection contains the following fields:

    abs_path

    Absolute path of the file to collect.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this file

    The collect mode describes how frequently a file is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this file.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    dir

    A directory to be collected.

    This is a subsection which contains information about a specific directory to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to collect.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level.

    The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc.

    Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this directory

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this directory.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. if it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Ignore file name for this directory.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This field is optional. If it doesn't exist, the backup will use the default ignore file name.

    Restrictions: Must be non-empty.

    link_depth

    Link depth value to use for this directory.

    The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.

    This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

    Restrictions: If set, must be an integer ≥ 0.

    dereference

    Whether to dereference soft links.

    If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well.

    This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

    This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

    Restrictions: Must be a boolean (Y or N).

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    rel_path

    A relative path to be recursively excluded from the backup.

    The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web, a configured relative path of something/else would exclude the path /opt/web/something/else.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
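
    Putting the fields above together, a complete collect directory subsection might look like the following sketch. The paths, pattern and values here are hypothetical, chosen only to illustrate the fields described in this section:

    ```xml
    <dir>
       <abs_path>/etc</abs_path>
       <collect_mode>incr</collect_mode>
       <archive_mode>tarbz2</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <link_depth>1</link_depth>
       <dereference>Y</dereference>
       <exclude>
          <abs_path>/etc/shadow</abs_path>
          <rel_path>ssl/private</rel_path>
          <pattern>.*\.bak</pattern>
       </exclude>
    </dir>
    ```

    Only abs_path is required; every other field shown here is optional and falls back to the defaults described above when omitted.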

    Stage Configuration

    The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged.

    This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

    This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
    </stage>
             

    This is an example stage configuration section that overrides the default list of peers:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
    </stage>
             

    The following elements are part of the stage configuration section:

    staging_dir

    Directory to stage files into.

    This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

    This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

    Restrictions: Must be an absolute path

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.
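
    As a concrete illustration, the scp form discussed above would appear inside a remote peer subsection like this (a sketch; the peer name and directories are hypothetical, and the -B option is the batch-mode flag described above):

    ```xml
    <peer>
       <name>machine2</name>
       <type>remote</type>
       <backup_user>backup</backup_user>
       <collect_dir>/opt/backup/collect</collect_dir>
       <rcp_command>/usr/bin/scp -B</rcp_command>
    </peer>
    ```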

    Store Configuration

    The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

    This is an example store configuration section:

    <store>
       <source_dir>/opt/backup/stage</source_dir>
       <media_type>cdrw-74</media_type>
       <device_type>cdwriter</device_type>
       <target_device>/dev/cdrw</target_device>
       <target_scsi_id>0,0,0</target_scsi_id>
       <drive_speed>4</drive_speed>
       <check_data>Y</check_data>
       <check_media>Y</check_media>
       <warn_midnite>Y</warn_midnite>
       <no_eject>N</no_eject>
       <refresh_media_delay>15</refresh_media_delay>
       <eject_delay>2</eject_delay>
       <blank_behavior>
          <mode>weekly</mode>
          <factor>1.3</factor>
       </blank_behavior>
    </store>
             

    The following elements are part of the store configuration section:

    source_dir

    Directory whose contents should be written to media.

    This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

    Restrictions: Must be an absolute path

    device_type

    Type of the device used to write the media.

    This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

    This field is optional. If it doesn't exist, the cdwriter device type is assumed.

    Restrictions: If set, must be either cdwriter or dvdwriter.

    media_type

    Type of the media in the device.

    Unless you want to throw away a backup disc every week, you are probably best off using rewritable media.

    You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called “Media and Device Types” (in Chapter 2, Basic Concepts).

    Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

    target_device

    Filesystem device name for writer device.

    This value is required for both CD writers and DVD writers.

    This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.

    In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified.

    Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

    Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

    Restrictions: Must be an absolute path.

    target_scsi_id

    SCSI id for the writer device.

    This value is optional for CD writers and is ignored for DVD writers.

    If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord.

    Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

    For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun.

    An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord).

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Restrictions: If set, must be a valid SCSI identifier.
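
    For example, a store section for a CD writer addressed via the specialized-method form might contain the following pair of elements. This is only a sketch: the device path and bus/target/lun values are hypothetical, and the right values depend on your hardware (see the section called “Configuring your Writer Device”):

    ```xml
    <target_device>/dev/cdrw</target_device>
    <target_scsi_id>ATA:1,0,0</target_scsi_id>
    ```

    If your hardware works through the normal filesystem device path, leave target_scsi_id out entirely and configure only target_device.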

    drive_speed

    Speed of the drive, i.e. 2 for a 2x device.

    This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.

    For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

    Restrictions: If set, must be an integer ≥ 1.

    check_data

    Whether the media should be validated.

    This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.

    Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    check_media

    Whether the media should be checked before writing to it.

    By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.)

    If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day.

    Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    no_eject

    Indicates that the writer device should not be ejected.

    Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session).

    For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will not ever issue an eject command to your writer.

    Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    refresh_media_delay

    Number of seconds to delay after refreshing media

    This field is optional. If it doesn't exist, no delay will occur.

    Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

    Restrictions: If set, must be an integer ≥ 1.

    eject_delay

    Number of seconds to delay after ejecting the tray

    This field is optional. If it doesn't exist, no delay will occur.

    If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

    Restrictions: If set, must be an integer ≥ 1.

    blank_behavior

    Optimized blanking strategy.

    For more information about Cedar Backup's optimized blanking strategy, see the section called “Optimized Blanking Strategy”.

    This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

    blank_mode

    Blanking mode.

    Restrictions: Must be one of daily or weekly.

    blank_factor

    Blanking factor.

    Restrictions: Must be a floating point number ≥ 0.

    Purge Configuration

    The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

    Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0).

    If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

    You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

    This is an example purge configuration section:

    <purge>
       <dir>
          <abs_path>/opt/backup/stage</abs_path>
          <retain_days>7</retain_days>
       </dir>
       <dir>
          <abs_path>/opt/backup/collect</abs_path>
          <retain_days>0</retain_days>
       </dir>
    </purge>
             

    The following elements are part of the purge configuration section:

    dir

    A directory to purge within.

    This is a subsection which contains information about a specific directory to purge within.

    This section can be repeated as many times as is necessary. At least one purge directory must be configured.

    The purge directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to purge within.

    The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than the configured number of retain days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

    Restrictions: Must be an absolute path.

    retain_days

    Number of days to retain old files.

    Once it has been more than this many days since a file was last modified, it is a candidate for removal.

    Restrictions: Must be an integer ≥ 0.

    Extensions Configuration

    The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

    Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

    Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

    Warning

    Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.

    If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

    So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action.

    To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

    This is how the hypothetical action would be configured:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>99</index>
       </action>
    </extensions>
             

    The following elements are part of the extensions configuration section:

    action

    This is a subsection that contains configuration related to a single extended action.

    This section can be repeated as many times as is necessary.

    The action subsection contains the following fields:

    name

    Name of the extended action.

    Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

    module

    Name of the Python module associated with the extension function.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    function

    Name of the Python extension function within the module.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    index

    Index of action, for execution ordering.

    Restrictions: Must be an integer ≥ 0.


    History

    Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

    In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

    Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. [3] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code).

    Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) [4] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

    Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code.

    In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, [5] and updated the code to use the newly-released Python logging package [6] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code.

    So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. [7]

    The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.



    [4] Debian's stable releases are named after characters in the Toy Story movie.

    [5] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

    [7] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.


    The Backup Process

    The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control.

    This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

    A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge.

    In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.

    The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below.

    See Chapter 5, Configuration for more information on how a backup run is configured.

    The Collect Action

    The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).

    There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.

    Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file [9] or specify absolute paths or filename patterns [10] to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration.

    This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

    The Stage Action

    The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

    For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

    Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh.

    If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running.
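    The local/remote copy distinction and the per-peer error isolation might be sketched like this. The peer dictionary keys and the use of scp are assumptions for illustration only; Cedar Backup's real peer classes and staging code differ:

    ```python
    import logging
    import subprocess

    logger = logging.getLogger("CedarBackup2.log")

    def copy_command(peer):
        """Build the copy command for a peer: a plain copy for local peers,
        an RSH-compatible remote copy (scp here) for remote peers.
        The peer dict keys are hypothetical."""
        if peer["remote"]:
            source = "%s:%s" % (peer["host"], peer["collect_dir"])
            return ["scp", "-r", source, peer["staging_dir"]]
        return ["cp", "-r", peer["collect_dir"], peer["staging_dir"]]

    def stage_all(peers):
        """Stage each peer in turn; a failure aborts only that peer's backup."""
        for peer in peers:
            try:
                subprocess.check_call(copy_command(peer))
            except Exception:
                logger.error("Unable to stage peer %s; continuing.", peer["name"])
    ```
    
    
    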

    Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

    Note

    Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

    The Store Action

    The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

    If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.
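    The three conditions above can be condensed into a single predicate. The function name and weekday convention (0 as the configured starting day) are illustrative, not Cedar Backup's actual API:

    ```python
    def start_new_disc(day_of_week, supports_multisession, full_flag, start_day=0):
        """Return True if the disc should be rebuilt from scratch rather than
        appended to: first day of the week, no multisession support, or --full."""
        return bool(day_of_week == start_day
                    or not supports_multisession
                    or full_flag)
    ```
    
    
    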

    This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

    Warning

    The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    The Purge Action

    The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

    Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.
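    The purge behavior can be sketched as "remove files older than the retention period, then prune directories left empty." This is a simplified illustration, not Cedar Backup's own purge code:

    ```python
    import os
    import time

    def purge(directory, retain_days):
        """Remove files older than retain_days, then prune empty directories."""
        cutoff = time.time() - retain_days * 86400
        for root, dirs, files in os.walk(directory, topdown=False):
            for name in files:
                path = os.path.join(root, name)
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
            for name in dirs:
                path = os.path.join(root, name)
                if not os.listdir(path):  # prune directories left empty
                    os.rmdir(path)
    ```

    Walking bottom-up (topdown=False) ensures that a directory emptied by the file pass is seen, and pruned, by its parent's directory pass.
    
    
    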

    The All Action

    The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

    Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. [11]

    The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

    The Validate Action

    The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.

    The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

    The Initialize Action

    The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device.

    However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

    Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP).

    Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).
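    The media check described above amounts to a small predicate: the check passes for media carrying a Cedar Backup label, or for non-rewritable media with no label at all. A sketch (the function name is hypothetical):

    ```python
    def media_check_passes(label, rewritable):
        """Return True if the 'check media' option should accept this disc.
        label is the media label string, or None if the media has no label."""
        if label is not None and label.startswith("CEDAR BACKUP"):
            return True   # properly initialized media
        if not rewritable and label is None:
            return True   # apparently unused non-rewritable media
        return False
    ```
    
    
    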

    The Rebuild Action

    The rebuild action is an exception-handling action that is executed independently of a standard backup run. It cannot be combined with any other actions on the command line.

    The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, to replace lost or damaged media, or to switch to new media mid-week for some other reason.

    To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.
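    The date arithmetic involved can be sketched as follows. The Monday starting day is an assumption for illustration; Cedar Backup's starting day is configurable:

    ```python
    import datetime

    def start_of_week(today, starting_weekday=0):
        """Return the date of the most recent starting day of the week
        (Monday by default, following Python's weekday numbering)."""
        delta = (today.weekday() - starting_weekday) % 7
        return today - datetime.timedelta(days=delta)

    def rebuild_dates(today, starting_weekday=0):
        """Dates whose staging directories would be considered for a rebuild."""
        start = start_of_week(today, starting_weekday)
        days = (today - start).days
        return [start + datetime.timedelta(days=i) for i in range(days + 1)]
    ```

    Any staging directories found for these dates would be written to disc in one session.
    
    
    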

    The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.



    [9] Analogous to .cvsignore in CVS

    [10] In terms of Python regular expressions

    [11] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.


    Media and Device Types

    Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. [12]

    When using a new enough backup device, a new multisession ISO image [13] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).

    Cedar Backup currently supports four different kinds of CD media:

    cdr-74

    74-minute non-rewritable CD media

    cdrw-74

    74-minute rewritable CD media

    cdr-80

    80-minute non-rewritable CD media

    cdrw-80

    80-minute rewritable CD media

    I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

    Cedar Backup also supports two kinds of DVD media:

    dvd+r

    Single-layer non-rewritable DVD+R media

    dvd+rw

    Single-layer rewritable DVD+RW media

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.



    [12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

    [13] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.


    Installing on a Debian System

    The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

    If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian etch release is the first release to contain Cedar Backup 2.) Otherwise, you need to install from the Cedar Solutions APT data source. [15] To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file.

    After you have configured the proper APT data source, install Cedar Backup using this set of commands:

    $ apt-get update
    $ apt-get install cedar-backup2 cedar-backup2-doc
          

    Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

    If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source.

    In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Note

    The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.


    Conventions Used in This Book

    This section covers the various conventions used in this manual.

    Typographic Conventions

    Term

    Used for first use of important terms.

    Command

    Used for commands, command output, and switches

    Replaceable

    Used for replaceable items in code and text

    Filenames

    Used for file and directory names

    Icons

    Note

    This icon designates a note relating to the surrounding text.

    Tip

    This icon designates a helpful tip relating to the surrounding text.

    Warning

    This icon designates a warning relating to the surrounding text.


    MySQL Extension

    The MySQL Extension is a Cedar Backup extension used to back up MySQL [26] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Note

    This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

    Warning

    The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

    [mysqldump]
    user     = root
    password = <secret>
             

    Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

    As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

    [mysqldump]
    host = remote.host
             

    For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

    Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mysql</name>
          <module>CedarBackup2.extend.mysql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

    <mysql>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

    <mysql>
       <user>root</user>
       <password>password</password>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    The following elements are part of the MySQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user).

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    password

    Password associated with the database user.

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.
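    The constraints on the all and database elements can be expressed as a small validator. This is a sketch of the documented rules; Cedar Backup performs equivalent checks when parsing configuration:

    ```python
    def validate_mysql_config(all_flag, databases):
        """Check the documented constraints on <all> and <database> elements.
        all_flag is 'Y' or 'N'; databases is a list of configured names."""
        if all_flag not in ("Y", "N"):
            raise ValueError("<all> must be Y or N")
        if all_flag == "Y" and databases:
            raise ValueError("no <database> elements allowed when <all> is Y")
        if all_flag == "N" and not databases:
            raise ValueError("at least one <database> required when <all> is N")
    ```
    
    
    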


    Capacity Extension

    The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused.

    This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>capacity</name>
          <module>CedarBackup2.extend.capacity</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

    <capacity>
       <max_percentage>95.5</max_percentage>
    </capacity>
          

    This example configures the extension to warn if the media has fewer than 16 MB free:

    <capacity>
       <min_bytes>16 MB</min_bytes>
    </capacity>
          

    The following elements are part of the Capacity configuration section:

    max_percentage

    Maximum percentage of the media that may be utilized.

    You must provide either this value or the min_bytes value.

    Restrictions: Must be a floating point number between 0.0 and 100.0

    min_bytes

    Minimum number of free bytes that must be available.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    You must provide either this value or the max_percentage value.

    Restrictions: Must be a byte quantity as described above.
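    A parser for this byte-quantity format might look like the following sketch. The function name is hypothetical, and the binary (1024-based) unit multipliers are an assumption; Cedar Backup's own parsing lives in its configuration code:

    ```python
    import re

    UNITS = {"KB": 1024.0, "MB": 1024.0 ** 2, "GB": 1024.0 ** 3}

    def parse_byte_quantity(value):
        """Parse '10240', '250 MB' or '1.1 GB' into a number of bytes.
        A bare number is taken to be bytes; otherwise a KB/MB/GB suffix applies."""
        match = re.match(r"^\s*([0-9]*\.?[0-9]+)\s*(KB|MB|GB)?\s*$", value)
        if not match:
            raise ValueError("Invalid byte quantity: %r" % value)
        number, unit = match.groups()
        return float(number) * (UNITS[unit] if unit else 1.0)
    ```
    
    
    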


    The cback command

    Introduction

    Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.

    Syntax

    The cback command has the following syntax:

     Usage: cback [switches] action(s)
    
     The following switches are accepted:
    
       -h, --help         Display this usage/help listing
       -V, --version      Display version information
       -b, --verbose      Print verbose output as well as logging to disk
       -q, --quiet        Run quietly (display no output to the screen)
       -c, --config       Path to config file (default: /etc/cback.conf)
       -f, --full         Perform a full backup, regardless of configuration
       -M, --managed      Include managed clients when executing actions
       -N, --managed-only Include ONLY managed clients when executing actions
       -l, --logfile      Path to logfile (default: /var/log/cback.log)
       -o, --owner        Logfile ownership, user:group (default: root:adm)
       -m, --mode         Octal logfile permissions mode (default: 640)
       -O, --output       Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug        Write debugging information to the log (implies --output)
       -s, --stack        Dump a Python stack trace instead of swallowing exceptions
       -D, --diagnostics  Print runtime diagnostics to the screen and exit
    
     The following actions may be specified:
    
       all                Take all normal actions (collect, stage, store, purge)
       collect            Take the collect action
       stage              Take the stage action
       store              Take the store action
       purge              Take the purge action
       rebuild            Rebuild "this week's" disc if possible
       validate           Validate configuration only
       initialize         Initialize media for use with Cedar Backup
    
     You may also specify extended actions that have been defined in
     configuration.
    
     You must specify at least one action to take.  More than one of
     the "collect", "stage", "store" or "purge" actions and/or
     extended actions may be specified in any arbitrary order; they
     will be executed in a sensible order.  The "all", "rebuild",
     "validate", and "initialize" actions may not be combined with
     other actions.
             

    Note that the all action only executes the standard four actions. It never executes any of the configured extensions. [18]

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -f, --full

    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

    -M, --managed

    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

    -N, --managed-only

    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client — but do not execute the action locally.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile file is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    Actions

    You can find more information about the various actions in the section called “The Backup Process” (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

    If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.



    [18] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.


    Cedar Backup Pools

    There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.


    Optimized Blanking Strategy

    When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period.

    Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

    If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.

    This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

    There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration, otherwise you will risk losing data.

    If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

    If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

    bytes available / (1 + bytes required) ≤ blanking factor
          

    Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

    Total size of weekly backup / Full backup size at the start of the week
          

    This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

    /opt/backup/staging# du -s 2007/03/*
    3040    2007/03/01
    3044    2007/03/02
    6812    2007/03/03
    3044    2007/03/04
    3152    2007/03/05
    3056    2007/03/06
    3060    2007/03/07
    3056    2007/03/08
    4776    2007/03/09
    6812    2007/03/10
    11824   2007/03/11
          

    In this case, the ratio is approximately 4:

    (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571
          

    To be safe, you might choose to configure a factor of 5.0.
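    Both the blanking relationship and the factor estimate above can be written out as small functions. The function names are illustrative; the arithmetic follows the formulas and the worked example in this section:

    ```python
    def should_blank(bytes_available, bytes_required, blanking_factor):
        """Apply the documented relationship: blank the disc when
        bytes available / (1 + bytes required) <= blanking factor."""
        return bytes_available / (1.0 + bytes_required) <= blanking_factor

    def estimate_blanking_factor(full_size, incremental_sizes):
        """Estimate a weekly blanking factor: total weekly backup size
        divided by the full backup size at the start of the week."""
        return (full_size + sum(incremental_sizes)) / float(full_size)
    ```

    Feeding in the du output above (6812 as the full backup, the rest as incrementals) reproduces the ratio of roughly 4.
    
    
    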

    Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary.

    If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.


    Appendix A. Extension Architecture Interface

    The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

    You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

    There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>101</index>
       </action> 
    </extensions>
          

    In this case, the action database has been mapped to the extension function foo.bar().

    Extension functions may take any actions they like once they have been invoked, but they must abide by these rules:

    1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write.

    2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

    3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

    4. Extensions may not return any value.

    5. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

    6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

    7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

    8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

    Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

    def function(configPath, options, config):
       """Sample extension function."""
       pass
          

    This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.
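    Pulling these rules together, a bare-bones extension module might look like the sketch below. The logger topic follows rule 2; the function name and failure check are hypothetical, and real command execution would go through CedarBackup2.util.executeCommand per rule 3 (omitted here so the sketch stands alone):

```python
import logging

# Rule 2: flow-of-control logging happens on the CedarBackup2.log topic.
logger = logging.getLogger("CedarBackup2.log.extend.database")

def bar(configPath, options, config):
    """Hypothetical extension function mapped to the 'database' action."""
    logger.info("Executing the database extended action.")
    if config is None:
        # Rules 4 and 5: return nothing; signal failure with an exception.
        raise ValueError("Standard Cedar Backup configuration is required.")
```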

    The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3).

    If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions.

    For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

    <database>
       <repository>/path/to/repo1</repository>
       <repository>/path/to/repo2</repository>
    </database>
          

    In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.


    Mbox Extension

    The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

    What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space.
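    The mechanism boils down to a grepmail command built around a date restriction. This is an illustrative helper, not the extension's actual code; the option usage is an assumption based on grepmail's documented -d flag:

```python
def buildGrepmailCommand(mboxPath, sinceDate):
    """Build a grepmail command selecting messages received since a date.

    Hypothetical helper: grepmail -d "since <date>" restricts output to
    recent messages, which is what keeps the incremental backup small.
    """
    return ["grepmail", "-d", "since %s" % sinceDate, mboxPath]

command = buildGrepmailCommand("/home/user2/mail/inbox", "02 Jan 2016")
```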

    Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mbox</name>
          <module>CedarBackup2.extend.mbox</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

    <mbox>
       <collect_mode>incr</collect_mode>
       <compress_mode>gzip</compress_mode>
       <file>
          <abs_path>/home/user1/mail/greylist</abs_path>
          <collect_mode>daily</collect_mode>
       </file>
       <dir>
          <abs_path>/home/user2/mail</abs_path>
       </dir>
       <dir>
          <abs_path>/home/user3/mail</abs_path>
          <exclude>
             <rel_path>spam</rel_path>
             <pattern>.*debian.*</pattern>
          </exclude>
       </dir>
    </mbox>
          

    Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

    Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns.

    The following elements are part of the mbox configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    file

    An individual mbox file to be collected.

    This is a subsection which contains information about an individual mbox file to be backed up.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The file subsection contains the following fields:

    collect_mode

    Collect mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox file to back up.

    Restrictions: Must be an absolute path.

    dir

    An mbox directory to be collected.

    This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively: only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The dir subsection contains the following fields:

    collect_mode

    Collect mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
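    The implicit anchoring means a pattern must match the entire relative path, not just a substring. A quick illustration using Python's re module (the paths are hypothetical):

```python
import re

def isExcluded(relPath, pattern):
    """Return True if the relative path matches the exclusion pattern.

    As described above, the pattern is bounded at front and back, so it
    must match the whole path rather than any substring of it.
    """
    return re.match("^" + pattern + "$", relPath) is not None
```

    With this behavior, .*debian.* excludes any folder whose name contains "debian", while the bare pattern debian excludes only a folder named exactly "debian".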


    Organization of This Manual

    Chapter 1, Introduction

    Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3.

    Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

    Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

    Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the primary cback command.

    Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

    Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

    Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

    Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

    Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

    Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.


    Appendix D. Securing Password-less SSH Connections

    Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

    Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

    Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections.

    With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

    Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups.

    So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

    command="command"
       Specifies that the command is executed whenever this key is used for
       authentication.  The command supplied by the user (if any) is ignored.  The
       command is run on a pty if the client requests a pty; otherwise it is run
       without a tty.  If an 8-bit clean channel is required, one must not request
       a pty or should specify no-pty.  A quote may be included in the command by
       quoting it with a backslash.  This option might be useful to restrict
       certain public keys to perform just a specific operation.  An example might
       be a key that permits remote backups but nothing else.  Note that the client
       may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
       Note that this option applies to shell, command or subsystem execution.
          

    Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

    So, let's imagine that we have two hosts: master mickey, and peer minnie. Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
    =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
    1-2341=-a0sd=-sa0=1z= backup@mickey
          

    This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

    To put the filter in place, we add a command option to the key, like this:

    command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
    3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
    tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
          

    Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

    A very basic validate-backup script might look something like this:

    #!/bin/bash
    if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
       ${SSH_ORIGINAL_COMMAND}
    else
       echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
       exit 1
    fi
    fi
          

    This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

    For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

    If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

    Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
    OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
    debug1: Reading configuration data /home/backup/.ssh/config
    debug1: Applying options for daystrom
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
          

    Omit the -v and you have your command: scp -f .profile.

    For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

    scp -f /path/to/collect/cback.collect
    scp -f /path/to/collect/*
    scp -t /path/to/collect/cback.stage
          

    If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

    /usr/bin/cback --full collect
    /usr/bin/cback collect
          

    Of course, you would have to list the actual path to the cback executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

    I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.


    Appendix B. Dependencies

    Python 2.7

    Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it.

    If you can't find a package for your system, install from the package source, using the upstream link.

    RSH Server and Client

    Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client.

    The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

    If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

    mkisofs

    The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

    On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    cdrecord

    The cdrecord command is used to write ISO images to CD media in a backup device.

    On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    dvd+rw-tools

    The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    eject and volname

    The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc.

    The volname command is used to determine the volume name of media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    mount and umount

    The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

    If you can't find a package for your system, install from the package source, using the upstream link.

    grepmail

    The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

    If you can't find a package for your system, install from the package source, using the upstream link.

    gpg

    The gpg command is used by the encrypt extension to encrypt files.

    If you can't find a package for your system, install from the package source, using the upstream link.

    split

    The split command is used by the split extension to split up large files.

    This command is typically part of the core operating system install and is not distributed in a separate package.

    AWS CLI

    AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage.

    After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide.

    The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version. On these platforms, the easiest way to install it is via pip: apt-get install python-pip, and then pip install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

    Chardet

    The cback-amazons3-sync command relies on the Chardet python package to check filename encoding. You only need this package if you are going to use the sync tool.


    Setting up a Master Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.
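    For illustration only (the time shown is arbitrary, and every site's schedule differs), a system crontab entry on the master might run the whole sequence nightly. This sketch assumes the default /usr/bin/cback path and uses the built-in all action:

```
# /etc/cron.d/cedar-backup: run collect, stage, store and purge at 00:30.
30 0 * * * root /usr/bin/cback all
```

    Remember the warning above: whatever schedule you choose must include the first day of your configured week.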

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).
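    These permission fixes can be scripted. The sketch below runs against a scratch copy of the directory layout so it is safe to execute anywhere; in practice you would point SSH_DIR at the backup user's real ~/.ssh. Note that stat -c is GNU-specific.

    ```shell
    # Sketch: tighten SSH file permissions as described above.
    # A scratch directory stands in for the real ~/.ssh here.
    SSH_DIR="$(mktemp -d)/.ssh"
    mkdir -p "$SSH_DIR"
    touch "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub"

    chmod 700 "$SSH_DIR"             # directory readable only by the backup user
    chmod 600 "$SSH_DIR/id_rsa"      # private key: owner read/write only
    chmod 644 "$SSH_DIR/id_rsa.pub"  # public key: writable only by the owner

    stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub"
    ```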

    If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
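    As a concrete sketch, the tree and permissions can be created with a few commands. A scratch root is used below so the commands are safe to run as-is; in practice you would use /opt/backup (or your chosen root) and run the chown as root. The "backup" user name is an example.

    ```shell
    # Sketch: create the recommended backup tree with mode 700.
    # BACKUP_ROOT is a scratch directory here; use /opt/backup in practice.
    BACKUP_ROOT="$(mktemp -d)/backup"
    mkdir -p "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
    chmod 700 "$BACKUP_ROOT" "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
    # chown -R backup:backup "$BACKUP_ROOT"   # as root, once the backup user exists
    ```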

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

    Note

    Note that the master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

    Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to use your master machine purely as a consolidation point that collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.
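    Independent of cback validate, a plain well-formedness check will catch unclosed tags quickly. The sketch below uses Python's standard XML parser against a tiny stand-in file; in practice you would point CONF at /etc/cback.conf, and the minimal <cb_config> content shown is only an example.

    ```shell
    # Sketch: check a configuration file for XML well-formedness.
    # CONF is a stand-in file; use /etc/cback.conf in practice.
    CONF="$(mktemp)"
    printf '<cb_config>\n   <options/>\n</cb_config>\n' > "$CONF"
    python3 -c 'import sys, xml.dom.minidom
    xml.dom.minidom.parse(sys.argv[1])
    print("well-formed")' "$CONF"
    ```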

    Step 8: Test connectivity to client machines.

    This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.

    Log in as the backup user on the master, and then use the command ssh user@machine, where user is the name of the backup user on the client machine and machine is the name of the client machine.

    If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

    Step 9: Test your backup.

    Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.)

    When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read.

    You may also want to run cback purge on the master and each client once you have finished validating that everything worked.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.

    Step 10: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 02 * * * root  cback stage
    30 04 * * * root  cback store
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]
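    For example, with the master schedule shown above (staging at 02:30), a client's crontab might collect well before staging begins and purge well after it completes. The times below are only illustrative:

    ```
    # client /etc/crontab (illustrative times)
    30 00 * * * root  cback collect
    30 06 * * * root  cback purge
    ```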

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.


    Split Extension

    The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.

    You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span.

    The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits a file at fixed byte offsets. It has no knowledge of file formats.

    Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback-span might put an individual file on any disc in a set — the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.
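    The behavior is easy to demonstrate with split itself. In this sketch the chunk names and sizes are illustrative (not the naming Cedar Backup uses); it shows that concatenating every chunk, in order, reproduces the original file exactly:

    ```shell
    # Sketch: split a file into fixed-size chunks and reassemble it.
    WORK="$(mktemp -d)"
    head -c 250000 /dev/urandom > "$WORK/file.tar.gz"        # stand-in backup file
    split -b 100000 "$WORK/file.tar.gz" "$WORK/file.tar.gz_" # 100000-byte chunks
    ls "$WORK" | grep 'file.tar.gz_'                         # three chunks: _aa, _ab, _ac
    cat "$WORK"/file.tar.gz_* > "$WORK/restored.tar.gz"      # needs *every* chunk
    cmp -s "$WORK/file.tar.gz" "$WORK/restored.tar.gz" && echo "files match"
    ```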

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions> 
       <action>
          <name>split</name>
          <module>CedarBackup2.extend.split</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

    <split>
       <size_limit>250 MB</size_limit>
       <split_size>100 MB</split_size>
    </split>
          

    The following elements are part of the Split configuration section:

    size_limit

    Size limit.

    Files with a size strictly larger than this limit will be split by the extension.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.

    split_size

    Split size.

    This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.
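    As a sketch of the arithmetic behind these quantities, here is how the unit suffixes convert to bytes. This assumes binary units (1 KB = 1024 bytes); whether Cedar Backup interprets the units exactly this way is an assumption of this sketch.

    ```shell
    # Sketch: converting "number + unit" quantities to bytes (binary units assumed).
    KB=1024; MB=$((KB * 1024)); GB=$((MB * 1024))
    echo $((250 * MB))                              # "250 MB" -> 262144000 bytes
    awk 'BEGIN { printf "%.0f\n", 1.1 * 1024^3 }'   # "1.1 GB": fractional values need awk
    ```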


    Encrypt Extension

    The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

    There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

    Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

    Warning

    If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless.

    I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

    Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)

    An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

    Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>encrypt</name>
          <module>CedarBackup2.extend.encrypt</module>
          <function>executeAction</function>
          <index>301</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

    <encrypt>
       <encrypt_mode>gpg</encrypt_mode>
       <encrypt_target>Backup User</encrypt_target>
    </encrypt>
          

    The following elements are part of the Encrypt configuration section:

    encrypt_mode

    Encryption mode.

    This value specifies which encryption mechanism will be used by the extension.

    Currently, only the GPG public-key encryption mechanism is supported.

    Restrictions: Must be gpg.

    encrypt_target

    Encryption target.

    The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.


    Managed Backups

    Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available.

    When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

    To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.

    Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.

    However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.


    Chapter 5. Configuration

    Table of Contents

    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

    Overview

    Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.

    First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

    Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called “The cback command” (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called “Configuration File Format” (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location.

    After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.


    Chapter 1. Introduction

    Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.— Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

    What is Cedar Backup?

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 2 programming language.

    There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time.

    Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 2, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

    To run a Cedar Backup client, you really just need a working Python 2 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in the section called “Installing Dependencies”.


    Amazon S3 Extension

    The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action.

    The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. So, make sure you configure the AWS CLI tools as the backup user and not root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.)

    When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly-large backup that the administrator is not aware of. Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (i.e. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size.

    You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user.

    For instance, you can use something like this with GPG:

    /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
          

    The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:

    dd if=/dev/urandom count=20 bs=1 | xxd -ps
          

    (See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>amazons3</name>
          <module>CedarBackup2.extend.amazons3</module>
          <function>executeAction</function>
          <index>201</index> <!-- just after stage -->
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section. This is an example configuration section with encryption disabled:

    <amazons3>
          <s3_bucket>example.com-backup/staging</s3_bucket>
    </amazons3>
          

    The following elements are part of the Amazon S3 configuration section:

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day.

    Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    s3_bucket

    The name of the Amazon S3 bucket that data will be written to.

    This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging.

    Restrictions: Must be non-empty.

    encrypt

    Command used to encrypt backup data before upload to S3

    If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user.

    Restrictions: If provided, must be non-empty.

    full_size_limit

    Maximum size of a full backup

    If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.

    incr_size_limit

    Maximum size of an incremental backup

    If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.
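Both size limits accept the same two forms. As a rough illustration of how such a value might be interpreted, here is a simplified, hypothetical parser — this is not Cedar Backup's actual ByteQuantity implementation, and the function name and binary (1024-based) units are assumptions for the sketch:

```python
# Hypothetical parser for size-limit values like those described above.
# Cedar Backup's real ByteQuantity class handles this in its own way.
UNITS = {"KB": 1024.0, "MB": 1024.0 ** 2, "GB": 1024.0 ** 3}

def parse_size_limit(value):
    """Parse '10240' (bytes), '250 MB' or '1.1 GB' into a byte count."""
    text = value.strip().upper()
    for unit, factor in UNITS.items():
        if text.endswith(unit):
            return float(text[:-len(unit)].strip()) * factor
    return float(text)  # bare number: assumed to be bytes
```

For example, "250 MB" parses to 250 × 1024² bytes under these assumed binary units.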


    Installing from Source

    On platforms other than Debian, Cedar Backup is installed from a Python source distribution. [16] You will have to manage dependencies on your own.

    Tip

    Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

    Installing Dependencies

    Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

    Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it. You must install Python 2 on every peer node in a pool (master or client).

    Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.

    Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

    • mkisofs

    • eject

    • mount

    • umount

    • volname

    Then, you need this utility if you are writing CD media:

    • cdrecord

    or these utilities if you are writing DVD media:

    • growisofs

    All of these utilities are common and are easy to find for almost any UNIX-like operating system.
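If you want to verify up front that these utilities are available, a quick PATH scan is one way to do it. The sketch below is only an illustrative pre-flight check (Cedar Backup locates commands through its own configuration), written to work on both Python 2 and 3:

```python
import os

# Store-action utilities discussed above; adjust the list for CD vs. DVD media.
STORE_UTILITIES = ["mkisofs", "eject", "mount", "umount", "volname"]

def missing_utilities(names):
    """Return the subset of names that cannot be found on the PATH."""
    directories = os.environ.get("PATH", "").split(os.pathsep)
    missing = []
    for name in names:
        candidates = (os.path.join(d, name) for d in directories)
        if not any(os.path.isfile(c) and os.access(c, os.X_OK) for c in candidates):
            missing.append(name)
    return missing
```

Running missing_utilities(STORE_UTILITIES) before configuring the store action tells you immediately which tools still need to be installed.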

    Installing the Source Package

    Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

    Once you have downloaded the source package from the Cedar Solutions website, [15] untar it:

    $ zcat CedarBackup2-2.0.0.tar.gz | tar xvf -
             

    This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename.

    If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

    $ cd CedarBackup2-2.0.0
    $ python setup.py install
             

    Make sure that you are using Python 2.7 or better to execute setup.py.

    You may also wish to run the unit tests before actually installing anything. Run them like so:

    python util/test.py
             

    If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. [17] This is particularly important for non-Linux platforms where I do not have a test system available to me.

    Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

    $ python setup.py --help
    $ python setup.py install --help
             

    In any case, once the package has been installed, you can proceed to configuration as described in Chapter5, Configuration.


    Extensions

    Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step.

    Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

    Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured.

    Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action.
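For a concrete sense of the shape of such a function, consider the skeleton below. This is illustrative only — the three-argument signature is an assumption based on the manual's description, and the authoritative interface is described in Appendix A, Extension Architecture Interface:

```python
# Illustrative skeleton of an extension action function.  The signature shown
# here is an assumption for the sketch; see Appendix A for the real interface.
def executeAction(configPath, options, config):
    """Example extension action: a real extension would read its own section
    of the Cedar Backup configuration from config and perform its backup,
    raising an exception on failure."""
    pass  # no-op placeholder
```

Once registered in configuration under an action name, a function like this would be invoked from the cback command line like any built-in action.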

    Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

    Note

    Users should see Chapter5, Configuration for more information on how extensions are configured, and Chapter6, Official Extensions for details on all of the officially-supported extensions.

    Developers may be interested in AppendixA, Extension Architecture Interface.


    Recovering Filesystem Data

    Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar), represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
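The naming convention amounts to a simple path transformation. The helper below is hypothetical, written only to illustrate the convention — it is not code from Cedar Backup itself:

```python
def tarfile_name(path, extension="tar"):
    """Map a backed-up directory path to its collect-file name, following
    the convention described above: slashes become dashes, and the root
    directory becomes '-'."""
    normalized = path.strip("/")
    if not normalized:
        return "-." + extension  # special case for the root directory
    return normalized.replace("/", "-") + "." + extension
```

So /var/lib/jspwiki maps to var-lib-jspwiki.tar, with .bz2 or .gz appended if compression is configured.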

    If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

    Full Restore

    To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

    All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location.
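Because the archive paths are relative, Python's standard tarfile module can also be used if you prefer scripting a restore over running tar directly. This is a minimal sketch; the "r:*" mode transparently handles gzip and bzip2 compression:

```python
import tarfile

def restore_archive(archive_path, target_dir):
    """Extract a relative-path backup archive into target_dir.  Extracting
    into '/' restores files to their original locations; extracting into a
    temporary directory keeps them out of the live filesystem."""
    with tarfile.open(archive_path, "r:*") as archive:
        archive.extractall(path=target_dir)
```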

    For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

    root:/# bzcat boot.tar.bz2 | tar xvf -
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

    root:/tmp# bzcat boot.tar.bz2 | tar xvf -
             

    Again, use zcat or just cat as appropriate.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Partial Restore

    Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

    The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Where with a full restore, you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup.

    Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

    Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

    root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    The tvf tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternatively, you can omit the path/to/file and page through the full output using more or less.

    If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.

    Once you have found your file, extract it using xvf:

    root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
             

    Again, use zcat or just cat as appropriate.

    Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.


    How to Get Support

    Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. That said, someone can usually help you solve whatever problems you might see.

    If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. [1] When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it.

    If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write to the support address. That mail will go directly to me. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

    Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. [2]

    In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

    Tip

    Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well.


    The cback-span command

    Introduction

    Cedar Backup was designed — and is still primarily focused — around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.

    However, some users have expressed a need to write these large kinds of backups to disc — if not every day, then at least occasionally. The cback-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs.

    cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

    cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

    In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be distributed between the discs arbitrarily, so that space is utilized as efficiently as possible.

    Syntax

    The cback-span command has the following syntax:

     Usage: cback-span [switches]
    
     Cedar Backup 'span' tool.
    
     This Cedar Backup utility spans staged data between multiple discs.
     It is a utility, not an extension, and requires user interaction.
    
     The following switches are accepted, mostly to set up underlying
     Cedar Backup functionality:
    
       -h, --help     Display this usage/help listing
       -V, --version  Display version information
       -b, --verbose  Print verbose output as well as logging to disk
       -c, --config   Path to config file (default: /etc/cback.conf)
       -l, --logfile  Path to logfile (default: /var/log/cback.log)
       -o, --owner    Logfile ownership, user:group (default: root:adm)
       -m, --mode     Octal logfile permissions mode (default: 640)
       -O, --output   Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug    Write debugging information to the log (implies --output)
       -s, --stack    Dump a Python stack trace instead of swallowing exceptions
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    Using cback-span

    As discussed above, cback-span is an interactive command. It cannot be run from cron.

    You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.

    The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; it is usually more like 627 MB. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.
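The cushion arithmetic amounts to scaling the media capacity down by the cushion percentage. The formula below is illustrative only — cback-span derives the actual media capacity from your configured media type, and its own computation may account for additional overhead:

```python
def effective_capacity(capacity_mb, cushion_percent):
    """Capacity available after reserving cushion_percent for filesystem
    overhead: a 1.5% cushion on a 650 MB disc leaves 98.5% remaining."""
    return capacity_mb * (1.0 - cushion_percent / 100.0)
```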

    The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

    The four available fit algorithms are:

    worst

    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    best

    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    first

    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    alternate

    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.
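To make the family of algorithms concrete, here is a sketch of the best-fit selection in Python. This is illustrative only — Cedar Backup's real implementations live in its own code and track capacity differently:

```python
def best_fit(items, capacity):
    """Best-fit sketch: walk (name, size) items from largest to smallest,
    skipping any item that would push the running total past capacity."""
    chosen, used = [], 0
    for name, size in sorted(items, key=lambda item: item[1], reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen, used
```

The other algorithms vary mainly in sort order and traversal: worst-fit walks a smallest-to-largest list, first-fit takes the list unsorted, and alternate-fit works inward from both ends of a sorted list.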

    Sample run

    Below is a log showing a sample cback-span run.

    ================================================
               Cedar Backup 'span' tool
    ================================================
    
    This the Cedar Backup span tool.  It is used to split up staging
    data when that staging data does not fit onto a single disc.
    
    This utility operates using Cedar Backup configuration.  Configuration
    specifies which staging directory to look at and which writer device
    and media type to use.
    
    Continue? [Y/n]: 
    ===
    
    Cedar Backup store configuration looks like this:
    
       Source Directory...: /tmp/staging
       Media Type.........: cdrw-74
       Device Type........: cdwriter
       Device Path........: /dev/cdrom
       Device SCSI ID.....: None
       Drive Speed........: None
       Check Data Flag....: True
       No Eject Flag......: False
    
    Is this OK? [Y/n]: 
    ===
    
    Please wait, indexing the source directory (this may take a while)...
    ===
    
    The following daily staging directories have not yet been written to disc:
    
       /tmp/staging/2007/02/07
       /tmp/staging/2007/02/08
       /tmp/staging/2007/02/09
       /tmp/staging/2007/02/10
       /tmp/staging/2007/02/11
       /tmp/staging/2007/02/12
       /tmp/staging/2007/02/13
       /tmp/staging/2007/02/14
    
    The total size of the data in these directories is 1.00 GB.
    
    Continue? [Y/n]: 
    ===
    
    Based on configuration, the capacity of your media is 650.00 MB.
    
    Since estimates are not perfect and there is some uncertainly in
    media capacity calculations, it is good to have a "cushion",
    a percentage of capacity to set aside.  The cushion reduces the
    capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
    
    What cushion percentage? [4.00]: 
    ===
    
    The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
    It will take at least 2 disc(s) to store your 1.00 GB of data.
    
    Continue? [Y/n]: 
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: 
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "worst-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 246 files, 615.97 MB, 98.20% utilization
    Disc 2: 8 files, 412.96 MB, 65.84% utilization
    
    Accept this solution? [Y/n]: n
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: alternate
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "alternate-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 73 files, 627.25 MB, 100.00% utilization
    Disc 2: 181 files, 401.68 MB, 64.04% utilization
    
    Accept this solution? [Y/n]: y
    ===
    
    Please place the first disc in your backup device.
    Press return when ready.
    ===
    
    Initializing image...
    Writing image to disc...
             

    Acknowledgments

    The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.


    Audience

    This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.


    Cedar Backup 2 Software Manual

    Kenneth J. Pronovici

    This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation.

    For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work.

    This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA


    Table of Contents

    Preface
    Purpose
    Audience
    Conventions Used in This Book
    Typographic Conventions
    Icons
    Organization of This Manual
    Acknowledgments
    1. Introduction
    What is Cedar Backup?
    Migrating from Version 2 to Version 3
    How to Get Support
    History
    2. Basic Concepts
    General Architecture
    Data Recovery
    Cedar Backup Pools
    The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
    Coordination between Master and Clients
    Managed Backups
    Media and Device Types
    Incremental Backups
    Extensions
    3. Installation
    Background
    Installing on a Debian System
    Installing from Source
    Installing Dependencies
    Installing the Source Package
    4. Command Line Tools
    Overview
    The cback command
    Introduction
    Syntax
    Switches
    Actions
    The cback-amazons3-sync command
    Introduction
    Syntax
    Switches
    The cback-span command
    Introduction
    Syntax
    Switches
    Using cback-span
    Sample run
    5. Configuration
    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy
    6. Official Extensions
    System Information Extension
    Amazon S3 Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
    A. Extension Architecture Interface
    B. Dependencies
    C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
    Full Restore
    Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
    D. Securing Password-less SSH Connections
    E. Copyright

    The cback-amazons3-sync command

    Introduction

    The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions.

    The underlying functionality relies on the AWS CLI toolset. Before you use this tool, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback-amazons3-sync command, so make sure you configure it as the proper user. (This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)
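    For reference, AWS CLI reads its credentials from files under the executing user's home directory. The fragment below is only an illustration of where that configuration lives; the key values are placeholders, and Amazon's own setup guide is the authoritative reference:

    ```ini
    # ~/.aws/credentials (values are placeholders)
    [default]
    aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
    aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # ~/.aws/config
    [default]
    region = us-east-1
    output = json
    ```

    Because cback-amazons3-sync runs aws as the invoking user, these files must exist for that user, not just for root.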

    Syntax

    The cback-amazons3-sync command has the following syntax:

     Usage: cback-amazons3-sync [switches] sourceDir s3BucketUrl
    
     Cedar Backup Amazon S3 sync tool.
    
     This Cedar Backup utility synchronizes a local directory to an Amazon S3
     bucket.  After the sync is complete, a validation step is taken.  An
     error is reported if the contents of the bucket do not match the
     source directory, or if the indicated size for any file differs.
     This tool is a wrapper over the AWS CLI command-line tool.
    
     The following arguments are required:
    
       sourceDir            The local source directory on disk (must exist)
       s3BucketUrl          The URL to the target Amazon S3 bucket
    
     The following switches are accepted:
    
       -h, --help           Display this usage/help listing
       -V, --version        Display version information
       -b, --verbose        Print verbose output as well as logging to disk
       -q, --quiet          Run quietly (display no output to the screen)
       -l, --logfile        Path to logfile (default: /var/log/cback.log)
       -o, --owner          Logfile ownership, user:group (default: root:adm)
       -m, --mode           Octal logfile permissions mode (default: 640)
       -O, --output         Record some sub-command (i.e. aws) output to the log
       -d, --debug          Write debugging information to the log (implies --output)
       -s, --stack          Dump Python stack trace instead of swallowing exceptions
       -D, --diagnostics    Print runtime diagnostics to the screen and exit
       -v, --verifyOnly     Only verify the S3 bucket contents, do not make changes
       -w, --ignoreWarnings Ignore warnings about problematic filename encodings
    
     Typical usage would be something like:
    
       cback-amazons3-sync /home/myuser s3://example.com-backup/myuser
    
     This will sync the contents of /home/myuser into the indicated bucket.
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    -v, --verifyOnly

    Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date.

    Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

    -w, --ignoreWarnings

    The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, such filenames will cause the sync process to abort without transferring all of the expected files.

    To avoid confusion, the cback-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

    If problematic files are found, then you have basically two options: either correct your locale (i.e. if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.
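    The essence of this pre-flight check can be sketched in a few lines of Python. This is an illustrative approximation of the idea, not the tool's actual implementation; the function name and interface are hypothetical:

    ```python
    import os

    def find_unencodable(source_dir, encoding):
        """Return paths under source_dir whose names cannot be represented
        in the given encoding.  Illustrative sketch only; the real tool's
        detection logic may differ."""
        problems = []
        for root, dirs, files in os.walk(source_dir):
            for name in dirs + files:
                path = os.path.join(root, name)
                try:
                    path.encode(encoding)   # succeeds only if representable
                except UnicodeEncodeError:
                    problems.append(path)
        return problems
    ```

    A file with an accented name under a LANG=C locale, for instance, would show up in the returned list and block the sync unless --ignoreWarnings is used.
    
    
    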


    Recovering Mailbox Data

    Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.

    Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

    There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.
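    The naming convention amounts to flattening the path and prefixing a date. The helper below is hypothetical, shown only to make the documented mapping concrete; it is not part of Cedar Backup:

    ```python
    def mboxBackupName(path, date, extension=""):
        """Derive the mbox backup file name for a path backed up on a given
        date (YYYYMMDD).  Hypothetical helper illustrating the naming
        convention described above."""
        flattened = path.strip("/").replace("/", "-")
        return "mbox-%s-%s%s" % (date, flattened, extension)
    ```

    For example, mboxBackupName("/home/user/mail/greylist", "20060624") yields "mbox-20060624-home-user-mail-greylist", matching the example above.
    
    
    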

    Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any).

    Here is an example for a single backed-up file:

    root:/tmp# rm restore.mbox # make sure it's not left over
    root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
    root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
          

    At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist.

    Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.
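    For gzip-compressed backups, the same restore uses zcat in place of cat. The message contents below are fabricated just to make the example self-contained:

    ```shell
    # Simulate two days of gzip-compressed backups of a single mbox file
    printf 'From alice@example.com Sat Jun 24 2006\nmessage one\n' > mbox-20060624-home-user-mail-greylist
    printf 'From bob@example.com Sun Jun 25 2006\nmessage two\n' > mbox-20060625-home-user-mail-greylist
    gzip -f mbox-20060624-home-user-mail-greylist mbox-20060625-home-user-mail-greylist

    # Concatenate with zcat instead of cat (grepmail dedup step omitted here)
    rm -f restore.mbox
    zcat mbox-20060624-home-user-mail-greylist.gz >> restore.mbox
    zcat mbox-20060625-home-user-mail-greylist.gz >> restore.mbox
    grep -c '^From ' restore.mbox   # prints 2
    ```

    For bzip2-compressed backups, substitute bzcat for zcat.
    
    
    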

    If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.


    PostgreSQL Extension

    The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL [27] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.
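    As an illustration, a pg_hba.conf entry along these lines grants passwordless local access; the appropriate authentication method (trust, peer, ident, etc.) depends on your security requirements, so treat this as a starting point and consult the PostgreSQL documentation:

    ```
    # TYPE  DATABASE  USER      METHOD
    local   all       username  trust
    ```

    Remember to reload PostgreSQL after changing this file so the new rules take effect.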

    This extension always produces a full backup. There is currently no facility for making incremental backups.

    Warning

    Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>postgresql</name>
          <module>CedarBackup2.extend.postgresql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>Y</all>
    </postgresql>
          

    If you decide to back up specific databases, then you would list them individually, like this:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>N</all>
       <database>db1</database>
       <database>db2</database>
    </postgresql>
          

    The following elements are part of the PostgreSQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user.

    This value is optional.

    Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.


    Chapter 2. Basic Concepts

    General Architecture

    Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.

    The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid[8] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

    The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

    Warning

    You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.


    Cedar Backup 2 Software Manual

    Kenneth J. Pronovici

    This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation.

    For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work.

    This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.


    Table of Contents

    Preface
    Purpose
    Audience
    Conventions Used in This Book
    Typographic Conventions
    Icons
    Organization of This Manual
    Acknowledgments
    1. Introduction
    What is Cedar Backup?
    Migrating from Version 2 to Version 3
    How to Get Support
    History
    2. Basic Concepts
    General Architecture
    Data Recovery
    Cedar Backup Pools
    The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
    Coordination between Master and Clients
    Managed Backups
    Media and Device Types
    Incremental Backups
    Extensions
    3. Installation
    Background
    Installing on a Debian System
    Installing from Source
    Installing Dependencies
    Installing the Source Package
    4. Command Line Tools
    Overview
    The cback command
    Introduction
    Syntax
    Switches
    Actions
    The cback-amazons3-sync command
    Introduction
    Syntax
    Switches
    The cback-span command
    Introduction
    Syntax
    Switches
    Using cback-span
    Sample run
    5. Configuration
    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy
    6. Official Extensions
    System Information Extension
    Amazon S3 Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
    A. Extension Architecture Interface
    B. Dependencies
    C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
    Full Restore
    Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
    D. Securing Password-less SSH Connections
    E. Copyright

    Preface

    Purpose

    This software manual has been written to document version 2 of Cedar Backup, originally released in early 2005.

    Audience

    This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.

    Conventions Used in This Book

    This section covers the various conventions used in this manual.

    Typographic Conventions

    Term

    Used for first use of important terms.

    Command

    Used for commands, command output, and switches

    Replaceable

    Used for replaceable items in code and text

    Filenames

    Used for file and directory names

    Icons

    Note

    This icon designates a note relating to the surrounding text.

    Tip

    This icon designates a helpful tip relating to the surrounding text.

    Warning

    This icon designates a warning relating to the surrounding text.

    Organization of This Manual

    Chapter 1, Introduction

    Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3.

    Chapter 2, Basic Concepts

    Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

    Chapter 3, Installation

    Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

    Chapter 4, Command Line Tools

    Discusses the various Cedar Backup command-line tools, including the primary cback command.

    Chapter 5, Configuration

    Provides detailed information about how to configure Cedar Backup.

    Chapter 6, Official Extensions

    Describes each of the officially-supported Cedar Backup extensions.

    Appendix A, Extension Architecture Interface

    Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

    Appendix B, Dependencies

    Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

    Appendix C, Data Recovery

    Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

    Appendix D, Securing Password-less SSH Connections

    Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.

    Acknowledgments

    The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.

    Chapter 1. Introduction

    Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.— Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

    What is Cedar Backup?

    Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

    Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis.

    Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

    Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 2 programming language.

    There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time.

    Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 2, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

    To run a Cedar Backup client, you really just need a working Python 2 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in the section called “Installing Dependencies”.

    Migrating from Version 2 to Version 3

    The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible.

    A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc.

    So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup.

    How to Get Support

    Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see.

    If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. [1] When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it.

    If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write to the support address. That mail will go directly to me. If you write the support address about a bug, a scrubbed bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

    Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. [2]

    In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (i.e. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log. It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

    Tip

    Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well.

    History

    Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

    In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

    Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. [3] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision. From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (primarily, I feel that Python code often ends up being much more readable than Perl code).

    Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) [4] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release.

    Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code.

    In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, [5] and updated the code to use the newly-released Python logging package [6] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code.

    So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. [7]

    The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.



    [2] See Simon Tatham's excellent bug reporting tutorial: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html .

    [4] Debian's stable releases are named after characters in the Toy Story movie.

    [5] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

    [7] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.

    Chapter 2. Basic Concepts

    General Architecture

    Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.

    The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid [8] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

    The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

    Warning

    You should be aware that backups to CD/DVD media can probably be read by any user who has permission to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.

    Data Recovery

    Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

    If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category.

    My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

    Cedar Backup Pools

    There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.

    The Backup Process

    The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control.

    This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

    A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge.

    In general, more than one action may be specified on the command-line. If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.

    The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below.

    See Chapter 5, Configuration for more information on how a backup run is configured.

    The Collect Action

    The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).
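    The one-tarfile-per-directory idea can be sketched in a few lines of Python. This is only an illustration of the behavior described above, not Cedar Backup's actual implementation; the function name and arguments are invented for the example:

    ```python
    import os
    import tarfile
    import tempfile

    def collect(collect_dirs, target_dir, compress="gz"):
        """Pack each configured directory into its own tarfile in target_dir.

        compress may be "" (.tar), "gz" (.tar.gz) or "bz2" (.tar.bz2),
        mirroring the three formats described above.
        """
        suffix = ".tar" + ("." + compress if compress else "")
        created = []
        for path in collect_dirs:
            base = os.path.basename(path.rstrip("/"))
            target = os.path.join(target_dir, base + suffix)
            with tarfile.open(target, "w:" + compress) as tar:
                tar.add(path, arcname=base)
            created.append(target)
        return created

    # Throwaway example data in temporary directories
    source = tempfile.mkdtemp()
    with open(os.path.join(source, "notes.txt"), "w") as f:
        f.write("some data\n")
    collect_dir = tempfile.mkdtemp()
    created = collect([source], collect_dir)
    ```
    
    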

    There are three supported collect modes: daily, weekly and incremental. Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.

    Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file [9] or specify absolute paths or filename patterns [10] to be excluded. You can even configure a backup link farm rather than explicitly listing files and directories in configuration.

    This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a consolidation point to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

    The Stage Action

    The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

    For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

    Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e. on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh.
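    For a local peer, staging amounts to a directory copy into a dated staging area named for the peer. The sketch below assumes a YYYY/MM/DD layout purely for illustration; remote peers would instead be fetched over an RSH-compatible channel such as ssh:

    ```python
    import datetime
    import os
    import shutil
    import tempfile

    def stage_local_peer(peer_name, collect_dir, staging_root):
        """Copy a local peer's collect directory into today's staging
        directory, named for the peer.  The date-based layout here is an
        assumption for illustration only."""
        today = datetime.date.today()
        daily = os.path.join(staging_root, "%04d" % today.year,
                             "%02d" % today.month, "%02d" % today.day,
                             peer_name)
        shutil.copytree(collect_dir, daily)  # creates missing parents too
        return daily

    # Throwaway example data
    collect_dir = tempfile.mkdtemp()
    with open(os.path.join(collect_dir, "etc.tar.gz"), "w") as f:
        f.write("placeholder\n")
    staged = stage_local_peer("client1", collect_dir, tempfile.mkdtemp())
    ```
    
    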

    If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break a backup for other peers which are up and running.

    Keep in mind that Cedar Backup is flexible about what actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step, and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

    Note

    Directories collected by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

    The Store Action

    The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful.

    If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs.

    This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, this is fine.

    Warning

    The store action is not supported on the Mac OS X (darwin) platform. On that platform, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    The Purge Action

    The purge action is the fourth and final action in a standard backup run. It executes both on the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged.

    Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.
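    The retain-and-purge logic can be sketched as a walk that removes files older than the retention period and then prunes directories left empty. This is a simplified illustration of the behavior described above, not Cedar Backup's actual code:

    ```python
    import os
    import tempfile
    import time

    def purge(directory, retain_days):
        """Remove files older than retain_days, then prune any
        directories left empty."""
        cutoff = time.time() - retain_days * 86400
        for root, dirs, files in os.walk(directory, topdown=False):
            for name in files:
                path = os.path.join(root, name)
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
            for name in dirs:
                path = os.path.join(root, name)
                if not os.listdir(path):
                    os.rmdir(path)

    # Throwaway example: one stale file, one fresh file
    workdir = tempfile.mkdtemp()
    stale = os.path.join(workdir, "old.tar.gz")
    fresh = os.path.join(workdir, "new.tar.gz")
    for path in (stale, fresh):
        open(path, "w").close()
    ten_days_ago = time.time() - 10 * 86400
    os.utime(stale, (ten_days_ago, ten_days_ago))
    purge(workdir, retain_days=7)
    ```
    
    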

    The All Action

    The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line.

    Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. [11]

    The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

    The Validate Action

    The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line.

    The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

    The Initialize Action

    The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device.

    However, if the check media store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized.

    Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with CEDAR BACKUP).

    Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).

    The Rebuild Action

    The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line.

    The rebuild action attempts to rebuild this week's disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason.

    To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session.

    The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.

    Coordination between Master and Clients

    Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just "take care of it" for them.

    Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

    Managed Backups

    Cedar Backup also supports an optional feature called the managed backup. This feature is intended for use with remote clients where cron is not available.

    When managed backups are enabled, managed clients must still be configured as usual. However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell.

    To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients.

    Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time.

    However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

    Media and Device Types

    Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. [12]

    When using a new enough backup device, a new multisession ISO image [13] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images — which is really unusual today — then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the daily backup mode to avoid losing data).

    Cedar Backup currently supports four different kinds of CD media:

    cdr-74

    74-minute non-rewritable CD media

    cdrw-74

    74-minute rewritable CD media

    cdr-80

    80-minute non-rewritable CD media

    cdrw-80

    80-minute rewritable CD media

    I have chosen to support just these four types of CD media because they seem to be the most standard of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable.

    Cedar Backup also supports two kinds of DVD media:

    dvd+r

    Single-layer non-rewritable DVD+R media

    dvd+rw

    Single-layer rewritable DVD+RW media

    The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

    Incremental Backups

    Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

    In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value [14] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

    Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.
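    The checksum-comparison logic described above can be sketched as follows. JSON is an arbitrary choice for the saved state in this illustration; Cedar Backup itself keeps this state in .sha files in its working directory:

    ```python
    import hashlib
    import json
    import os
    import tempfile

    def files_to_backup(paths, state_file):
        """Return the paths whose checksum changed (or is new), updating
        the saved file/checksum list as it goes."""
        saved = {}
        if os.path.exists(state_file):
            with open(state_file) as f:
                saved = json.load(f)
        changed = []
        for path in paths:
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            if saved.get(path) != digest:
                changed.append(path)
                saved[path] = digest
        with open(state_file, "w") as f:
            json.dump(saved, f)
        return changed

    # First run backs everything up; an unchanged second run backs up nothing
    workdir = tempfile.mkdtemp()
    doc = os.path.join(workdir, "doc.txt")
    with open(doc, "w") as f:
        f.write("version 1\n")
    state = os.path.join(workdir, "state.json")
    first = files_to_backup([doc], state)
    second = files_to_backup([doc], state)
    ```
    
    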

    Extensions

    Imagine that there is a third party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of collect step.

    Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The external backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration.

    Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured.

    Extension authors implement an action process function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action.
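    As a rough sketch, an extension action might look like the function below. The three-argument shape (configuration file path, parsed command-line options, parsed configuration) follows the interface described in Appendix A; the function name and body here are entirely hypothetical:

    ```python
    def backup_database(configPath, options, config):
        """Hypothetical extension action: back up a database repository.

        configPath is the path to the Cedar Backup configuration file,
        options holds the parsed command-line options, and config is the
        parsed configuration, including any extension-specific sections.
        A real extension would do its work here and signal failure by
        raising an exception.
        """
        print("backing up database using configuration from %s" % configPath)

    # Cedar Backup would associate this function with an action name in
    # configuration; here we just invoke it directly with dummy arguments.
    result = backup_database("/etc/cback.conf", None, None)
    ```
    
    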

    Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community. Well-written general-purpose extensions will be accepted into the official codebase.

    Note

    Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions.

    Developers may be interested in Appendix A, Extension Architecture Interface.



    [9] Analogous to .cvsignore in CVS

    [10] In terms of Python regular expressions

    [11] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.

    [12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.

    [13] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a filesystem-within-a-file and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.

    [14] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

    Chapter 3. Installation

    Background

    There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

    If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

    Installing on a Debian System

    The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude.

    If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian etch release is the first release to contain Cedar Backup 2.) Otherwise, you need to install from the Cedar Solutions APT data source. [15] To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file.

    After you have configured the proper APT data source, install Cedar Backup using this set of commands:

    $ apt-get update
    $ apt-get install cedar-backup2 cedar-backup2-doc
          

    Several of the Cedar Backup dependencies are listed as recommended rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute. The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them.

    If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source.

    In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Note

    The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

    Installing from Source

    On platforms other than Debian, Cedar Backup is installed from a Python source distribution. [16] You will have to manage dependencies on your own.

    Tip

    Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to upstream source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

    Installing Dependencies

    Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met.

    Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it. You must install Python 2 on every peer node in a pool (master or client).

    Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines.

    Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

    • mkisofs

    • eject

    • mount

    • umount

    • volname

    Then, you need this utility if you are writing CD media:

    • cdrecord

    or these utilities if you are writing DVD media:

    • growisofs

    All of these utilities are common and are easy to find for almost any UNIX-like operating system.

    Installing the Source Package

    Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py.

    Once you have downloaded the source package from the Cedar Solutions website, [15] untar it:

    $ zcat CedarBackup2-2.0.0.tar.gz | tar xvf -
             

    This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename.

    If you have root access and want to install the package to the standard Python location on your system, then you can install the package in two simple steps:

    $ cd CedarBackup2-2.0.0
    $ python setup.py install
             

    Make sure that you are using Python 2.7 or better to execute setup.py.

    You may also wish to run the unit tests before actually installing anything. Run them like so:

    python util/test.py
             

    If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. [17] This is particularly important for non-Linux platforms where I do not have a test system available to me.

    Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

    $ python setup.py --help
    $ python setup.py install --help
             

    In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

    Chapter 4. Command Line Tools

    Overview

    Cedar Backup comes with three command-line programs: cback, cback-amazons3-sync, and cback-span.

    The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need.

    The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    Users who have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback-span tool to split their data between multiple discs.

    The cback command

    Introduction

    Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.

    Syntax

    The cback command has the following syntax:

     Usage: cback [switches] action(s)
    
     The following switches are accepted:
    
       -h, --help         Display this usage/help listing
       -V, --version      Display version information
       -b, --verbose      Print verbose output as well as logging to disk
       -q, --quiet        Run quietly (display no output to the screen)
       -c, --config       Path to config file (default: /etc/cback.conf)
       -f, --full         Perform a full backup, regardless of configuration
       -M, --managed      Include managed clients when executing actions
       -N, --managed-only Include ONLY managed clients when executing actions
       -l, --logfile      Path to logfile (default: /var/log/cback.log)
       -o, --owner        Logfile ownership, user:group (default: root:adm)
       -m, --mode         Octal logfile permissions mode (default: 640)
       -O, --output       Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug        Write debugging information to the log (implies --output)
       -s, --stack        Dump a Python stack trace instead of swallowing exceptions
       -D, --diagnostics  Print runtime diagnostics to the screen and exit
    
     The following actions may be specified:
    
       all                Take all normal actions (collect, stage, store, purge)
       collect            Take the collect action
       stage              Take the stage action
       store              Take the store action
       purge              Take the purge action
       rebuild            Rebuild "this week's" disc if possible
       validate           Validate configuration only
       initialize         Initialize media for use with Cedar Backup
    
     You may also specify extended actions that have been defined in
     configuration.
    
     You must specify at least one action to take.  More than one of
     the "collect", "stage", "store" or "purge" actions and/or
     extended actions may be specified in any arbitrary order; they
     will be executed in a sensible order.  The "all", "rebuild",
     "validate", and "initialize" actions may not be combined with
     other actions.
             

    Note that the all action only executes the standard four actions. It never executes any of the configured extensions. [18]

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -f, --full

    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

    -M, --managed

    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

    -N, --managed-only

    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.
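    The interaction between the octal mode string and logfile creation can be sketched in Python. This is a hypothetical helper illustrating the documented behavior, not Cedar Backup's actual implementation: the mode applies only when the file is created, and an existing logfile keeps its permissions.

```python
import os
import stat
import tempfile

def open_logfile(path, mode_string="640"):
    # Parse the octal mode string ("640" -> 0o640 -> rw-r-----).
    mode = int(mode_string, 8)
    existed = os.path.exists(path)
    # O_CREAT only creates the file if it does not already exist.
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, mode)
    if not existed:
        os.chmod(path, mode)  # apply the exact mode, bypassing the umask
    return os.fdopen(fd, "a")

logfile = os.path.join(tempfile.mkdtemp(), "cback.log")
with open_logfile(logfile) as f:
    f.write("backup started\n")
perms = stat.S_IMODE(os.stat(logfile).st_mode)
print(oct(perms))  # 0o640
```

    The real tool also applies user:group ownership at creation time, which requires root privileges (os.chown); that step is omitted here.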

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    Actions

    You can find more information about the various actions in the section called “The Backup Process” (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions).

    If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.

    The cback-amazons3-sync command

    Introduction

    The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync. S3 does support versioning, but retrieving those previous versions won't be as easy as it is with the explicit incremental backups that cback provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions.

    The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback-amazons3-sync command, so make sure you configure it as the proper user. (This is different from the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)

    Syntax

    The cback-amazons3-sync command has the following syntax:

     Usage: cback-amazons3-sync [switches] sourceDir s3BucketUrl
    
     Cedar Backup Amazon S3 sync tool.
    
     This Cedar Backup utility synchronizes a local directory to an Amazon S3
     bucket.  After the sync is complete, a validation step is taken.  An
     error is reported if the contents of the bucket do not match the
     source directory, or if the indicated size for any file differs.
     This tool is a wrapper over the AWS CLI command-line tool.
    
     The following arguments are required:
    
       sourceDir            The local source directory on disk (must exist)
       s3BucketUrl          The URL to the target Amazon S3 bucket
    
     The following switches are accepted:
    
       -h, --help           Display this usage/help listing
       -V, --version        Display version information
       -b, --verbose        Print verbose output as well as logging to disk
       -q, --quiet          Run quietly (display no output to the screen)
       -l, --logfile        Path to logfile (default: /var/log/cback.log)
       -o, --owner          Logfile ownership, user:group (default: root:adm)
       -m, --mode           Octal logfile permissions mode (default: 640)
       -O, --output         Record some sub-command (i.e. aws) output to the log
       -d, --debug          Write debugging information to the log (implies --output)
       -s, --stack          Dump Python stack trace instead of swallowing exceptions
       -D, --diagnostics    Print runtime diagnostics to the screen and exit
       -v, --verifyOnly     Only verify the S3 bucket contents, do not make changes
       -w, --ignoreWarnings Ignore warnings about problematic filename encodings
    
     Typical usage would be something like:
    
       cback-amazons3-sync /home/myuser s3://example.com-backup/myuser
    
     This will sync the contents of /home/myuser into the indicated bucket.
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -q, --quiet

    Run quietly (display no output to the screen).

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    -D, --diagnostics

    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

    -v, --verifyOnly

    Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up to date.

    Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

    -w, --ignoreWarnings

    The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, filenames like this will cause the sync process to abort without transferring all files as expected.

    To avoid confusion, the cback-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

    If problematic files are found, then you have basically two options: either correct your locale (for instance, if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.
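    The idea behind the pre-check can be illustrated with a short Python sketch. This is an assumption about the general approach, not the tool's actual code: try to encode each filename in the expected encoding and flag the ones that fail.

```python
def problematic_names(filenames, encoding="utf-8"):
    # Flag any filename that cannot be represented in the given encoding.
    # The real tool derives the expected encoding from your locale.
    bad = []
    for name in filenames:
        try:
            name.encode(encoding)
        except UnicodeEncodeError:
            bad.append(name)
    return bad

# On Linux, Python decodes undecodable filename bytes with surrogateescape,
# producing a name that cannot be re-encoded cleanly:
mangled = b"caf\xe9.txt".decode("utf-8", errors="surrogateescape")
print(problematic_names(["plain.txt", mangled]))
```

    Renaming the file, or switching to a locale whose encoding can represent the name, makes the check pass.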

    The cback-span command

    Introduction

    Cedar Backup was designed around, and is still primarily focused on, weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data.

    However, some users have expressed a need to write these large backups to disc, if not every day then at least occasionally. The cback-span tool was written to meet those needs. If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs.

    cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

    cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

    In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file across more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be distributed among the discs so that space is used as efficiently as possible.

    Syntax

    The cback-span command has the following syntax:

     Usage: cback-span [switches]
    
     Cedar Backup 'span' tool.
    
     This Cedar Backup utility spans staged data between multiple discs.
     It is a utility, not an extension, and requires user interaction.
    
     The following switches are accepted, mostly to set up underlying
     Cedar Backup functionality:
    
       -h, --help     Display this usage/help listing
       -V, --version  Display version information
       -b, --verbose  Print verbose output as well as logging to disk
       -c, --config   Path to config file (default: /etc/cback.conf)
       -l, --logfile  Path to logfile (default: /var/log/cback.log)
       -o, --owner    Logfile ownership, user:group (default: root:adm)
       -m, --mode     Octal logfile permissions mode (default: 640)
       -O, --output   Record some sub-command (i.e. cdrecord) output to the log
       -d, --debug    Write debugging information to the log (implies --output)
       -s, --stack    Dump a Python stack trace instead of swallowing exceptions
             

    Switches

    -h, --help

    Display usage/help listing.

    -V, --version

    Display version information.

    -b, --verbose

    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

    -c, --config

    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

    -l, --logfile

    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

    -o, --owner

    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

    -m, --mode

    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

    -O, --output

    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

    -d, --debug

    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

    -s, --stack

    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

    Using cback-span

    As discussed above, cback-span is an interactive command. It cannot be run from cron.

    You can typically accept the default answer for most questions. The only two questions whose defaults you may want to override are the fit algorithm and the cushion percentage.

    The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data; usable capacity is usually more like 627 MB. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.
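    The arithmetic is simple enough to sketch. This is illustrative only; the exact media capacity Cedar Backup assumes depends on the configured media type, so these numbers are examples rather than values the tool will print.

```python
import math

def real_capacity(media_mb, cushion_percent):
    # A 1.5% cushion leaves 98.5% of the media capacity available.
    return media_mb * (100.0 - cushion_percent) / 100.0

def min_discs(total_mb, media_mb, cushion_percent):
    # Lower bound on the number of discs needed for total_mb of data.
    return math.ceil(total_mb / real_capacity(media_mb, cushion_percent))

print(real_capacity(650.0, 4.0))      # 624.0 MB usable on a nominal 650 MB disc
print(min_discs(1024.0, 650.0, 4.0))  # 2
```
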

    The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

    The four available fit algorithms are:

    worst

    The worst-fit algorithm.

    The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

    best

    The best-fit algorithm.

    The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

    first

    The first-fit algorithm.

    The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

    alternate

    A hybrid algorithm that I call alternate-fit.

    This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded. The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.
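    As an illustration of the general approach these algorithms share, here is a best-fit sketch in Python. It is a simplified model of the documented behavior, not the CedarBackup2 implementation, and the item sizes are made up for the example.

```python
def best_fit(sizes, capacity):
    # Walk items from largest to smallest, skipping any item that would
    # exceed capacity; stop early if capacity is met exactly.
    chosen, used = [], 0
    for name, size in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
        if used == capacity:
            break
    return chosen, used

sizes = {"a": 600, "b": 500, "c": 120, "d": 27, "e": 3}
print(best_fit(sizes, 627))  # (['a', 'd'], 627)
```

    Worst-fit differs only in sorting from smallest to largest, first-fit skips the sort entirely, and alternate-fit alternates between the two ends of the sorted list.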

    Sample run

    Below is a log showing a sample cback-span run.

    ================================================
               Cedar Backup 'span' tool
    ================================================
    
    This is the Cedar Backup span tool.  It is used to split up staging
    data when that staging data does not fit onto a single disc.
    
    This utility operates using Cedar Backup configuration.  Configuration
    specifies which staging directory to look at and which writer device
    and media type to use.
    
    Continue? [Y/n]: 
    ===
    
    Cedar Backup store configuration looks like this:
    
       Source Directory...: /tmp/staging
       Media Type.........: cdrw-74
       Device Type........: cdwriter
       Device Path........: /dev/cdrom
       Device SCSI ID.....: None
       Drive Speed........: None
       Check Data Flag....: True
       No Eject Flag......: False
    
    Is this OK? [Y/n]: 
    ===
    
    Please wait, indexing the source directory (this may take a while)...
    ===
    
    The following daily staging directories have not yet been written to disc:
    
       /tmp/staging/2007/02/07
       /tmp/staging/2007/02/08
       /tmp/staging/2007/02/09
       /tmp/staging/2007/02/10
       /tmp/staging/2007/02/11
       /tmp/staging/2007/02/12
       /tmp/staging/2007/02/13
       /tmp/staging/2007/02/14
    
    The total size of the data in these directories is 1.00 GB.
    
    Continue? [Y/n]: 
    ===
    
    Based on configuration, the capacity of your media is 650.00 MB.
    
    Since estimates are not perfect and there is some uncertainty in
    media capacity calculations, it is good to have a "cushion",
    a percentage of capacity to set aside.  The cushion reduces the
    capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
    
    What cushion percentage? [4.00]: 
    ===
    
    The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
    It will take at least 2 disc(s) to store your 1.00 GB of data.
    
    Continue? [Y/n]: 
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: 
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "worst-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 246 files, 615.97 MB, 98.20% utilization
    Disc 2: 8 files, 412.96 MB, 65.84% utilization
    
    Accept this solution? [Y/n]: n
    ===
    
    Which algorithm do you want to use to span your data across
    multiple discs?
    
    The following algorithms are available:
    
       first....: The "first-fit" algorithm
       best.....: The "best-fit" algorithm
       worst....: The "worst-fit" algorithm
       alternate: The "alternate-fit" algorithm
    
    If you don't like the results you will have a chance to try a
    different one later.
    
    Which algorithm? [worst]: alternate
    ===
    
    Please wait, generating file lists (this may take a while)...
    ===
    
    Using the "alternate-fit" algorithm, Cedar Backup can split your data
    into 2 discs.
    
    Disc 1: 73 files, 627.25 MB, 100.00% utilization
    Disc 2: 181 files, 401.68 MB, 64.04% utilization
    
    Accept this solution? [Y/n]: y
    ===
    
    Please place the first disc in your backup device.
    Press return when ready.
    ===
    
    Initializing image...
    Writing image to disc...
             


    [18] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. Better to be definitive than confusing.

    Chapter 5. Configuration

    Table of Contents

    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure the master in your backup pool.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test your backup.
    Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
    Step 1: Decide when you will run your backup.
    Step 2: Make sure email works.
    Step 3: Configure your writer device.
    Step 4: Configure your backup user.
    Step 5: Create your backup tree.
    Step 6: Create the Cedar Backup configuration file.
    Step 7: Validate the Cedar Backup configuration file.
    Step 8: Test connectivity to client machines.
    Step 9: Test your backup.
    Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
    Device Types
    Devices identified by device name
    Devices identified by SCSI id
    Linux Notes
    Finding your Linux CD Writer
    Mac OS X Notes
    Optimized Blanking Strategy

    Overview

    Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (e.g. adding and removing directories from a backup) are easy.

    First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

    Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called “The cback command” (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called “Configuration File Format” (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location.

    After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice versa. The instructions are clear on what needs to be done.

    Configuration File Format

    Cedar Backup is configured through an XML [19] configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

    All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. [20] The extensions section is always optional and can be omitted unless extensions are in use.

    Note

    Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files Ken and ken might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for ken will only match the file if it is actually on the filesystem with a lower-case k as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the Mac Mindset.

    Sample Configuration File

    Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample.

    This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

    <?xml version="1.0"?>
    <cb_config>
       <reference>
          <author>Kenneth J. Pronovici</author>
          <revision>1.3</revision>
          <description>Sample</description>
       </reference>
       <options>
          <starting_day>tuesday</starting_day>
          <working_dir>/opt/backup/tmp</working_dir>
          <backup_user>backup</backup_user>
          <backup_group>group</backup_group>
          <rcp_command>/usr/bin/scp -B</rcp_command>
       </options>
       <peers>
          <peer>
             <name>debian</name>
             <type>local</type>
             <collect_dir>/opt/backup/collect</collect_dir>
          </peer>
       </peers>
       <collect>
          <collect_dir>/opt/backup/collect</collect_dir>
          <collect_mode>daily</collect_mode>
          <archive_mode>targz</archive_mode>
          <ignore_file>.cbignore</ignore_file>
          <dir>
             <abs_path>/etc</abs_path>
             <collect_mode>incr</collect_mode>
          </dir>
          <file>
             <abs_path>/home/root/.profile</abs_path>
             <collect_mode>weekly</collect_mode>
          </file>
       </collect>
       <stage>
          <staging_dir>/opt/backup/staging</staging_dir>
       </stage>
       <store>
          <source_dir>/opt/backup/staging</source_dir>
          <media_type>cdrw-74</media_type>
          <device_type>cdwriter</device_type>
          <target_device>/dev/cdrw</target_device>
          <target_scsi_id>0,0,0</target_scsi_id>
          <drive_speed>4</drive_speed>
          <check_data>Y</check_data>
          <check_media>Y</check_media>
          <warn_midnite>Y</warn_midnite>
       </store>
       <purge>
          <dir>
             <abs_path>/opt/backup/staging</abs_path>
             <retain_days>7</retain_days>
          </dir>
          <dir>
             <abs_path>/opt/backup/collect</abs_path>
             <retain_days>0</retain_days>
          </dir>
       </purge>
    </cb_config>
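    Since the configuration is plain XML, you can inspect it with standard tools. The following Python sketch uses only the standard library (not Cedar Backup's own parser, which performs far more validation) to pull a few values out of a document shaped like the sample above:

```python
import xml.etree.ElementTree as ET

# An abbreviated configuration document matching the sample file's layout.
CONFIG = """<?xml version="1.0"?>
<cb_config>
   <options>
      <starting_day>tuesday</starting_day>
      <backup_user>backup</backup_user>
   </options>
   <store>
      <media_type>cdrw-74</media_type>
      <target_device>/dev/cdrw</target_device>
   </store>
</cb_config>"""

root = ET.fromstring(CONFIG)
starting_day = root.findtext("options/starting_day")
media_type = root.findtext("store/media_type")
print(starting_day, media_type)  # tuesday cdrw-74
```
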
             

    Reference Configuration

    The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

    This is an example reference configuration section:

    <reference>
       <author>Kenneth J. Pronovici</author>
       <revision>Revision 1.3</revision>
       <description>Sample</description>
       <generator>Yet to be Written Config Tool (tm)</generator>
    </reference>
             

    The following elements are part of the reference configuration section:

    author

    Author of the configuration file.

    Restrictions: None

    revision

    Revision of the configuration file.

    Restrictions: None

    description

    Description of the configuration file.

    Restrictions: None

    generator

    Tool that generated the configuration file, if any.

    Restrictions: None

    Options Configuration

    The options configuration section contains configuration options that are not specific to any one action.

    This is an example options configuration section:

    <options>
       <starting_day>tuesday</starting_day>
       <working_dir>/opt/backup/tmp</working_dir>
       <backup_user>backup</backup_user>
       <backup_group>backup</backup_group>
       <rcp_command>/usr/bin/scp -B</rcp_command>
       <rsh_command>/usr/bin/ssh</rsh_command>
       <cback_command>/usr/bin/cback</cback_command>
       <managed_actions>collect, purge</managed_actions>
       <override>
          <command>cdrecord</command>
          <abs_path>/opt/local/bin/cdrecord</abs_path>
       </override>
       <override>
          <command>mkisofs</command>
          <abs_path>/opt/local/bin/mkisofs</abs_path>
       </override>
       <pre_action_hook>
          <action>collect</action>
          <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
       </pre_action_hook>
       <post_action_hook>
          <action>collect</action>
          <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
       </post_action_hook>
    </options>
             

    The following elements are part of the options configuration section:

    starting_day

    Day that starts the week.

    Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.

    Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.
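
    The case-sensitive day-name restriction above can be sketched in Python. This is a minimal illustration (the function name is hypothetical, not Cedar Backup's actual validator):

```python
# Hypothetical sketch of the case-sensitive starting_day check;
# only exact lowercase English day names are accepted.
VALID_DAYS = ("monday", "tuesday", "wednesday", "thursday",
              "friday", "saturday", "sunday")

def is_valid_starting_day(value):
    """Return True only for an exact lowercase English day name."""
    return value in VALID_DAYS
```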

    working_dir

    Working (temporary) directory to use for backups.

    This directory is used for writing temporary files, such as tar file or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups.

    The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).

    Restrictions: Must be an absolute path

    backup_user

    Effective user that backups should run as.

    This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced).

    This value is also used as the default remote backup user for remote peers.

    Restrictions: Must be non-empty

    backup_group

    Effective group that backups should run as.

    This group must exist on the machine which is being configured, and should not be root or some other powerful group (although that restriction is not enforced).

    Restrictions: Must be non-empty

    rcp_command

    Default rcp-compatible copy command for staging.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.

    Restrictions: Must be non-empty

    rsh_command

    Default rsh-compatible command to use for remote shells.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty

    cback_command

    Default cback-compatible command to use on managed remote clients.

    The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Default set of actions that are managed on remote clients.

    This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge.

    This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

    Restrictions: Must be non-empty.

    override

    Command to override with a customized path.

    This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    command

    Name of the command to be overridden, i.e. cdrecord.

    Restrictions: Must be a non-empty string.

    abs_path

    The absolute path where the overridden command can be found.

    Restrictions: Must be an absolute path.

    pre_action_hook

    Hook configuring a command to be executed before an action.

    This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.
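
    The double-quote-only splitting behavior described above can be approximated with Python's shlex module. This is an illustrative sketch of the documented limitation, not the actual Cedar Backup parser:

```python
import shlex

def split_hook_command(command):
    """Split a hook command line, honoring only double-quote grouping,
    roughly as the limitations above describe (hypothetical helper)."""
    lexer = shlex.shlex(command, posix=True)
    lexer.whitespace_split = True
    lexer.quotes = '"'   # single quotes are treated as ordinary characters
    return list(lexer)
```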

    post_action_hook

    Hook configuring a command to be executed after an action.

    This is a subsection which configures a command to be executed immediately after a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions.

    This section is optional, and can be repeated as many times as necessary.

    This subsection must contain the following two fields:

    action

    Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.

    Restrictions: Must be a non-empty string.

    command

    Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

    Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

    Restrictions: Must be a non-empty string.

    Peers Configuration

    The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

    This is an example peers configuration section:

    <peers>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <ignore_failures>all</ignore_failures>
       </peer>
       <peer>
          <name>machine3</name>
          <type>remote</type>
          <managed>Y</managed>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
          <rcp_command>/usr/bin/scp</rcp_command>
          <rsh_command>/usr/bin/ssh</rsh_command>
          <cback_command>/usr/bin/cback</cback_command>
          <managed_actions>collect, purge</managed_actions>
       </peer>
    </peers>
             

    The following elements are part of the peers configuration section:

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer managed by a master.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".
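
    The four modes described above can be sketched as a simple decision function. This is a hypothetical illustration of the documented behavior, not Cedar Backup's actual code:

```python
def ignore_failure(mode, is_full_or_start_of_week):
    """Apply the ignore-failure modes described above (hedged sketch).
    Returns True when a "not ready to be staged" error should be
    suppressed for this peer."""
    if mode in (None, "none"):
        return False           # default: report all errors
    if mode == "all":
        return True            # suppress every failure
    if mode == "weekly":
        return is_full_or_start_of_week
    if mode == "daily":
        return not is_full_or_start_of_week
    raise ValueError("invalid ignore_failures mode: %r" % mode)
```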

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    managed

    Indicates whether this peer is managed.

    A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    ignore_failures

    Ignore failure mode for this peer

    The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator.

    The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.

    Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    rsh_command

    The rsh-compatible command for this peer.

    The rsh command should be the exact command used for remote shells, including any required options.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section.

    Restrictions: Must be non-empty

    cback_command

    The cback-compatible command for this peer.

    The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default cback command from the options section.

    Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

    Restrictions: Must be non-empty

    managed_actions

    Set of actions that are managed for this peer.

    This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge.

    This value only applies if the peer is managed.

    This field is optional. If it doesn't exist, the backup will use the default list of managed actions from the options section.

    Restrictions: Must be non-empty.

    Collect Configuration

    The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

    In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.

    This is an example collect configuration section:

    <collect>
       <collect_dir>/opt/backup/collect</collect_dir>
       <collect_mode>daily</collect_mode>
       <archive_mode>targz</archive_mode>
       <ignore_file>.cbignore</ignore_file>
       <exclude>
          <abs_path>/etc</abs_path>
          <pattern>.*\.conf</pattern>
       </exclude>
       <file>
          <abs_path>/home/root/.profile</abs_path>
       </file>
       <dir>
          <abs_path>/etc</abs_path>
       </dir>
       <dir>
          <abs_path>/var/log</abs_path>
          <collect_mode>incr</collect_mode>
       </dir>
       <dir>
          <abs_path>/opt</abs_path>
          <collect_mode>weekly</collect_mode>
          <exclude>
             <abs_path>/opt/large</abs_path>
             <rel_path>backup</rel_path>
             <pattern>.*tmp</pattern>
          </exclude>
       </dir>
    </collect>
             

    The following elements are part of the collect configuration section:

    collect_dir

    Directory to collect files into.

    On a client, this is the directory into which tarfiles for individual collect directories are written. The master then stages files from this directory into its own staging directory.

    This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form.

    Restrictions: Must be an absolute path

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Default archive mode for collect files.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Default ignore file name.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be non-empty

    recursion_level

    Recursion level to use when collecting directories.

    This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory.

    Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory.

    The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead, you want one archive file per home directory you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc.

    Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high.

    This field is optional. If it doesn't exist, the backup will use the default recursion level of zero.

    Restrictions: Must be an integer.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however.

    This section is optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    pattern

    A pattern to be recursively excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty
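
    The implicit anchoring described above corresponds to a full-string regular expression match. A minimal sketch in Python (the helper name is hypothetical):

```python
import re

def pattern_excludes(pattern, path):
    """Apply an exclusion pattern with the implicit ^...$ anchoring
    described above (illustrative helper, not the real code)."""
    return re.fullmatch(pattern, path) is not None
```

    Note that a bare pattern like apache does not match /var/log/apache, because the pattern must account for the entire path.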

    file

    A file to be collected.

    This is a subsection which contains information about a specific file to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect file subsection contains the following fields:

    abs_path

    Absolute path of the file to collect.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this file

    The collect mode describes how frequently a file is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this file.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    dir

    A directory to be collected.

    This is a subsection which contains information about a specific directory to be collected (backed up).

    This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.

    The collect directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to collect.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level.

    The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc.

    Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up.

    Restrictions: Must be an absolute path.

    collect_mode

    Collect mode for this directory

    The collect mode describes how frequently a directory is backed up. See the section called “The Collect Action” (in Chapter 2, Basic Concepts) for more information.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    archive_mode

    Archive mode for this directory.

    The archive mode maps to the way that a backup file is stored. A value tar means just a tarfile (file.tar); a value targz means a gzipped tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).

    This field is optional. If it doesn't exist, the backup will use the default archive mode.

    Restrictions: Must be one of tar, targz or tarbz2.

    ignore_file

    Ignore file name for this directory.

    The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration.

    The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.

    This field is optional. If it doesn't exist, the backup will use the default ignore file name.

    Restrictions: Must be non-empty

    link_depth

    Link depth value to use for this directory.

    The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.

    This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed.

    Restrictions: If set, must be an integer ≥ 0.
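
    The depth rule above can be sketched as a pure path computation. This is a hypothetical helper illustrating the documented semantics, not Cedar Backup's actual implementation:

```python
import os

def should_follow_link(collect_dir, link_path, link_depth):
    """Decide whether a soft link at link_path should be followed,
    given its depth below the collect directory (0 = never follow,
    1 = follow links immediately within the collect directory)."""
    parent = os.path.dirname(link_path)
    rel = os.path.relpath(parent, collect_dir)
    # A link directly inside collect_dir sits at depth 0.
    depth = 0 if rel == "." else rel.count(os.sep) + 1
    return depth < link_depth
```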

    dereference

    Whether to dereference soft links.

    If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well.

    This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory.

    This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced.

    Restrictions: Must be a boolean (Y or N).

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    abs_path

    An absolute path to be recursively excluded from the backup.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be an absolute path.

    rel_path

    A relative path to be recursively excluded from the backup.

    The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web a configured relative path of something/else would exclude the path /opt/web/something/else.

    If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty

    Stage Configuration

    The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged.

    This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

    This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
    </stage>
             

    This is an example stage configuration section that overrides the default list of peers:

    <stage>
       <staging_dir>/opt/backup/stage</staging_dir>
       <peer>
          <name>machine1</name>
          <type>local</type>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
       <peer>
          <name>machine2</name>
          <type>remote</type>
          <backup_user>backup</backup_user>
          <collect_dir>/opt/backup/collect</collect_dir>
       </peer>
    </stage>
             

    The following elements are part of the stage configuration section:

    staging_dir

    Directory to stage files into.

    This is the directory into which the master stages collected data from each of the clients. Within the staging directory, data is staged into date-based directories by peer name. For instance, peer daystrom backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself.

    This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space.

    Restrictions: Must be an absolute path
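    The date-based layout described above can be previewed in shell. This is a sketch assuming GNU date; daystrom is the hypothetical peer from the text:

```shell
STAGING_DIR=/opt/backup/stage   # from the example configuration above
PEER=daystrom                   # hypothetical peer name
# Data is staged into YYYY/MM/DD/<peer> under the staging directory:
echo "${STAGING_DIR}/$(date -d '2005-02-19' +%Y/%m/%d)/${PEER}"
# → /opt/backup/stage/2005/02/19/daystrom
```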

    peer (local version)

    Local client peer in a backup pool.

    This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The local peer subsection must contain the following fields:

    name

    Name of the peer, typically a valid hostname.

    For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a local peer, it must always be local.

    Restrictions: Must be local.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).

    Restrictions: Must be an absolute path.

    peer (remote version)

    Remote client peer in a backup pool.

    This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call.

    This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

    Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration.

    The remote peer subsection must contain the following fields:

    name

    Hostname of the peer.

    For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.

    Restrictions: Must be non-empty, and unique among all peers.

    type

    Type of this peer.

    This value identifies the type of the peer. For a remote peer, it must always be remote.

    Restrictions: Must be remote.

    collect_dir

    Collect directory to stage from for this peer.

    The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).

    Restrictions: Must be an absolute path.

    backup_user

    Name of backup user on the remote peer.

    This username will be used when copying files from the remote peer via an rsh-based network connection.

    This field is optional. If it doesn't exist, the backup will use the default backup user from the options section.

    Restrictions: Must be non-empty.

    rcp_command

    The rcp-compatible copy command for this peer.

    The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B.

    This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section.

    Restrictions: Must be non-empty.

    Store Configuration

    The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

    This is an example store configuration section:

    <store>
       <source_dir>/opt/backup/stage</source_dir>
       <media_type>cdrw-74</media_type>
       <device_type>cdwriter</device_type>
       <target_device>/dev/cdrw</target_device>
       <target_scsi_id>0,0,0</target_scsi_id>
       <drive_speed>4</drive_speed>
       <check_data>Y</check_data>
       <check_media>Y</check_media>
       <warn_midnite>Y</warn_midnite>
       <no_eject>N</no_eject>
       <refresh_media_delay>15</refresh_media_delay>
       <eject_delay>2</eject_delay>
       <blank_behavior>
          <mode>weekly</mode>
          <factor>1.3</factor>
       </blank_behavior>
    </store>
             

    The following elements are part of the store configuration section:

    source_dir

    Directory whose contents should be written to media.

    This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc.

    Restrictions: Must be an absolute path

    device_type

    Type of the device used to write the media.

    This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter).

    This field is optional. If it doesn't exist, the cdwriter device type is assumed.

    Restrictions: If set, must be either cdwriter or dvdwriter.

    media_type

    Type of the media in the device.

    Unless you want to throw away a backup disc every week, you are probably best off using rewritable media.

    You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called “Media and Device Types” (in Chapter 2, Basic Concepts).

    Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

    target_device

    Filesystem device name for writer device.

    This value is required for both CD writers and DVD writers.

    This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.

    In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified.

    Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled.

    Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink.

    Restrictions: Must be an absolute path.

    target_scsi_id

    SCSI id for the writer device.

    This value is optional for CD writers and is ignored for DVD writers.

    If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord.

    Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord.

    For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form <method>:scsibus,target,lun.

    An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord).

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Restrictions: If set, must be a valid SCSI identifier.
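    The shape of the two accepted identifier forms can be checked with a simple pattern. This is only a sketch of the syntax described above, not the exact validation Cedar Backup performs:

```shell
# Matches scsibus,target,lun, optionally prefixed by <method>:
scsi_form='^([A-Za-z]+:)?[0-9]+,[0-9]+,[0-9]+$'
for id in '1,6,2' 'ATA:1,0,0' 'ATAPI:0,0,0' '/dev/cdrw'; do
    if printf '%s\n' "$id" | grep -qE "$scsi_form"; then
        echo "valid:   $id"
    else
        echo "invalid: $id"
    fi
done
```

    Note that a device path like /dev/cdrw is not a SCSI identifier; it belongs in target_device instead.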

    drive_speed

    Speed of the drive, i.e. 2 for a 2x device.

    This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.

    For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media.

    Restrictions: If set, must be an integer ≥ 1.

    check_data

    Whether the media should be validated.

    This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch.

    Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    check_media

    Whether the media should be checked before writing to it.

    By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.)

    If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc. For instance, a warning would be generated if valid store data was only found in the day before or day after the current day.

    Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    no_eject

    Indicates that the writer device should not be ejected.

    Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session).

    For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer.

    Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    refresh_media_delay

    Number of seconds to delay after refreshing media

    This field is optional. If it doesn't exist, no delay will occur.

    Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds.

    Restrictions: If set, must be an integer ≥ 1.

    eject_delay

    Number of seconds to delay after ejecting the tray

    This field is optional. If it doesn't exist, no delay will occur.

    If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly — either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds.

    Restrictions: If set, must be an integer ≥ 1.

    blank_behavior

    Optimized blanking strategy.

    For more information about Cedar Backup's optimized blanking strategy, see the section called “Optimized Blanking Strategy”.

    This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

    blank_mode

    Blanking mode.

    Restrictions: Must be one of "daily" or "weekly".

    blank_factor

    Blanking factor.

    Restrictions: Must be a floating point number ≥ 0.

    Purge Configuration

    The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged.

    Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0).

    If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action.

    You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.

    This is an example purge configuration section:

    <purge>
       <dir>
          <abs_path>/opt/backup/stage</abs_path>
          <retain_days>7</retain_days>
       </dir>
       <dir>
          <abs_path>/opt/backup/collect</abs_path>
          <retain_days>0</retain_days>
       </dir>
    </purge>
             

    The following elements are part of the purge configuration section:

    dir

    A directory to purge within.

    This is a subsection which contains information about a specific directory to purge within.

    This section can be repeated as many times as is necessary. At least one purge directory must be configured.

    The purge directory subsection contains the following fields:

    abs_path

    Absolute path of the directory to purge within.

    The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than retain_days days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed.

    The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files.

    Restrictions: Must be an absolute path.

    retain_days

    Number of days to retain old files.

    Once it has been more than this many days since a file was last modified, it is a candidate for removal.

    Restrictions: Must be an integer ≥ 0.
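    The age test can be approximated with find. This is a sketch in a scratch directory (assuming GNU touch -d), not the actual purge implementation:

```shell
PURGE_DIR=$(mktemp -d)                           # stand-in for a real purge directory
touch -d '30 days ago' "$PURGE_DIR/old.tar.gz"   # simulated old file
touch "$PURGE_DIR/new.tar.gz"                    # modified just now
# With retain_days = 7, only files older than 7 days are candidates for removal:
find "$PURGE_DIR" -type f -mtime +7 -print
rm -r "$PURGE_DIR"
```

    Only old.tar.gz is listed; adding -delete to the find command would remove it, which is roughly the behavior the purge action provides.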

    Extensions Configuration

    The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional.

    Extensions configuration is used to specify extended actions implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions.

    Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line. The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

    Warning

    Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory.

    If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed — and you would get no warning about this in your email!

    So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the database command-line action. You have been told that this function is called foo.bar(). You think of this backup as a collect kind of action, so you want it to be performed immediately before the collect action.

    To configure this extension, you would list an action with a name database, a module foo, a function name bar and an index of 99.

    This is how the hypothetical action would be configured:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>99</index>
       </action>
    </extensions>
             

    The following elements are part of the extensions configuration section:

    action

    This is a subsection that contains configuration related to a single extended action.

    This section can be repeated as many times as is necessary.

    The action subsection contains the following fields:

    name

    Name of the extended action.

    Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

    module

    Name of the Python module associated with the extension function.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    function

    Name of the Python extension function within the module.

    Restrictions: Must be a non-empty string and a valid Python identifier.

    index

    Index of action, for execution ordering.

    Restrictions: Must be an integer ≥ 0.
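    The interleaving by index can be pictured as a numeric sort over all configured actions. The database entry at index 99 is the hypothetical extension from the example above:

```shell
# Standard actions plus the hypothetical extension, sorted into execution order:
printf '%s\n' '200 stage' '99 database' '400 purge' '100 collect' '300 store' | sort -n
# → 99 database
#   100 collect
#   200 stage
#   300 store
#   400 purge
```

    Because 99 sorts before 100, the database extension runs immediately before the standard collect action, as the example intends.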

    Setting up a Pool of One

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a ready-made backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
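    One way to create the recommended tree with restrictive permissions is sketched below. A scratch directory stands in for the real root; in practice you would use your chosen location such as /opt/backup, and the chown step requires root and your actual backup user name:

```shell
BACKUP_ROOT=$(mktemp -d)    # stand-in for /opt/backup
mkdir -p "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
chmod 700 "$BACKUP_ROOT" "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
# chown -R backup:backup "$BACKUP_ROOT"   # run as root, with your real backup user
ls "$BACKUP_ROOT"
```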

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above) create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cron jobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

    30 00 * * * root  cback all
             

    Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

    #!/bin/sh
    cback all
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

    Setting up a Client Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Note

    See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure the master in your backup pool.

    You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client.

    To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

    user@machine> cat ~/.ssh/id_rsa.pub
    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
    uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
    HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
             

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600.
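The same setup can be scripted. This sketch works entirely in a scratch directory so it is safe to experiment with; on a real client, operate on the backup user's actual ~/.ssh directory and paste the master's real public key instead of the placeholder shown here:

```shell
# Scratch-directory demonstration of the authorized_keys setup; on a real
# client, replace "$demo" with the backup user's home directory and the
# placeholder below with the master's actual public identity.
demo="${TMPDIR:-/tmp}/cback-ssh-demo"
mkdir -p "$demo/.ssh"
chmod 700 "$demo/.ssh"
# The key below is a placeholder, not a real identity.
printf 'ssh-rsa AAAAB3Nza...placeholder... user@master\n' >> "$demo/.ssh/authorized_keys"
chmod 600 "$demo/.ssh/authorized_keys"
```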

    If you have other preferences or standard ways of setting up your users' SSH configuration (e.g. a different key type), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night).

    You should create a collect directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                tmp/
             

    If you will be backing up sensitive information (e.g. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
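Creating the tree takes only a couple of commands. This sketch uses a scratch location so it can be run safely; on a real client, substitute /opt/backup (or whatever root you chose) and chown the directories to your actual backup user:

```shell
# Scratch-location sketch of the recommended layout; on a real client use
# /opt/backup (or your chosen root) and chown to your backup user.
root="${TMPDIR:-/tmp}/cback-tree-demo/backup"
mkdir -p "$root/collect" "$root/tmp"
chmod 700 "$root/collect" "$root/tmp"
```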

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a client, you must configure all action-specific sections for the collect and purge actions.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.
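cback validate checks more than syntax, but unbalanced tags alone can be caught by any XML parser. As a quick illustration, here is a well-formedness check using xmllint (from libxml2, which is assumed to be installed; the sample file is a stand-in, not a complete Cedar Backup configuration — a real run would check /etc/cback.conf):

```shell
# Well-formedness check on a stand-in config file; point xmllint (or
# cback validate) at /etc/cback.conf on a real system instead.
conf="${TMPDIR:-/tmp}/cback-demo.conf"
printf '<cb_config>\n   <options/>\n</cb_config>\n' > "$conf"
if command -v xmllint >/dev/null 2>&1; then
    xmllint --noout "$conf" && echo "well-formed"
fi
```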

    Step 8: Test your backup.

    Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Client machine entries in the file, and change the lines so that the backup goes off when you want it to.

    Setting up a Master Peer Node

    Cedar Backup has been designed to backup entire pools of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client.

    Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own.

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

    Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

    user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    Generating public/private rsa key pair.
    Created directory '/home/user/.ssh'.
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
    The key fingerprint is:
    11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
             

    The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644).

    If you have other preferences or standard ways of setting up your users' SSH configuration (e.g. a different key type), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be large enough to hold roughly twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow for all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (e.g. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

    Note

    The master can treat itself as a client peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master.

    Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to use your master machine simply as a consolidation point that collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to.

    Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test connectivity to client machines.

    This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client.

    Log in as the backup user on the master, and then use the command ssh user@machine where user is the name of the backup user on the client machine, and machine is the name of the client machine.

    If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

    Step 9: Test your backup.

    Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.)

    When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read.

    You may also want to run cback purge on the master and each client once you have finished validating that everything worked.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.

    Step 10: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

    30 00 * * * root  cback collect
    30 02 * * * root  cback stage
    30 04 * * * root  cback store
    30 06 * * * root  cback purge
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

    You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. [23]
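Putting the two crontabs side by side makes the required ordering concrete. This schedule assumes, as in the examples above, that each step finishes within its two-hour window:

```
# /etc/crontab on each client
30 00 * * * root  cback collect
30 06 * * * root  cback purge

# /etc/crontab on the master
30 00 * * * root  cback collect
30 02 * * * root  cback stage
30 04 * * * root  cback store
30 06 * * * root  cback purge
```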

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Master machine entries in the file, and change the lines so that the backup goes off when you want it to.

    Configuring your Writer Device

    Device Types

    In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (e.g. /dev/dvd). CD writers can be referenced either through a SCSI id, or through a filesystem device name. Which you use depends on your operating system and hardware.

    Devices identified by device name

    For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

    Devices identified by SCSI id

    Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type.

    In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations.

    A true SCSI device will always have an address scsibus,target,lun (e.g. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses available on your system.

    On some platforms, it is possible to reference non-SCSI writer devices (e.g. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.

    You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (e.g. ATA:1,1,1).

    Linux Notes

    On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).

    Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.

    However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

    Finding your Linux CD Writer

    Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

    cdrecord -prcap dev=/dev/cdrom
             

    Running this command on my hardware gives output that looks like this (just the top few lines):

    Device type    : Removable CD-ROM
    Version        : 0
    Response Format: 2
    Capabilities   : 
    Vendor_info    : 'LITE-ON '
    Identification : 'DVDRW SOHW-1673S'
    Revision       : 'JS02'
    Device seems to be: Generic mmc2 DVD-R/DVD-RW.
    
    Drive capabilities, per MMC-3 page 2A:
             

    If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.

    If this doesn't work, you should try to find an ATA or ATAPI device:

    cdrecord -scanbus dev=ATA
    cdrecord -scanbus dev=ATAPI
             

    On my development system, I get a result that looks something like this for ATA:

    scsibus1:
            1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
            1,1,0   101) *
            1,2,0   102) *
            1,3,0   103) *
            1,4,0   104) *
            1,5,0   105) *
            1,6,0   106) *
            1,7,0   107) *
             

    Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.

    Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

    Mac OS X Notes

    On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, e.g. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.[24]

    Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution.

    Optimized Blanking Strategy

    When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period.

    Since rewritable media can be blanked only a finite number of times before becoming unusable, some users — especially users of rewritable DVD media with its large capacity — may prefer to blank the media less often.

    If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.

    This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected).

    There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration; otherwise you will risk losing data.

    If you are using the daily blanking mode, you can typically set the blanking value to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup.

    If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

    bytes available / (1 + bytes required) ≤ blanking factor
          

    Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

    Total size of weekly backup / Full backup size at the start of the week
          

    This ratio can be estimated using a week or two of previous backups. For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

    /opt/backup/staging# du -s 2007/03/*
    3040    2007/03/01
    3044    2007/03/02
    6812    2007/03/03
    3044    2007/03/04
    3152    2007/03/05
    3056    2007/03/06
    3060    2007/03/07
    3056    2007/03/08
    4776    2007/03/09
    6812    2007/03/10
    11824   2007/03/11
          

    In this case, the ratio is approximately 4:

    (6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571
          

    To be safe, you might choose to configure a factor of 5.0.
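The arithmetic above can be scripted against your own du output. A minimal sketch in shell, using the sizes from the example listing (awk handles the division, so no extra tools are needed):

```shell
# Estimate the blanking factor: (full + incrementals) / full, using the
# example sizes above (KB values from "du -s" of the staging area).
full=6812
incrementals="3044 3152 3056 3060 3056 4776"
total=$full
for size in $incrementals; do
    total=$((total + size))
done
awk -v t="$total" -v f="$full" 'BEGIN { printf "ratio: %.4f\n", t / f }'
```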

    Setting a higher value reduces the risk of exceeding media capacity mid-week but might result in blanking the media more often than is necessary.

    If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used.
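As a sketch, the worked example above might translate into store configuration like the following. The element names here follow the store-section configuration reference; verify them against the documentation for your version before relying on them:

```
<store>
   <blank_behavior>
      <blank_mode>weekly</blank_mode>
      <blank_factor>5.0</blank_factor>
   </blank_behavior>
</store>
```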

    Chapter 6. Official Extensions

    System Information Extension

    The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

    This extension saves the following information to the configured Cedar Backup collect directory. The saved data is always compressed using bzip2.

    • Currently-installed Debian packages via dpkg --get-selections

    • Disk partition information via fdisk -l

    • System-wide mounted filesystem contents, via ls -laR

    The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.
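In spirit, each item is a command piped through bzip2 into the collect directory. A rough sketch of the idea (the file names and scratch collect directory here are illustrative only, not the extension's actual output names; the listing covers /etc rather than the whole filesystem to keep the demonstration small, and steps are skipped when the needed tools are unavailable):

```shell
# Illustrative sketch only: mimic the extension by compressing command
# output into a (scratch) collect directory with bzip2.
collect="${TMPDIR:-/tmp}/cback-sysinfo-demo"
mkdir -p "$collect"
if command -v bzip2 >/dev/null 2>&1; then
    ls -laR /etc 2>/dev/null | bzip2 > "$collect/ls.txt.bz2"
    if command -v dpkg >/dev/null 2>&1; then
        dpkg --get-selections | bzip2 > "$collect/dpkg.txt.bz2"
    fi
fi
```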

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>sysinfo</name>
          <module>CedarBackup2.extend.sysinfo</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

    Amazon S3 Extension

    The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to backup your data in more than one place. This extension must be run after the stage action.

    The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. So, make sure you configure the AWS CLI tools as the backup user and not root. (This is different from the amazons3 sync tool, which executes AWS CLI commands as the same user that is running the tool.)

    When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Because there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly-large backup that the administrator is not aware of. Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If a backup exceeds the configured limit, it fails, giving you a chance to review what made it larger than you expected; you can then either correct the problem (e.g. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size.

    You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user.

    For instance, you can use something like this with GPG:

    /usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
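    The ${input} and ${output} substitution happens to match the syntax of Python's string.Template, so the resolution step can be illustrated like this (a sketch only; the extension's actual substitution code may differ):

```python
from string import Template

def resolve_encrypt_command(configured, input_file, output_file):
    """Fill the ${input} and ${output} placeholders in the configured command."""
    return Template(configured).substitute(input=input_file, output=output_file)

command = resolve_encrypt_command(
    "/usr/bin/gpg -c --batch --yes --passphrase-file /home/backup/.passphrase"
    " -o ${output} ${input}",
    "/var/backup/stage/file.tar.gz",
    "/var/backup/stage/file.tar.gz.gpg",
)
```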
          

    The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, i.e.:

    dd if=/dev/urandom count=20 bs=1 | xxd -ps
          

    (See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so it can only be read by the backup user.
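    The dd pipeline above simply hex-encodes 20 random bytes; the same idea in Python (illustrative only) looks like this:

```python
import binascii
import os

def generate_passphrase(num_bytes=20):
    """Hex-encode random bytes: the same idea as dd if=/dev/urandom | xxd -ps."""
    return binascii.hexlify(os.urandom(num_bytes)).decode("ascii")

passphrase = generate_passphrase()  # 20 random bytes -> 40 hex characters
```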

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>amazons3</name>
          <module>CedarBackup2.extend.amazons3</module>
          <function>executeAction</function>
          <index>201</index> <!-- just after stage -->
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section. This is an example configuration section with encryption disabled:

    <amazons3>
          <s3_bucket>example.com-backup/staging</s3_bucket>
    </amazons3>
          

    The following elements are part of the Amazon S3 configuration section:

    warn_midnite

    Whether to generate warnings for crossing midnite.

    This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day.

    Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something strange might have happened.

    This field is optional. If it doesn't exist, then N will be assumed.

    Restrictions: Must be a boolean (Y or N).

    s3_bucket

    The name of the Amazon S3 bucket that data will be written to.

    This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging.

    Restrictions: Must be non-empty.

    encrypt

    Command used to encrypt backup data before upload to S3

    If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user.

    Restrictions: If provided, must be non-empty.

    full_size_limit

    Maximum size of a full backup

    If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.

    incr_size_limit

    Maximum size of an incremental backup

    If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a value as described above, greater than zero.
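    The two accepted forms for these size limits can be parsed along these lines. This is a simplified sketch, assuming binary units (KB = 1024 bytes); Cedar Backup's own handling lives in its ByteQuantity configuration code and may differ in detail:

```python
UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size_limit(value):
    """Parse '10240', '250 MB' or '1.1 GB' into a number of bytes."""
    parts = value.split()
    if len(parts) == 1:
        result = float(parts[0])                   # bare number: already bytes
    elif len(parts) == 2 and parts[1] in UNITS:
        result = float(parts[0]) * UNITS[parts[1]]
    else:
        raise ValueError("unrecognized size: %s" % value)
    if result <= 0:
        raise ValueError("size must be greater than zero")
    return result
```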

    Subversion Extension

    The Subversion Extension is a Cedar Backup extension used to back up Subversion [25] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

    There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>subversion</name>
          <module>CedarBackup2.extend.subversion</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

    <subversion>
       <collect_mode>incr</collect_mode>
       <compress_mode>bzip2</compress_mode>
       <repository>
          <abs_path>/opt/public/svn/docs</abs_path>
       </repository>
       <repository>
          <abs_path>/opt/public/svn/web</abs_path>
          <compress_mode>gzip</compress_mode>
       </repository>
       <repository_dir>
          <abs_path>/opt/private/svn</abs_path>
          <collect_mode>daily</collect_mode>
       </repository_dir>
    </subversion>
          

    The following elements are part of the Subversion configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    repository

    A Subversion repository to be collected.

    This is a subsection which contains information about a specific Subversion repository to be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    repository_dir

    A Subversion parent repository directory to be collected.

    This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.
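    That rule (every immediate subdirectory is treated as a repository) might be sketched like this; the helper is hypothetical, and I'm assuming patterns are matched against the full path:

```python
import os
import re
import tempfile

def find_repositories(parent, rel_excludes=(), patterns=()):
    """Immediate subdirectories of the parent directory, minus exclusions."""
    repositories = []
    for name in sorted(os.listdir(parent)):
        path = os.path.join(parent, name)
        if not os.path.isdir(path):
            continue                 # only directories can be repositories
        if name in rel_excludes:
            continue                 # rel_path exclusion
        if any(re.match("^%s$" % p, path) for p in patterns):
            continue                 # pattern exclusion, bounded front and back
        repositories.append(path)
    return repositories

# Demonstration against a scratch directory standing in for /opt/private/svn.
parent = tempfile.mkdtemp()
for name in ("docs", "scratch", "web"):
    os.mkdir(os.path.join(parent, name))
repositories = find_repositories(parent, rel_excludes=("scratch",))
```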

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository_dir subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this subversion parent directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the subversion parent directory itself. For instance, if the configured subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
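    The bounding rule means a configured pattern must match the entire string, which is exactly what Python's re.fullmatch does (illustrative; the pattern_excludes helper is hypothetical):

```python
import re

def pattern_excludes(pattern, value):
    """A configured pattern excludes a value only if it matches the whole string."""
    return re.fullmatch(pattern, value) is not None

# .*debian.* matches anywhere; a bare name is not an implicit substring match.
matched = pattern_excludes(".*debian.*", "lists.debian.org")     # True
exact = pattern_excludes("software", "software")                 # True
partial = pattern_excludes("software", "software-v2")            # False
```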

    MySQL Extension

    The MySQL Extension is a Cedar Backup extension used to back up MySQL [26] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Note

    This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another.

    The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.

    The extension assumes that all configured databases can be backed up by a single user. Often, the root database user will be used. An alternative is to create a separate MySQL backup user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice.

    Warning

    The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration. This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing.

    Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

    [mysqldump]
    user     = root
    password = <secret>
             

    Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead.

    As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

    [mysqldump]
    host = remote.host
             

    For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done.

    Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600).
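    A quick standard-library check for the recommended 0600 permissions might look like this (a sketch; the owner_only helper is hypothetical):

```python
import os
import stat
import tempfile

def owner_only(path):
    """True if no group or other permission bits are set (mode 0600 or stricter)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0

fd, path = tempfile.mkstemp()      # stands in for /root/.my.cnf
os.close(fd)
os.chmod(path, 0o600)
secure = owner_only(path)          # True
os.chmod(path, 0o644)
insecure = not owner_only(path)    # True: group/other can read
```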

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mysql</name>
          <module>CedarBackup2.extend.mysql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section:

    <mysql>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration:

    <mysql>
       <user>root</user>
       <password>password</password>
       <compress_mode>bzip2</compress_mode>
       <all>Y</all>
    </mysql>
          

    The following elements are part of the MySQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user. Typically, this would be root (i.e. the database root user, not the system root user).

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    password

    Password associated with the database user.

    This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.

    PostgreSQL Extension

    The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL [27] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases.
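    The choice between pg_dumpall and per-database pg_dump can be sketched as command construction (the -U flag and trailing database argument are standard PostgreSQL client options; the helper itself is hypothetical and only builds the argv):

```python
def build_dump_command(all_databases, user=None, database=None):
    """Build the argv for a PostgreSQL dump, mirroring the all/database choice."""
    if all_databases:
        command = ["pg_dumpall"]
    else:
        if database is None:
            raise ValueError("a database name is required when all is N")
        command = ["pg_dump"]
    if user is not None:
        command += ["-U", user]
    if not all_databases:
        command.append(database)
    return command
```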

    The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file.

    This extension always produces a full backup. There is currently no facility for making incremental backups.

    Warning

    Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600).

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>postgresql</name>
          <module>CedarBackup2.extend.postgresql</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>Y</all>
    </postgresql>
          

    If you decide to back up specific databases, then you would list them individually, like this:

    <postgresql>
       <compress_mode>bzip2</compress_mode>
       <user>username</user>
       <all>N</all>
       <database>db1</database>
       <database>db2</database>
    </postgresql>
          

    The following elements are part of the PostgreSQL configuration section:

    user

    Database user.

    The database user that the backup should be executed as. Even if you list more than one database (below) all backups must be done as the same user.

    This value is optional.

    Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf.

    Restrictions: If provided, must be non-empty.

    compress_mode

    Compress mode.

    PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    Restrictions: Must be one of none, gzip or bzip2.

    all

    Indicates whether to back up all databases.

    If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below).

    If you choose this option, the entire database backup will go into one big dump file.

    Restrictions: Must be a boolean (Y or N).

    database

    Named database to be backed up.

    If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file.

    This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y.

    Restrictions: Must be non-empty.

    Mbox Extension

    The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style mbox mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis. This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders.

    What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space.

    Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>mbox</name>
          <module>CedarBackup2.extend.mbox</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

    <mbox>
       <collect_mode>incr</collect_mode>
       <compress_mode>gzip</compress_mode>
       <file>
          <abs_path>/home/user1/mail/greylist</abs_path>
          <collect_mode>daily</collect_mode>
       </file>
       <dir>
          <abs_path>/home/user2/mail</abs_path>
       </dir>
       <dir>
          <abs_path>/home/user3/mail</abs_path>
          <exclude>
             <rel_path>spam</rel_path>
             <pattern>.*debian.*</pattern>
          </exclude>
       </dir>
    </mbox>
          

    Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively.

    Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed — only relative path exclusions and patterns.

    The following elements are part of the mbox configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    file

    An individual mbox file to be collected.

    This is a subsection which contains information about an individual mbox file to be backed up.

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The file subsection contains the following fields:

    collect_mode

    Collect mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this file.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox file to back up.

    Restrictions: Must be an absolute path.

    dir

    An mbox directory to be collected.

    This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively: only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored.
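    The non-recursive rule means only regular files directly inside the configured directory are candidates, e.g. (a sketch, demonstrated against a scratch directory):

```python
import os
import tempfile

def mbox_files(directory):
    """Files immediately within the directory; subdirectories are ignored."""
    return sorted(
        name for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
    )

# Demonstration against a scratch directory standing in for /home/user2/mail.
mail = tempfile.mkdtemp()
for name in ("inbox", "sent"):
    open(os.path.join(mail, name), "w").close()
os.mkdir(os.path.join(mail, "archive"))   # a subdirectory: ignored
files = mbox_files(mail)                  # -> ['inbox', 'sent']
```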

    This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured.

    The dir subsection contains the following fields:

    collect_mode

    Collect mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this directory.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the mbox directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    Encrypt Extension

    The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run. This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc.

    There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced.

    Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

    Warning

    If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe — someplace other than on your backup disc. If you lose your secret key, your backup will be useless.

    I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

    Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)

    An encrypted backup has the same file structure as a normal backup, so all of the instructions in AppendixC, Data Recovery apply. The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover decrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

    Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>encrypt</name>
          <module>CedarBackup2.extend.encrypt</module>
          <function>executeAction</function>
          <index>301</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

    <encrypt>
       <encrypt_mode>gpg</encrypt_mode>
       <encrypt_target>Backup User</encrypt_target>
    </encrypt>
          

    The following elements are part of the Encrypt configuration section:

    encrypt_mode

    Encryption mode.

    This value specifies which encryption mechanism will be used by the extension.

    Currently, only the GPG public-key encryption mechanism is supported.

    Restrictions: Must be gpg.

    encrypt_target

    Encryption target.

    The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.

    Split Extension

    The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.

    You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span.

    The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits files at fixed byte offsets; it has no knowledge of file formats.

    Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It might not sound like a huge limitation, but cback-span may place an individual file on any disc in a set; the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.
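    If you ever need to recover a split file by hand, the chunks must be concatenated back together in order before the data is usable. Here is an informal sketch (the chunk-name suffix used in the example is an assumption for illustration; check your staging directory for the actual names the split tool produced):

```python
import glob

def reassemble(chunkPattern, outputPath):
    """Concatenate split chunks back into the original file.

    Assumes the chunk names sort lexicographically in the order they were
    produced, which holds for the numeric suffixes generated by split.
    """
    chunks = sorted(glob.glob(chunkPattern))
    if not chunks:
        raise IOError("no chunks match %s" % chunkPattern)
    with open(outputPath, "wb") as output:
        for chunk in chunks:
            with open(chunk, "rb") as part:
                output.write(part.read())
    return chunks
```

    For example, reassemble("file.tar.gz_*", "file.tar.gz") would rebuild file.tar.gz from chunks named file.tar.gz_00000, file.tar.gz_00001, etc. (hypothetical names).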

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions> 
       <action>
          <name>split</name>
          <module>CedarBackup2.extend.split</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

    <split>
       <size_limit>250 MB</size_limit>
       <split_size>100 MB</split_size>
    </split>
          

    The following elements are part of the Split configuration section:

    size_limit

    Size limit.

    Files with a size strictly larger than this limit will be split by the extension.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.
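    As an informal illustration of how the two forms can be interpreted, here is a sketch that assumes binary units (1 KB = 1024 bytes); it is not the actual ByteQuantity implementation used by Cedar Backup:

```python
def parseByteQuantity(value):
    """Interpret a size value: either a plain number (assumed to be bytes)
    or a number followed by a unit (KB, MB, GB).

    Sketch only, assuming binary units; not Cedar Backup's actual code.
    """
    units = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}
    parts = value.strip().split()
    if len(parts) == 1:
        return float(parts[0])      # plain number: assumed to be bytes
    number, unit = parts
    return float(number) * units[unit.upper()]
```

    Under this assumption, "10240" is 10240 bytes and "250 MB" is 262144000 bytes.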

    split_size

    Split size.

    This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    Restrictions: Must be a size as described above.

    Capacity Extension

    The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused.

    This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>capacity</name>
          <module>CedarBackup2.extend.capacity</module>
          <function>executeAction</function>
          <index>299</index>
       </action>
    </extensions>
          

    This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

    <capacity>
       <max_percentage>95.5</max_percentage>
    </capacity>
          

    This example configures the extension to warn if the media has fewer than 16 MB free:

    <capacity>
       <min_bytes>16 MB</min_bytes>
    </capacity>
          

    The following elements are part of the Capacity configuration section:

    max_percentage

    Maximum percentage of the media that may be utilized.

    You must provide either this value or the min_bytes value.

    Restrictions: Must be a floating point number between 0.0 and 100.0

    min_bytes

    Minimum number of free bytes that must be available.

    You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB).

    Valid examples are 10240, 250 MB or 1.1 GB.

    You must provide either this value or the max_percentage value.

    Restrictions: Must be a byte quantity as described above.
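    To make the two thresholds concrete, here is an illustrative sketch of the kind of check the extension performs (not the extension's actual code):

```python
def mediaNeedsReplacing(capacityBytes, usedBytes, maxPercentage=None, minBytes=None):
    """Return True if the media exceeds the configured capacity threshold.

    Exactly one of maxPercentage or minBytes should be supplied, mirroring
    the configuration rules above.  Illustrative sketch only.
    """
    if (maxPercentage is None) == (minBytes is None):
        raise ValueError("Provide either maxPercentage or minBytes, not both.")
    if maxPercentage is not None:
        return 100.0 * usedBytes / capacityBytes > maxPercentage
    return capacityBytes - usedBytes < minBytes
```

    With max_percentage set to 95.5, media that is 96% full triggers the warning; with min_bytes set to 16 MB, media with only 10 MB free does.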

    Appendix A. Extension Architecture Interface

    The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension.

    You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file.

    There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

    <extensions>
       <action>
          <name>database</name>
          <module>foo</module>
          <function>bar</function>
          <index>101</index>
       </action> 
    </extensions>
          

    In this case, the action database has been mapped to the extension function foo.bar().

    Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

    1. Extensions may not write to stdout or stderr using functions such as print or sys.stdout.write.

    2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

    3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

    4. Extensions may not return any value.

    5. Extensions must throw a Python exception containing a descriptive message if processing fails. Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

    6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

    7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

    8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

    Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

    def function(configPath, options, config):
       """Sample extension function."""
       pass
          

    This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed.
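    For illustration, here is a slightly fuller hypothetical skeleton that follows the rules above. The executeAction name matches the configuration examples in this manual and the logger topic follows rule 2, but the child topic suffix and the body are assumptions:

```python
import logging

# Rule 2: all logging goes through the CedarBackup2.log topic.  The
# ".extend.database" suffix is a hypothetical child topic for this example.
logger = logging.getLogger("CedarBackup2.log.extend.database")

def executeAction(configPath, options, config):
    """Hypothetical entry point for the 'database' extended action.

    Takes the three standard arguments, returns nothing, and raises an
    exception with a descriptive message if processing fails (rules 4-5).
    """
    logger.info("Executing database extended action.")
    logger.debug("Configuration path is %s", configPath)
    if config is None:
        raise ValueError("Cedar Backup configuration is not available.")
    # Real work would go here; any command-line utility must be run via
    # CedarBackup2.util.executeCommand (rule 3), and nothing may be written
    # directly to stdout or stderr (rule 1).
    logger.info("Database extended action completed.")
```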

    The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website. The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3).

    If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions.

    For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

    <database>
       <repository>/path/to/repo1</repository>
       <repository>/path/to/repo2</repository>
    </database>
          

    In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.
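    As a sketch of the second approach, standalone parsing of the example section above might look like this, using the standard library's ElementTree. Note that in a real configuration file the database section would sit inside the full configuration document, so the lookup path would need to be adjusted:

```python
import xml.etree.ElementTree as ET

# The example <database> section from above, as it might be read from disk.
databaseSection = """
<database>
   <repository>/path/to/repo1</repository>
   <repository>/path/to/repo2</repository>
</database>
"""

def repositoryPaths(xmlText):
    """Return the repository paths configured in a <database> section."""
    root = ET.fromstring(xmlText)
    return [node.text for node in root.findall("repository")]
```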

    Appendix B. Dependencies

    Python 2.7

    Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it.

    If you can't find a package for your system, install from the package source, using the upstream link.

    RSH Server and Client

    Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic rsh client), most users should only use an SSH (secure shell) server and client.

    The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package them separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

    If you can't find SSH client or server packages for your system, install from the package source, using the upstream link.

    mkisofs

    The mkisofs command is used to create ISO filesystem images that can later be written to backup media.

    On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    cdrecord

    The cdrecord command is used to write ISO images to CD media in a backup device.

    On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you.

    If you can't find a package for your system, install from the package source, using the upstream link.

    dvd+rw-tools

    The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    eject and volname

    The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc.

    The volname command is used to determine the volume name of media in a backup device.

    If you can't find a package for your system, install from the package source, using the upstream link.

    mount and umount

    The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

    If you can't find a package for your system, install from the package source, using the upstream link.

    grepmail

    The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.

    If you can't find a package for your system, install from the package source, using the upstream link.

    gpg

    The gpg command is used by the encrypt extension to encrypt files.

    If you can't find a package for your system, install from the package source, using the upstream link.

    split

    The split command is used by the split extension to split up large files.

    This command is typically part of the core operating system install and is not distributed in a separate package.

    AWS CLI

    AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage.

    After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide.

    The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version. On these platforms, the easiest way to install it is via pip: apt-get install python-pip, and then pip install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

    Chardet

    The cback-amazons3-sync command relies on the Chardet python package to check filename encoding. You only need this package if you are going to use the sync tool.

    Appendix C. Data Recovery

    Finding your Data

    The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore from backup media, or from existing staging data that has not yet been purged. The only difference is that, if you purge staging data less frequently than once per week, you might have some data available in the staging directories that would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)

    Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

    This is the root directory of my example disc:

    root:/mnt/cdrw# ls -l
    total 4
    drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/
          

    In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

    Within each year directory is one subdirectory for each month represented in the backup.

    root:/mnt/cdrw/2005# ls -l
    total 2
    dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/
          

    In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

    Within each month directory is one subdirectory for each day represented in the backup.

    root:/mnt/cdrw/2005/09# ls -l
    total 8
    dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
    dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
    dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
    dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/
          

    Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven.

    Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

    root:/mnt/cdrw/2005/09/07# ls -l
    total 10
    dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
    -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
    dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
    dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/
          

    In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.

    Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.

    root:/mnt/cdrw/2005/09/07/host1# ls -l
    total 157976
    -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
    -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
    -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
    -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
    -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
    -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
    -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
    -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
    -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
    -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
    -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
    -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
    -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
    -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
    -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2
             

    As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent.

    Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.

    The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

    The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

    Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.

    Recovering Filesystem Data

    Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before .tar), represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
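    The naming convention above can be sketched as a simple mapping, shown here for illustration only (the collect action performs the real mapping internally):

```python
def archiveNameForPath(path, extension="tar"):
    """Derive the collect archive name for an absolute directory path,
    following the naming convention described above: slashes become
    dashes, and the root directory is the special case '-'.

    Illustrative sketch, not the collect action's actual code.
    """
    if path == "/":
        base = "-"
    else:
        base = path.strip("/").replace("/", "-")
    return "%s.%s" % (base, extension)
```

    For example, archiveNameForPath("/var/lib/jspwiki") yields "var-lib-jspwiki.tar", and archiveNameForPath("/") yields "-.tar".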

    If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

    Full Restore

    To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one starting from oldest to newest. (This way, if a file changed every day you will always get the latest one.)

    All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem, or into a temporary location.

    For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

    root:/# bzcat boot.tar.bz2 | tar xvf -
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    If you want to extract boot.tar.gz into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

    root:/tmp# bzcat boot.tar.bz2 | tar xvf -
             

    Again, use zcat or just cat as appropriate.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Partial Restore

    Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things).

    The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Where with a full restore, you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file — since the same file, if changed frequently, would appear in more than one backup.

    Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known contact with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place.

    Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

    root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
             

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    The tvf flags tell tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternately, you can omit the path/to/file and page through the output using more or less.

    If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there.
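    If you have many archives to work through, Python's standard tarfile module can automate the equivalent of running tar tvf against each file in turn (a convenience sketch, not part of Cedar Backup; mode "r:*" lets tarfile handle .gz and .bz2 compression transparently):

```python
import tarfile

def findFileInArchives(archivePaths, memberPath):
    """Return the archives (full backup plus incrementals) that contain
    the given relative member path.  Convenience sketch for searching a
    set of backup tarfiles.
    """
    matches = []
    for archivePath in archivePaths:
        with tarfile.open(archivePath, "r:*") as archive:
            if memberPath in archive.getnames():
                matches.append(archivePath)
    return matches
```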

    Once you have found your file, extract it using xvf:

    root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
             

    Again, use zcat or just cat as appropriate.

    Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file.

    For more information, you might want to check out the manpage or GNU info documentation for the tar command.

    Recovering MySQL Data

    MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

    Warning

    I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it!

    MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

    First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

    If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute:

    daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

    If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
          

    Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
          

    Again, use zcat or just cat as appropriate.

    For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.

    Recovering Subversion Data

    Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
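    If you need to script the restore, the revision range and repository name can be pulled out of the dump filename, following the convention just described (a small helper for illustration, not part of Cedar Backup):

```python
import re

# Matches names like svndump-0:782-opt-svn-repo1.txt.bz2 (compression
# suffix optional, per the convention described above).
_PATTERN = re.compile(r"^svndump-(\d+):(\d+)-(.*?)\.txt(\.(gz|bz2))?$")

def parseSvndumpName(filename):
    """Split an svndump backup filename into
    (startRevision, endRevision, repositoryName)."""
    match = _PATTERN.match(filename)
    if match is None:
        raise ValueError("not an svndump backup filename: %s" % filename)
    return (int(match.group(1)), int(match.group(2)), match.group(3))
```

    Sorting filenames by the parsed starting revision gives the correct order in which to load the full backup followed by each incremental.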

    Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

    Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic.

    root:/tmp# svnadmin create --fs-type=fsfs testrepo
          

    Next, load the full backup into the repository:

    root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Follow that with loads for each of the incremental backups:

    root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
    root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Again, use zcat or just cat as appropriate.

    When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).

    Note

    Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both the old and new repositories, the results are identical. This means that the repositories do contain the same content.

    For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).

    Recovering Mailbox Data

    Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring.

    Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration.

    There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date.

    Once you have found the files you are looking for, the restoration procedure is fairly simple. First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any).

    Here is an example for a single backed-up file:

    root:/tmp# rm restore.mbox # make sure it's not left over
    root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
    root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
    root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
          

    At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist.

    Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat.
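    Putting those pieces together, the loop below is one way to handle a mix of compression types in date order. This is only a sketch: it runs in a scratch directory with tiny made-up backup files whose names follow the pattern from the example above.

```shell
#!/bin/sh
# Sketch: concatenate mbox backups (plain, gzip, bzip2) in date order.
# The backup files created here are tiny stand-ins for real ones.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'From a\nbody1\n' | gzip  > mbox-20060624-home-user-mail-greylist.gz
printf 'From b\nbody2\n' | bzip2 > mbox-20060625-home-user-mail-greylist.bz2
printf 'From c\nbody3\n'         > mbox-20060626-home-user-mail-greylist

rm -f restore.mbox
for f in mbox-*-home-user-mail-greylist*; do   # glob sorts by date prefix
    case "$f" in
        *.gz)  gzip  -dc "$f" >> restore.mbox ;;
        *.bz2) bzip2 -dc "$f" >> restore.mbox ;;
        *)     cat       "$f" >> restore.mbox ;;
    esac
done
wc -l < restore.mbox   # 6: two lines from each of the three backups
```

    After this, restore.mbox would be the input to the grepmail de-duplication step shown above.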

    If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together as shown above for the individual case.

    Recovering Data split by the Split Extension

    The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command.

    The split files are not difficult to work with. Simply find all of the files (which could be spread across multiple discs) and concatenate them together.

    root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
    root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
          

    Then, use the resulting file as usual.

    Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
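    One quick sanity check is that the reassembled file's size equals the sum of the chunk sizes. Here is a small sketch using made-up chunks in a scratch directory:

```shell
#!/bin/sh
# Sketch: reassemble split chunks and verify the total size.  The chunk
# files here are tiny stand-ins for real cback-span output.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'AAAA' > usr-src-software.tar.gz_00001
printf 'BBBB' > usr-src-software.tar.gz_00002
printf 'CC'   > usr-src-software.tar.gz_00003

rm -f usr-src-software.tar.gz
cat usr-src-software.tar.gz_* > usr-src-software.tar.gz  # glob sorts chunks

wc -c < usr-src-software.tar.gz   # 10: 4 + 4 + 2 bytes
```

    With a real split archive, you could also run the result through "gzip -t" or "tar -tzf" to confirm it is not corrupt or truncated.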

    Appendix D. Securing Password-less SSH Connections

    Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients.

    Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers.

    Traditionally, Cedar Backup has relied on a segmenting strategy to minimize the risk. Although the backup typically runs as root — so that all parts of the filesystem can be backed up — we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections.

    With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user.

    Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy — they simply may not have a way to create a login which is only used for backups.

    So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a filter in place on an SSH connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

    command="command"
       Specifies that the command is executed whenever this key is used for
       authentication.  The command supplied by the user (if any) is ignored.  The
       command is run on a pty if the client requests a pty; otherwise it is run
       without a tty.  If an 8-bit clean channel is required, one must not request
       a pty or should specify no-pty.  A quote may be included in the command by
       quoting it with a backslash.  This option might be useful to restrict
       certain public keys to perform just a specific operation.  An example might
       be a key that permits remote backups but nothing else.  Note that the client
       may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
       Note that this option applies to shell, command or subsystem execution.
          

    Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer.

    So, let's imagine that we have two hosts: master mickey, and peer minnie. Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

    ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
    =m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
    1-2341=-a0sd=-sa0=1z= backup@mickey
          

    This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.

    To put the filter in place, we add a command option to the key, like this:

    command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
    3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
    tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
          

    Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to.

    A very basic validate-backup script might look something like this:

    #!/bin/bash
    if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
        ${SSH_ORIGINAL_COMMAND}
    else
        echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
        exit 1
    fi
          

    This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed.

    For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master).

    If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

    Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
    OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
    debug1: Reading configuration data /home/backup/.ssh/config
    debug1: Applying options for daystrom
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
          

    Omit the -v and you have your command: scp -f .profile.

    For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

    scp -f /path/to/collect/cback.collect
    scp -f /path/to/collect/*
    scp -t /path/to/collect/cback.stage
          
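    A validate-backup filter that admits exactly these commands might be structured like the partial sketch below. Everything here is a placeholder: the COLLECT path, the function name, and the demonstration calls at the end; a real script would run ${SSH_ORIGINAL_COMMAND} when it is allowed and exit 1 with an error message otherwise.

```shell
#!/bin/sh
# Partial sketch of a validate-backup filter for a non-managed peer.
# COLLECT is a placeholder; point it at the real collect directory.
COLLECT=/path/to/collect

allowed() {
    case "$1" in
        "scp -f ${COLLECT}/cback.collect" | \
        "scp -f ${COLLECT}/"* | \
        "scp -t ${COLLECT}/cback.stage") return 0 ;;
        *) return 1 ;;
    esac
}

# In the real script: run ${SSH_ORIGINAL_COMMAND} when allowed, otherwise
# print an error and exit 1.  Quick demonstration of the logic:
allowed "scp -f ${COLLECT}/cback.collect" && echo "allowed"
allowed "rm -rf /" || echo "rejected"
```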

    If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

    /usr/bin/cback --full collect
    /usr/bin/cback collect
          

    Of course, you would have to list the actual path to the cback executable — exactly the one listed in the <cback_command> configuration option for your managed peer.

    I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

    Appendix E. Copyright

    
    Copyright (c) 2004-2011,2013-2015
    Kenneth J. Pronovici
    
    This work is free; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (the "GPL"), Version 2,
    as published by the Free Software Foundation.
    
    For the purposes of the GPL, the "preferred form of modification"
    for this work is the original Docbook XML text files.  If you
    choose to distribute this work in a compiled form (i.e. if you
    distribute HTML, PDF or Postscript documents based on the original
    Docbook XML text files), you must also consider image files to be
    "source code" if those images are required in order to construct a
    complete and readable compiled version of the work.
    
    This work is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Copies of the GNU General Public License are available from
    the Free Software Foundation website, http://www.gnu.org/.
    You may also write the Free Software Foundation, Inc., 
    51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
    
    ====================================================================
    
    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
         51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Library General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    		     END OF TERMS AND CONDITIONS
    
    ====================================================================
    
          

    Setting up a Pool of One

    Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one).

    Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked.

    Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

    Tip

    This setup procedure discusses how to set up Cedar Backup in the normal case for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

    Step 1: Decide when you will run your backup.

    There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later.

    Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

    Warning

    Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be confused until the next week begins, or until you re-run the backup using the --full flag.
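    When you later create the cron jobs, a pool-of-one schedule often ends up looking like the sketch below. The times, the /usr/bin/cback path, and the one-entry-per-action layout are all hypothetical examples; adjust them so the heavy steps avoid your normal usage hours.

```
# Hypothetical entries for /etc/crontab or a file under /etc/cron.d
30 00 * * * root /usr/bin/cback collect
00 02 * * * root /usr/bin/cback stage
00 04 * * * root /usr/bin/cback store
00 06 * * * root /usr/bin/cback purge
```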

    Step 2: Make sure email works.

    Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur.

    In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.
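    As a sketch, on systems with a sendmail-compatible MTA, forwarding root's mail can be as simple as an /etc/aliases entry like the one below (the destination address is a placeholder); remember to run newaliases after editing:

```
# /etc/aliases (hypothetical entry)
root: admin@example.com
```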

    Step 3: Configure your writer device.

    Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.

    Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations.

    See the section called “Configuring your Writer Device” for more information on writer devices and how they are configured.

    Note

    There is no need to set up your CD/DVD device if you have decided not to execute the store action.

    Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.
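
    As a sketch, the writer-related elements of the store configuration section might look like this (element names follow the store configuration reference; treat the values as placeholders for your own hardware):

```
<store>
   <device_type>cdwriter</device_type>
   <target_device>/dev/cdrw</target_device>
   <!-- only for writers addressed as SCSI devices, and only for CD writers: -->
   <target_scsi_id>0,0,0</target_scsi_id>
</store>
```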

    Step 4: Configure your backup user.

    Choose a user to be used for backups. Some platforms come with a ready-made backup user. On other platforms, you may have to create a user yourself. You may choose any user name you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

    Note

    Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.
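
    A quick way to find out whether your system already provides such a user (Debian does) is shown below; the useradd command in the comment is a typical Linux invocation, not the only way to create the user:

```shell
# Check whether a "backup" user already exists; if not, create a system
# user for backups with your distribution's tools, for example:
#     useradd --system backup
getent passwd backup || echo "no backup user yet - create one"
```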

    Step 5: Create your backup tree.

    Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space.

    You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

    /opt/
         backup/
                collect/
                stage/
                tmp/
             

    If you will be backing up sensitive information (e.g. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.
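
    The layout and permissions above can be created like this (run as root; /opt/backup and the owner name backup follow the suggestions in this step, so adjust both to match your choices):

```shell
# Create the recommended backup tree with restrictive permissions
mkdir -p /opt/backup/collect /opt/backup/stage /opt/backup/tmp
chmod 700 /opt/backup /opt/backup/collect /opt/backup/stage /opt/backup/tmp
# Once your backup user exists, give it ownership:
# chown -R backup:backup /opt/backup
```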

    Note

    You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my dumping ground for filesystems that Debian does not manage.

    Some users have requested that the Debian packages set up a more standard location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

    Step 6: Create the Cedar Backup configuration file.

    Following the instructions in the section called “Configuration File Format” (above) create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge.
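
    At a very high level, the resulting file contains the general sections plus those four action-specific sections, something like this abridged sketch (the comments are illustrative; see the sample configuration file and the reference sections above for the real required fields):

```
<cb_config>
   <options>   <!-- general options: backup user, start of week, etc. -->
   </options>
   <collect>   <!-- what to back up on this machine -->
   </collect>
   <stage>     <!-- where collected data is staged -->
   </stage>
   <store>     <!-- writer device and media settings -->
   </store>
   <purge>     <!-- retention for collect and stage directories -->
   </purge>
</cb_config>
```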

    The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

    Warning

    Configuration files should always be writable only by root (or by the file owner, if the owner is not root).

    If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).
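
    For a root-owned configuration file in the default location, the permissions can be locked down like this:

```shell
# Restrict the configuration file to its owner (root in this example);
# the guard keeps the commands harmless if the file does not exist yet
if [ -f /etc/cback.conf ]; then
    chown root:root /etc/cback.conf
    chmod 600 /etc/cback.conf
fi
```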

    Step 7: Validate the Cedar Backup configuration file.

    Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.

    Note: the most common cause of configuration problems is failing to close XML tags properly. Any XML tag that is opened must be closed appropriately.

    Step 8: Test your backup.

    Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully.

    Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read.

    If Cedar Backup ever completes normally but the disc that is created is not usable, please report this as a bug. [22] To be safe, always enable the consistency check option in the store configuration section.
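
    A simple way to scan the logfile after a test run is sketched below (the path assumes the default /var/log/cback.log; the messages are illustrative):

```shell
# Look for error messages in the Cedar Backup log
LOG=/var/log/cback.log
grep -i "error" "$LOG" 2>/dev/null && echo "investigate before trusting this backup" \
    || echo "no errors found in log"
```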

    Step 9: Modify the backup cron jobs.

    Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

    30 00 * * * root  cback all
             

    Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

    #!/bin/sh
    cback all
             

    You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.
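
    With that switch added, the /etc/crontab entry shown earlier becomes:

```
30 00 * * * root  cback --output all
```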

    Note

    For general information about using cron, see the manpage for crontab(5).

    On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the Single machine (pool of one) entry in the file, and change the line so that the backup goes off when you want it to.

Cedar Backup 2 Software Manual

Kenneth J. Pronovici

Copyright 2005-2008,2013-2015 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work.

This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

-------------------------------------------------------------------------------

Table of Contents

Preface
    Purpose
    Audience
    Conventions Used in This Book
        Typographic Conventions
        Icons
    Organization of This Manual
    Acknowledgments
1. Introduction
    What is Cedar Backup?
    Migrating from Version 2 to Version 3
    How to Get Support
    History
2. Basic Concepts
    General Architecture
    Data Recovery
    Cedar Backup Pools
    The Backup Process
        The Collect Action
        The Stage Action
        The Store Action
        The Purge Action
        The All Action
        The Validate Action
        The Initialize Action
        The Rebuild Action
    Coordination between Master and Clients
    Managed Backups
    Media and Device Types
    Incremental Backups
    Extensions
3. Installation
    Background
    Installing on a Debian System
    Installing from Source
        Installing Dependencies
        Installing the Source Package
4. Command Line Tools
    Overview
    The cback command
        Introduction
        Syntax
        Switches
        Actions
    The cback-amazons3-sync command
        Introduction
        Syntax
        Switches
    The cback-span command
        Introduction
        Syntax
        Switches
        Using cback-span
        Sample run
5. Configuration
    Overview
    Configuration File Format
    Sample Configuration File
    Reference Configuration
    Options Configuration
    Peers Configuration
    Collect Configuration
    Stage Configuration
    Store Configuration
    Purge Configuration
    Extensions Configuration
    Setting up a Pool of One
        Step 1: Decide when you will run your backup.
        Step 2: Make sure email works.
        Step 3: Configure your writer device.
        Step 4: Configure your backup user.
        Step 5: Create your backup tree.
        Step 6: Create the Cedar Backup configuration file.
        Step 7: Validate the Cedar Backup configuration file.
        Step 8: Test your backup.
        Step 9: Modify the backup cron jobs.
    Setting up a Client Peer Node
        Step 1: Decide when you will run your backup.
        Step 2: Make sure email works.
        Step 3: Configure the master in your backup pool.
        Step 4: Configure your backup user.
        Step 5: Create your backup tree.
        Step 6: Create the Cedar Backup configuration file.
        Step 7: Validate the Cedar Backup configuration file.
        Step 8: Test your backup.
        Step 9: Modify the backup cron jobs.
    Setting up a Master Peer Node
        Step 1: Decide when you will run your backup.
        Step 2: Make sure email works.
        Step 3: Configure your writer device.
        Step 4: Configure your backup user.
        Step 5: Create your backup tree.
        Step 6: Create the Cedar Backup configuration file.
        Step 7: Validate the Cedar Backup configuration file.
        Step 8: Test connectivity to client machines.
        Step 9: Test your backup.
        Step 10: Modify the backup cron jobs.
    Configuring your Writer Device
        Device Types
        Devices identified by device name
        Devices identified by SCSI id
        Linux Notes
        Finding your Linux CD Writer
        Mac OS X Notes
    Optimized Blanking Strategy
6. Official Extensions
    System Information Extension
    Amazon S3 Extension
    Subversion Extension
    MySQL Extension
    PostgreSQL Extension
    Mbox Extension
    Encrypt Extension
    Split Extension
    Capacity Extension
A. Extension Architecture Interface
B. Dependencies
C. Data Recovery
    Finding your Data
    Recovering Filesystem Data
        Full Restore
        Partial Restore
    Recovering MySQL Data
    Recovering Subversion Data
    Recovering Mailbox Data
    Recovering Data split by the Split Extension
D. Securing Password-less SSH Connections
E. Copyright

Preface

Table of Contents

Purpose
Audience
Conventions Used in This Book
    Typographic Conventions
    Icons
Organization of This Manual
Acknowledgments

Purpose

This software manual has been written to document version 2 of Cedar Backup, originally released in early 2005.

Audience

This manual has been written for computer-literate administrators who need to use and configure Cedar Backup on their Linux or UNIX-like system. The examples in this manual assume the reader is relatively comfortable with UNIX and command-line interfaces.

Conventions Used in This Book

This section covers the various conventions used in this manual.

Typographic Conventions

Term         Used for first use of important terms.
Command      Used for commands, command output, and switches.
Replaceable  Used for replaceable items in code and text.
Filenames    Used for file and directory names.

Icons

Note

This icon designates a note relating to the surrounding text.

Tip

This icon designates a helpful tip relating to the surrounding text.

Warning

This icon designates a warning relating to the surrounding text.

Organization of This Manual

Chapter 1, Introduction

Provides some general history about Cedar Backup, what needs it is intended to meet, how to get support, and how to migrate from version 2 to version 3.
Chapter 2, Basic Concepts

Discusses the basic concepts of a Cedar Backup infrastructure, and specifies terms used throughout the rest of the manual.

Chapter 3, Installation

Explains how to install the Cedar Backup package either from the Python source distribution or from the Debian package.

Chapter 4, Command Line Tools

Discusses the various Cedar Backup command-line tools, including the primary cback command.

Chapter 5, Configuration

Provides detailed information about how to configure Cedar Backup.

Chapter 6, Official Extensions

Describes each of the officially-supported Cedar Backup extensions.

Appendix A, Extension Architecture Interface

Specifies the Cedar Backup extension architecture interface, through which third party developers can write extensions to Cedar Backup.

Appendix B, Dependencies

Provides some additional information about the packages which Cedar Backup relies on, including information about how to find documentation and packages on non-Debian systems.

Appendix C, Data Recovery

Cedar Backup provides no facility for restoring backups, assuming the administrator can handle this infrequent task. This appendix provides some notes for administrators to work from.

Appendix D, Securing Password-less SSH Connections

Password-less SSH connections are a necessary evil when remote backup processes need to execute without human interaction. This appendix describes some ways that you can reduce the risk to your backup pool should your master machine be compromised.

Acknowledgments

The structure of this manual and some of the basic boilerplate has been taken from the book Version Control with Subversion. Thanks to the authors (and O'Reilly) for making this excellent reference available under a free and open license.

Chapter 1. Introduction

Table of Contents

What is Cedar Backup?
Migrating from Version 2 to Version 3
How to Get Support
History

“Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it.”
Linus Torvalds, at the release of Linux 2.0.8 in July of 1996.

What is Cedar Backup?

Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources.

Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough (and almost all hardware is today), Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media.

Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python 2 programming language.

There are many different backup software implementations out there in the open source world. Cedar Backup aims to fill a niche: it aims to be a good fit for people who need to back up a limited amount of important data on a regular basis. Cedar Backup isn't for you if you want to back up your huge MP3 collection every night, or if you want to back up a few hundred machines. However, if you administer a small set of machines and you want to run daily incremental backups for things like system configuration, current email, small web sites, Subversion or Mercurial repositories, or small MySQL databases, then Cedar Backup is probably worth your time.

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 2, it should run without problems on just about any UNIX-like operating system.
In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems.

To run a Cedar Backup client, you really just need a working Python 2 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images or talking to the Amazon S3 infrastructure. A full list of dependencies is provided in the section called “Installing Dependencies”.

Migrating from Version 2 to Version 3

The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible.

A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc.

So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc.
You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup.

How to Get Support

Cedar Backup is open source software that is provided to you at no cost. It is provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. However, that said, someone can usually help you solve whatever problems you might see.

If you experience a problem, your best bet is to file an issue in the issue tracker at BitBucket. ^[1]

When the source code was hosted at SourceForge, there was a mailing list. However, it was very lightly used in the last years before I abandoned SourceForge, and I have decided not to replace it.

If you are not comfortable discussing your problem in public or listing it in a public database, or if you need to send along information that you do not want made public, then you can write . That mail will go directly to me. If you write the support address about a bug, a “scrubbed” bug report will eventually end up in the public bug database anyway, so if at all possible you should use the public reporting mechanisms. One of the strengths of the open-source software development model is its transparency.

Regardless of how you report your problem, please try to provide as much information as possible about the behavior you observed and the environment in which the problem behavior occurred. ^[2]

In particular, you should provide: the version of Cedar Backup that you are using; how you installed Cedar Backup (e.g. Debian package, source package, etc.); the exact command line that you executed; any error messages you received, including Python stack traces (if any); and relevant sections of the Cedar Backup log.
It would be even better if you could describe exactly how to reproduce the problem, for instance by including your entire configuration file and/or specific information about your system that might relate to the problem. However, please do not provide huge sections of debugging logs unless you are sure they are relevant or unless someone asks for them.

Tip

Sometimes, the error that Cedar Backup displays can be rather cryptic. This is because under internal error conditions, the text related to an exception might get propagated all of the way up to the user interface. If the message you receive doesn't make much sense, or if you suspect that it results from an internal error, you might want to re-run Cedar Backup with the --stack option. This forces Cedar Backup to dump the entire Python stack trace associated with the error, rather than just printing the last message it received. This is good information to include along with a bug report, as well.

History

Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup. These scripts met an immediate need (which was to back up skyjammer.com and some personal machines) but proved to be unstable, overly verbose and rather difficult to maintain.

In early 2002, work began on a rewrite of kbackup. The goal was to address many of the shortcomings of the original application, as well as to clean up the code and make it available to the general public. While doing research related to code I could borrow or base the rewrite on, I discovered that there was already an existing backup package with the name kbackup, so I decided to change the name to Cedar Backup instead.

Because I had become fed up with the prospect of maintaining a large volume of Perl code, I decided to abandon that language in favor of Python. ^[3] At the time, I chose Python mostly because I was interested in learning it, but in retrospect it turned out to be a very good decision.
From my perspective, Python has almost all of the strengths of Perl, but few of its inherent weaknesses (I feel that primarily, Python code often ends up being much more readable than Perl code). Around this same time, skyjammer.com and cedar-solutions.com were converted to run Debian GNU/Linux (potato) ^[4] and I entered the Debian new maintainer queue, so I also made it a goal to implement Debian packages along with a Python source distribution for the new release. Version 1.0 of Cedar Backup was released in June of 2002. We immediately began using it to back up skyjammer.com and cedar-solutions.com, where it proved to be much more stable than the original code. In the meantime, I continued to improve as a Python programmer and also started doing a significant amount of professional development in Java. It soon became obvious that the internal structure of Cedar Backup 1.0, while much better than kbackup, still left something to be desired. In November 2003, I began an attempt at cleaning up the codebase. I converted all of the internal documentation to use Epydoc, ^[5] and updated the code to use the newly-released Python logging package ^[6] after having a good experience with Java's log4j. However, I was still not satisfied with the code, which did not lend itself to the automated regression testing I had used when working with junit in my Java code. So, rather than releasing the cleaned-up code, I instead began another ground-up rewrite in May 2004. With this rewrite, I applied everything I had learned from other Java and Python projects I had undertaken over the last few years. I structured the code to take advantage of Python's unique ability to blend procedural code with object-oriented code, and I made automated unit testing a primary requirement. The result was the 2.0 release, which is cleaner, more compact, better focused, and better documented than any release before it. 
Utility code is less application-specific, and is now usable as a general-purpose library. The 2.0 release also includes a complete regression test suite of over 3000 tests, which will help to ensure that quality is maintained as development continues into the future. ^[7]

The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release, with minimal additional functionality. The conversion from Python 2 to Python 3 started in mid-2015, about 5 years before the anticipated deprecation of Python 2 in 2020. Most users should consider transitioning to the 3.0 release.

-------------------------------------------------------------------------------

^[1] See https://bitbucket.org/cedarsolutions/cedar-backup2/issues.

^[2] See Simon Tatham's excellent bug reporting tutorial: http://www.chiark.greenend.org.uk/~sgtatham/bugs.html.

^[3] See http://www.python.org/.

^[4] Debian's stable releases are named after characters in the Toy Story movie.

^[5] Epydoc is a Python code documentation tool. See http://epydoc.sourceforge.net/.

^[6] See http://docs.python.org/lib/module-logging.html.

^[7] Tests are implemented using Python's unit test framework. See http://docs.python.org/lib/module-unittest.html.

Chapter 2. Basic Concepts

Table of Contents

General Architecture
Data Recovery
Cedar Backup Pools
The Backup Process
    The Collect Action
    The Stage Action
    The Store Action
    The Purge Action
    The All Action
    The Validate Action
    The Initialize Action
    The Rebuild Action
Coordination between Master and Clients
Managed Backups
Media and Device Types
Incremental Backups
Extensions

General Architecture

Cedar Backup is architected as a Python package (library) and a single executable (a Python script). The Python package provides both application-specific code and general utilities that can be used by programs other than Cedar Backup. It also includes modules that can be used by third parties to extend Cedar Backup or provide related functionality.
The cback script is designed to run as root, since otherwise it's difficult to back up system directories or write to the CD/DVD device. However, pains are taken to use the backup user's effective user id (specified in configuration) when appropriate. Note: this does not mean that cback runs setuid ^[8] or setgid. However, all files on disk will be owned by the backup user, and all rsh-based network connections will take place as the backup user.

The cback script is configured via command-line options and an XML configuration file on disk. The configuration file is normally stored in /etc/cback.conf, but this path can be overridden at runtime. See Chapter 5, Configuration for more information on how Cedar Backup is configured.

Warning

You should be aware that backups to CD/DVD media can probably be read by any user who has permissions to mount the CD/DVD writer. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. See also the section called “Encrypt Extension”.

Data Recovery

Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users.
I am uncomfortable asking anyone to rely on functionality that falls into this category. My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

Cedar Backup Pools

There are two kinds of machines in a Cedar Backup pool. One machine (the master) has a CD or DVD writer on it and writes the backup to disc. The others (clients) collect data to be written to disc by the master. Collectively, the master and client machines in a pool are called peer machines.

Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one) and in fact more users seem to use it like this than any other way.

The Backup Process

The Cedar Backup backup process is structured in terms of a set of decoupled actions which execute independently (based on a schedule in cron) rather than through some highly coordinated flow of control. This design decision has both positive and negative consequences. On the one hand, the code is much simpler and can choose to simply abort or log an error if its expectations are not met. On the other hand, the administrator must coordinate the various actions during initial set-up. See the section called “Coordination between Master and Clients” (later in this chapter) for more information on this subject.

A standard backup run consists of four steps (actions), some of which execute on the master machine, and some of which execute on one or more client machines. These actions are: collect, stage, store and purge. In general, more than one action may be specified on the command-line.
If more than one action is specified, then actions will be taken in a sensible order (generally collect, stage, store, purge). A special all action is also allowed, which implies all of the standard actions in the same sensible order.

The cback command also supports several actions that are not part of the standard backup run and cannot be executed along with any other actions. These actions are validate, initialize and rebuild. All of the various actions are discussed further below. See Chapter 5, Configuration for more information on how a backup run is configured.

Flexibility

Cedar Backup was designed to be flexible. It allows you to decide for yourself which backup steps you care about executing (and when you execute them), based on your own situation and your own priorities.

As an example, I always back up every machine I own. I typically keep 7-10 days of staging directories around, but switch CD/DVD media mostly every week. That way, I can periodically take a disc off-site in case the machine gets stolen or damaged.

If you're not worried about these risks, then there's no need to write to disc. In fact, some users prefer to use their master machine as a simple “consolidation point”. They don't back up any data on the master, and don't write to disc at all. They just use Cedar Backup to handle the mechanics of moving backed-up data to a central location. This isn't quite what Cedar Backup was written to do, but it is flexible enough to meet their needs.

The Collect Action

The collect action is the first action in a standard backup run. It executes on both master and client nodes. Based on configuration, this action traverses the peer's filesystem and gathers files to be backed up. Each configured high-level directory is collected up into its own tar file in the collect directory. The tarfiles can either be uncompressed (.tar) or compressed with either gzip (.tar.gz) or bzip2 (.tar.bz2).

There are three supported collect modes: daily, weekly and incremental.
Directories configured for daily backups are backed up every day. Directories configured for weekly backups are backed up on the first day of the week. Directories configured for incremental backups are traversed every day, but only the files which have changed (based on a saved-off SHA hash) are actually backed up.

Collect configuration also allows for a variety of ways to filter files and directories out of the backup. For instance, administrators can configure an ignore indicator file ^[9] or specify absolute paths or filename patterns ^[10] to be excluded. You can even configure a backup “link farm” rather than explicitly listing files and directories in configuration.

This action is optional on the master. You only need to configure and execute the collect action on the master if you have data to back up on that machine. If you plan to use the master only as a “consolidation point” to collect data from other machines, then there is no need to execute the collect action there. If you run the collect action on the master, it behaves the same there as anywhere else, and you have to stage the master's collected data just like any other client (typically by configuring a local peer in the stage action).

The Stage Action

The stage action is the second action in a standard backup run. It executes on the master peer node. The master works down the list of peers in its backup pool and stages (copies) the collected backup files from each of them into a daily staging directory by peer name.

For the purposes of this action, the master node can be configured to treat itself as a client node. If you intend to back up data on the master, configure the master as a local peer. Otherwise, just configure each of the clients as a remote peer.

Local and remote client peers are treated differently. Local peer collect directories are assumed to be accessible via normal copy commands (i.e.
on a mounted filesystem) while remote peer collect directories are accessed via an RSH-compatible command such as ssh. If a given peer is not ready to be staged, the stage process will log an error, abort the backup for that peer, and then move on to its other peers. This way, one broken peer cannot break the backup for other peers which are up and running. Keep in mind that Cedar Backup is flexible about which actions must be executed as part of a backup. If you would prefer, you can stop the backup process at this step and skip the store step. In this case, the staged directories will represent your backup rather than a disc.

Note: Directories "collected" by another process can be staged by Cedar Backup. If the file cback.collect exists in a collect directory when the stage action is taken, then that directory will be staged.

The Store Action

The store action is the third action in a standard backup run. It executes on the master peer node. The master machine determines the location of the current staging directory, and then writes the contents of that staging directory to disc. After the contents of the directory have been written to disc, an optional validation step ensures that the write was successful. If the backup is running on the first day of the week, if the drive does not support multisession discs, or if the --full option is passed to the cback command, the disc will be rebuilt from scratch. Otherwise, a new ISO session will be added to the disc each day the backup runs. This action is entirely optional. If you would prefer to just stage backup data from a set of peers to a master machine, and have the staged directories represent your backup rather than a disc, that is fine.

Warning: The store action is not supported on the Mac OS X (darwin) platform. On that platform, the "automount" function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware.
The Cedar Backup writer and image functionality works on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

Current Staging Directory

The store action tries to be smart about finding the current staging directory. It first checks the current day's staging directory. If that directory exists, and it has not yet been written to disc (i.e. there is no store indicator), then it will be used. Otherwise, the store action will look for an unused staging directory for either the previous day or the next day, in that order. A warning will be written to the log under these circumstances (controlled by the configuration value). This behavior varies slightly when the --full option is in effect. Under these circumstances, any existing store indicator will be ignored. Also, the store action will always attempt to use the current day's staging directory, ignoring any staging directories for the previous day or the next day. This way, running a full store action more than once will always produce the same results. (You might imagine a use case where a person wants to make several copies of the same full backup.)

The Purge Action

The purge action is the fourth and final action in a standard backup run. It executes on both the master and client peer nodes. Configuration specifies how long to retain files in certain directories, and older files and empty directories are purged. Typically, collect directories are purged daily, and stage directories are purged weekly or slightly less often (if a disc gets corrupted, older backups may still be available on the master). Some users also choose to purge the configured working directory (which is used for temporary files) to eliminate any leftover files which might have resulted from changes to configuration.
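The retention rule described above can be sketched as follows. This is a simplified illustration, not Cedar Backup's implementation: it removes files older than a cutoff and then prunes subdirectories left empty by the removals.

```python
import os
import time

def purge(directory, retain_days, now=None):
    """Remove files older than retain_days, then prune empty subdirectories."""
    now = time.time() if now is None else now
    cutoff = now - retain_days * 24 * 60 * 60
    # walk bottom-up so empty subdirectories can be removed as we go
    for root, dirs, files in os.walk(directory, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
        if root != directory and not os.listdir(root):
            os.rmdir(root)   # directory left empty by the removals above
```

A daily cron entry would then call something like `purge(collect_dir, retain_days=1)` for collect directories and a larger retention for staging directories.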
The All Action

The all action is a pseudo-action which causes all of the actions in a standard backup run to be executed together in order. It cannot be combined with any other actions on the command line. Extensions cannot be executed as part of the all action. If you need to execute an extended action, you must specify the other actions you want to run individually on the command line. ^[11] The all action does not have its own configuration. Instead, it relies on the individual configuration sections for all of the other actions.

The Validate Action

The validate action is used to validate configuration on a particular peer node, either master or client. It cannot be combined with any other actions on the command line. The validate action checks that the configuration file can be found, that the configuration file is valid, and that certain portions of the configuration file make sense (for instance, making sure that specified users exist, directories are readable and writable as necessary, etc.).

The Initialize Action

The initialize action is used to initialize media for use with Cedar Backup. This is an optional step. By default, Cedar Backup does not need to use initialized media and will write to whatever media exists in the writer device. However, if the "check media" store configuration option is set to true, Cedar Backup will check the media before writing to it and will error out if the media has not been initialized. Initializing the media consists of writing a mostly-empty image using a known media label (the media label will begin with "CEDAR BACKUP"). Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't make any sense to initialize media that cannot be rewritten (CD-R, DVD+R), since Cedar Backup would then not be able to use that media for a backup. You can still configure Cedar Backup to check non-rewritable media; in this case, the check will also pass if the media is apparently unused (i.e. has no media label).
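The "check media" rule just described amounts to a small decision, sketched here as a hypothetical function (not Cedar Backup's real code; assume the label has already been read from the disc, with None meaning no label at all):

```python
def media_check_passes(label, rewritable):
    """Apply the "check media" rule: initialized media carries a label
    beginning with "CEDAR BACKUP"; non-rewritable media also passes if it
    is apparently unused (i.e. has no label)."""
    if label is not None and label.startswith("CEDAR BACKUP"):
        return True
    if not rewritable and label is None:
        return True   # a blank CD-R/DVD+R is acceptable
    return False
```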
The Rebuild Action

The rebuild action is an exception-handling action that is executed independent of a standard backup run. It cannot be combined with any other actions on the command line. The rebuild action attempts to rebuild "this week's" disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. To decide what data to write to disc again, the rebuild action looks back and finds the first day of the current week. Then, it finds any remaining staging directories between that date and the current date. If any staging directories are found, they are all written to disc in one big ISO session. The rebuild action does not have its own configuration. It relies on configuration for other actions, especially the store action.

Coordination between Master and Clients

Unless you are using Cedar Backup to manage a "pool of one", you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult; it mostly consists of making sure that operations happen in the right order. Still, some users are surprised that it is required and want to know why Cedar Backup can't just "take care of it for me". Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.

Managed Backups

Cedar Backup also supports an optional feature called the "managed backup". This feature is intended for use with remote clients where cron is not available. When managed backups are enabled, managed clients must still be configured as usual.
However, rather than using a cron job on the client to execute the collect and purge actions, the master executes these actions on the client via a remote shell. To make this happen, first set up one or more managed clients in Cedar Backup configuration. Then, invoke Cedar Backup with the --managed command-line option. Whenever Cedar Backup invokes an action locally, it will invoke the same action on each of the managed clients. Technically, this feature works for any client, not just clients that don't have cron available. Used this way, it can simplify the setup process, because cron only has to be configured on the master. For some users, that may be motivation enough to use this feature all of the time. However, please keep in mind that this feature depends on a stable network. If your network connection drops, your backup will be interrupted and will not be complete. It is even possible that some of the Cedar Backup metadata (like incremental backup state) will be corrupted. The risk is not high, but it is something you need to be aware of if you choose to use this optional feature.

Media and Device Types

Cedar Backup is focused around writing backups to CD or DVD media using a standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred to as the media, and the CD/DVD drive is referred to as the device or sometimes the backup device. ^[12] When using a new enough backup device, a new "multisession" ISO image ^[13] is written to the media on the first day of the week, and then additional multisession images are added to the media each day that Cedar Backup runs. This way, the media is complete and usable at the end of every backup run, but a single disc can be used all week long. If your backup device does not support multisession images (which is really unusual today), then a new ISO image will be written to the media each time Cedar Backup runs (and you should probably confine yourself to the "daily" backup mode to avoid losing data).
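Combined with the store-action rules earlier in this chapter, the multisession behavior reduces to a simple decision, sketched here as a hypothetical helper (an illustration of the documented rules, not the real implementation):

```python
def start_new_disc(start_of_week, multisession_supported, full=False):
    """Rebuild the disc from scratch on the first day of the week, when
    --full is passed, or when the device cannot append multisession
    images; otherwise append a new ISO session to the existing disc."""
    return start_of_week or full or not multisession_supported
```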
Cedar Backup currently supports four different kinds of CD media:

cdr-74   74-minute non-rewritable CD media
cdrw-74  74-minute rewritable CD media
cdr-80   80-minute non-rewritable CD media
cdrw-80  80-minute rewritable CD media

I have chosen to support just these four types of CD media because they seem to be the most "standard" of the various types commonly sold in the U.S. as of this writing (early 2005). If you regularly use an unsupported media type and would like Cedar Backup to support it, send me information about the capacity of the media in megabytes (MB) and whether it is rewritable. Cedar Backup also supports two kinds of DVD media:

dvd+r   Single-layer non-rewritable DVD+R media
dvd+rw  Single-layer rewritable DVD+RW media

The underlying growisofs utility does support other kinds of media (including DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R and DVD+RW media. I don't support these other kinds of media because I haven't had any opportunity to work with them. The same goes for dual-layer media of any type.

Incremental Backups

Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis. In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value ^[14] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up.
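The per-file incremental decision works roughly like the following sketch. This is an illustration only; Cedar Backup's own state lives in .sha files in its working directory, represented here by a plain dictionary.

```python
import hashlib

def changed_files(paths, saved):
    """Return the files whose SHA digest is missing from or differs from
    the saved file/checksum map, updating the map for the next run."""
    changed = []
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()
        if saved.get(path) != digest:
            changed.append(path)   # new or modified since the last run
            saved[path] = digest
    return changed
```

Clearing the `saved` map corresponds to the weekly reset (or --full), which forces every file to be backed up again.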
If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged. Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.

Extensions

Imagine that there is a third-party developer who understands how to back up a certain kind of database repository. This third party might want to integrate his or her specialized backup into the Cedar Backup process, perhaps thinking of the database backup as a sort of "collect" step. Prior to Cedar Backup version 2, any such integration would have been completely independent of Cedar Backup itself. The "external" backup functionality would have had to maintain its own configuration and would not have had access to any Cedar Backup configuration. Starting with version 2, Cedar Backup allows extensions to the backup process. An extension is an action that isn't part of the standard backup process (i.e. not collect, stage, store or purge), but can be executed by Cedar Backup when properly configured. Extension authors implement an "action process" function with a certain interface, and are allowed to add their own sections to the Cedar Backup configuration file, so that all backup configuration can be centralized. Then, the action process function is associated with an action name which can be executed from the cback command line like any other action. Hopefully, as the Cedar Backup user community grows, users will contribute their own extensions back to the community.
Well-written general-purpose extensions will be accepted into the official codebase.

Note: Users should see Chapter 5, Configuration for more information on how extensions are configured, and Chapter 6, Official Extensions for details on all of the officially-supported extensions. Developers may be interested in Appendix A, Extension Architecture Interface.

-------------------------------------------------------------------------------

^[8] See http://en.wikipedia.org/wiki/Setuid
^[9] Analogous to .cvsignore in CVS
^[10] In terms of Python regular expressions
^[11] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in surprising behavior. I am not planning to change the way this works.
^[12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has since died, and I currently develop using a Lite-On 1673S DVDRW drive.
^[13] An ISO image is the standard way of creating a filesystem to be copied to a CD or DVD. It is essentially a "filesystem-within-a-file", and many UNIX operating systems can actually mount ISO image files just like hard drives, floppy disks or actual CDs. See Wikipedia for more information: http://en.wikipedia.org/wiki/ISO_image.
^[14] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

Chapter 3. Installation

Table of Contents

Background
Installing on a Debian System
Installing from Source
Installing Dependencies
Installing the Source Package

Background

There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.
If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

Non-Linux Platforms

Cedar Backup has been developed on a Debian GNU/Linux system and is primarily supported on Debian and other Linux systems. However, since it is written in portable Python 2, it should run without problems on just about any UNIX-like operating system. In particular, full Cedar Backup functionality is known to work on Debian and SuSE Linux systems, and client functionality is also known to work on FreeBSD and Mac OS X systems. To run a Cedar Backup client, you really just need a working Python 2 installation. To run a Cedar Backup master, you will also need a set of other executables, most of which are related to building and writing CD/DVD images. A full list of dependencies is provided further on in this chapter.

Installing on a Debian System

The easiest way to install Cedar Backup onto a Debian system is by using a tool such as apt-get or aptitude. If you are running a Debian release which contains Cedar Backup, you can use your normal Debian mirror as an APT data source. (The Debian "etch" release is the first release to contain Cedar Backup 2.) Otherwise, you need to install from the Cedar Solutions APT data source. ^[15] To do this, add the Cedar Solutions APT data source to your /etc/apt/sources.list file. After you have configured the proper APT data source, install Cedar Backup using this set of commands:

$ apt-get update
$ apt-get install cedar-backup2 cedar-backup2-doc

Several of the Cedar Backup dependencies are listed as "recommended" rather than required. If you are installing Cedar Backup on a master machine, you must install some or all of the recommended dependencies, depending on which actions you intend to execute.
The stage action normally requires ssh, and the store action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must also install some sort of ssh server if a remote master will collect backups from them. If you would prefer, you can also download the .deb files and install them by hand with a tool such as dpkg. You can find these files in the Cedar Solutions APT source. In either case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

Note: The Debian package-management tools must generally be run as root. It is safe to install Cedar Backup to a non-standard location and run it as a non-root user. However, to do this, you must install the source distribution instead of the Debian package.

Installing from Source

On platforms other than Debian, Cedar Backup is installed from a Python source distribution. ^[16] You will have to manage dependencies on your own.

Tip: Many UNIX-like distributions provide an automatic or semi-automatic way to install packages like the ones Cedar Backup requires (think RPMs for Mandrake or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD ports system). If you are not sure how to install these packages on your system, you might want to check out Appendix B, Dependencies. This appendix provides links to "upstream" source packages, plus as much information as I have been able to gather about packages for non-Debian platforms.

Installing Dependencies

Cedar Backup requires a number of external packages in order to function properly. Before installing Cedar Backup, you must make sure that these dependencies are met. Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it.
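The minimum-interpreter requirement can be verified with a check along these lines (a sketch of the kind of test the scripts perform, not the exact code they contain):

```python
import sys

def interpreter_ok(version_info):
    """True if the interpreter meets the documented minimum (Python >= 2.7)."""
    return tuple(version_info)[:2] >= (2, 7)

if not interpreter_ok(sys.version_info):
    raise SystemExit("Python version 2.7 or greater is required")
```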
You must install Python 2 on every peer node in a pool (master or client). Additionally, remote client peer nodes must be running an RSH-compatible server, such as the ssh server, and master nodes must have an RSH-compatible client installed if they need to connect to remote peer machines. Master machines also require several other system utilities, most having to do with writing and validating CD/DVD media. On master machines, you must make sure that these utilities are available if you want to run the store action:

* mkisofs
* eject
* mount
* umount
* volname

Then, you need this utility if you are writing CD media:

* cdrecord

or these utilities if you are writing DVD media:

* growisofs

All of these utilities are common and are easy to find for almost any UNIX-like operating system.

Installing the Source Package

Python source packages are fairly easy to install. They are distributed as .tar.gz files which contain Python source code, a manifest and an installation script called setup.py. Once you have downloaded the source package from the Cedar Solutions website, ^[15] untar it:

$ zcat CedarBackup2-2.0.0.tar.gz | tar xvf -

This will create a directory called (in this case) CedarBackup2-2.0.0. The version number in the directory will always match the version number in the filename. If you have root access and want to install the package to the "standard" Python location on your system, then you can install the package in two simple steps:

$ cd CedarBackup2-2.0.0
$ python setup.py install

Make sure that you are using Python 2.7 or better to execute setup.py. You may also wish to run the unit tests before actually installing anything. Run them like so:

$ python util/test.py

If any unit test reports a failure on your system, please email me the output from the unit test, so I can fix the problem. ^[17] This is particularly important for non-Linux platforms where I do not have a test system available to me.
Some users might want to choose a different install location or change other install parameters. To get more information about how setup.py works, use the --help option:

$ python setup.py --help
$ python setup.py install --help

In any case, once the package has been installed, you can proceed to configuration as described in Chapter 5, Configuration.

-------------------------------------------------------------------------------

^[15] See http://cedar-solutions.com/debian.html
^[16] See http://docs.python.org/lib/module-distutils.html
^[17]

Chapter 4. Command Line Tools

Table of Contents

Overview
The cback command
    Introduction
    Syntax
    Switches
    Actions
The cback-amazons3-sync command
    Introduction
    Syntax
    Switches
The cback-span command
    Introduction
    Syntax
    Switches
    Using cback-span
    Sample run

Overview

Cedar Backup comes with three command-line programs: cback, cback-amazons3-sync, and cback-span. The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need. The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. Users who have a lot of data to back up (more than will fit on a single CD or DVD) can use the interactive cback-span tool to split their data between multiple discs.

The cback command

Introduction

Cedar Backup's primary command-line interface is the cback command. It controls the entire backup process.
Syntax

The cback command has the following syntax:

Usage: cback [switches] action(s)

The following switches are accepted:

  -h, --help          Display this usage/help listing
  -V, --version       Display version information
  -b, --verbose       Print verbose output as well as logging to disk
  -q, --quiet         Run quietly (display no output to the screen)
  -c, --config        Path to config file (default: /etc/cback.conf)
  -f, --full          Perform a full backup, regardless of configuration
  -M, --managed       Include managed clients when executing actions
  -N, --managed-only  Include ONLY managed clients when executing actions
  -l, --logfile       Path to logfile (default: /var/log/cback.log)
  -o, --owner         Logfile ownership, user:group (default: root:adm)
  -m, --mode          Octal logfile permissions mode (default: 640)
  -O, --output        Record some sub-command (i.e. cdrecord) output to the log
  -d, --debug         Write debugging information to the log (implies --output)
  -s, --stack         Dump a Python stack trace instead of swallowing exceptions
  -D, --diagnostics   Print runtime diagnostics to the screen and exit

The following actions may be specified:

  all         Take all normal actions (collect, stage, store, purge)
  collect     Take the collect action
  stage       Take the stage action
  store       Take the store action
  purge       Take the purge action
  rebuild     Rebuild "this week's" disc if possible
  validate    Validate configuration only
  initialize  Initialize media for use with Cedar Backup

You may also specify extended actions that have been defined in configuration. You must specify at least one action to take. More than one of the "collect", "stage", "store" or "purge" actions and/or extended actions may be specified in any arbitrary order; they will be executed in a sensible order. The "all", "rebuild", "validate", and "initialize" actions may not be combined with other actions. Note that the all action only executes the standard four actions. It never executes any of the configured extensions. ^[18]

Switches

-h, --help
    Display usage/help listing.
-V, --version
    Display version information.

-b, --verbose
    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet
    Run quietly (display no output to the screen).

-c, --config
    Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

-f, --full
    Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started.

-M, --managed
    Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally.

-N, --managed-only
    Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally.

-l, --logfile
    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner
    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode
    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

-O, --output
    Record some sub-command output to the logfile.
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug
    Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option as well.

-s, --stack
    Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics
    Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

Actions

You can find more information about the various actions in the section called "The Backup Process" (in Chapter 2, Basic Concepts). In general, you may specify any combination of the collect, stage, store or purge actions, and the specified actions will be executed in a sensible order. Or, you can specify one of the all, rebuild, validate, or initialize actions (but these actions may not be combined with other actions). If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. The all action never executes extended actions, however.

The cback-amazons3-sync command

Introduction

The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process. This might be a good option for some types of data, as long as you understand the limitations around retrieving previous versions of objects that get modified or deleted as part of a sync.
S3 does support versioning, but it won't be quite as easy to get at those previous versions as with an explicit incremental backup like cback provides. Cedar Backup does not provide any tooling that would help you retrieve previous versions. The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure AWS CLI as detailed in Amazon's setup guide. The aws command will be executed as the same user that is executing the cback-amazons3-sync command, so make sure you configure it as the proper user. (This is different than the amazons3 extension, which is designed to execute as root and switches over to the configured backup user to execute AWS CLI commands.)

Syntax

The cback-amazons3-sync command has the following syntax:

Usage: cback-amazons3-sync [switches] sourceDir s3BucketUrl

Cedar Backup Amazon S3 sync tool.

This Cedar Backup utility synchronizes a local directory to an Amazon S3 bucket. After the sync is complete, a validation step is taken. An error is reported if the contents of the bucket do not match the source directory, or if the indicated size for any file differs. This tool is a wrapper over the AWS CLI command-line tool.

The following arguments are required:

  sourceDir    The local source directory on disk (must exist)
  s3BucketUrl  The URL to the target Amazon S3 bucket

The following switches are accepted:

  -h, --help            Display this usage/help listing
  -V, --version         Display version information
  -b, --verbose         Print verbose output as well as logging to disk
  -q, --quiet           Run quietly (display no output to the screen)
  -l, --logfile         Path to logfile (default: /var/log/cback.log)
  -o, --owner           Logfile ownership, user:group (default: root:adm)
  -m, --mode            Octal logfile permissions mode (default: 640)
  -O, --output          Record some sub-command (i.e.
aws) output to the log
  -d, --debug           Write debugging information to the log (implies --output)
  -s, --stack           Dump Python stack trace instead of swallowing exceptions
  -D, --diagnostics     Print runtime diagnostics to the screen and exit
  -v, --verifyOnly      Only verify the S3 bucket contents, do not make changes
  -w, --ignoreWarnings  Ignore warnings about problematic filename encodings

Typical usage would be something like:

  cback-amazons3-sync /home/myuser s3://example.com-backup/myuser

This will sync the contents of /home/myuser into the indicated bucket.

Switches

-h, --help
    Display usage/help listing.

-V, --version
    Display version information.

-b, --verbose
    Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-q, --quiet
    Run quietly (display no output to the screen).

-l, --logfile
    Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner
    Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode
    Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback-amazons3-sync command is executed, it will retain its existing ownership and mode.

-O, --output
    Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference.

-d, --debug
    Write debugging information to the logfile.
This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack
   Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

-D, --diagnostics
   Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report.

-v, --verifyOnly
   Only verify the S3 bucket contents against the directory on disk. Do not make any changes to the S3 bucket or transfer any files. This is intended as a quick check to see whether the sync is up-to-date. Although no files are transferred, the tool will still execute the source filename encoding check, discussed below along with --ignoreWarnings.

-w, --ignoreWarnings
   The AWS CLI S3 sync process is very picky about filename encoding. Files that the Linux filesystem handles with no problems can cause problems in S3 if the filename cannot be encoded properly in your configured locale. As of this writing, filenames like this will cause the sync process to abort without transferring all files as expected. To avoid confusion, the cback-amazons3-sync tool tries to guess which files in the source directory will cause problems, and refuses to execute the AWS CLI S3 sync if any problematic files exist. If you'd rather proceed anyway, use --ignoreWarnings.

   If problematic files are found, then you have basically two options: either correct your locale (i.e. if you have set LANG=C) or rename the file so it can be encoded properly in your locale. The error messages will tell you the expected encoding (from your locale) and the actual detected encoding for the filename.

The cback-span command

Introduction

Cedar Backup was designed -- and is still primarily focused -- around weekly backups to a single CD or DVD. Most users who back up more data than fits on a single disc seem to stop their backup process at the stage step, using Cedar Backup as an easy way to collect data. However, some users have expressed a need to write these large kinds of backups to disc -- if not every day, then at least occasionally. The cback-span tool was written to meet those needs.

If you have staged more data than fits on a single CD or DVD, you can use cback-span to split that data between multiple discs. cback-span is not a general-purpose disc-splitting tool. It is a specialized program that requires Cedar Backup configuration to run. All it can do is read Cedar Backup configuration, find any staging directories that have not yet been written to disc, and split the files in those directories between discs.

cback-span accepts many of the same command-line options as cback, but must be run interactively. It cannot be run from cron. This is intentional. It is intended to be a useful tool, not a new part of the backup process (that is the purpose of an extension).

In order to use cback-span, you must configure your backup such that the largest individual backup file can fit on a single disc. The command will not split a single file onto more than one disc. All it can do is split large directories onto multiple discs. Files in those directories will be distributed among the discs so that space is utilized as efficiently as possible.

Syntax

The cback-span command has the following syntax:

   Usage: cback-span [switches]

   Cedar Backup 'span' tool.

   This Cedar Backup utility spans staged data between multiple discs. It
   is a utility, not an extension, and requires user interaction.
   The following switches are accepted, mostly to set up underlying
   Cedar Backup functionality:

      -h, --help        Display this usage/help listing
      -V, --version     Display version information
      -b, --verbose     Print verbose output as well as logging to disk
      -c, --config      Path to config file (default: /etc/cback.conf)
      -l, --logfile     Path to logfile (default: /var/log/cback.log)
      -o, --owner       Logfile ownership, user:group (default: root:adm)
      -m, --mode        Octal logfile permissions mode (default: 640)
      -O, --output      Record some sub-command (i.e. cdrecord) output to the log
      -d, --debug       Write debugging information to the log (implies --output)
      -s, --stack       Dump a Python stack trace instead of swallowing exceptions

Switches

-h, --help
   Display usage/help listing.

-V, --version
   Display version information.

-b, --verbose
   Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen.

-c, --config
   Specify the path to an alternate configuration file. The default configuration file is /etc/cback.conf.

-l, --logfile
   Specify the path to an alternate logfile. The default logfile is /var/log/cback.log.

-o, --owner
   Specify the ownership of the logfile, in the form user:group. The default ownership is root:adm, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values.

-m, --mode
   Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is 0640 (-rw-r-----). This value will only be used when creating a new logfile. If the logfile already exists when the cback command is executed, it will retain its existing ownership and mode.

-O, --output
   Record some sub-command output to the logfile.
When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. Cedar Backup uses system commands mostly for dealing with the CD/DVD recorder and its media.

-d, --debug
   Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the --output option, as well.

-s, --stack
   Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report.

Using cback-span

As discussed above, cback-span is an interactive command. It cannot be run from cron. You can typically use the default answer for most questions. The only two questions that you may not want the default answer for are the fit algorithm and the cushion percentage.

The cushion percentage is used by cback-span to determine what capacity to shoot for when splitting up your staging directories. A 650 MB disc does not actually hold a full 650 MB of data. It's usually more like 627 MB of data. The cushion percentage tells cback-span how much overhead to reserve for the filesystem. The default of 4% is usually OK, but if you have problems you may need to increase it slightly.

The fit algorithm tells cback-span how it should determine which items should be placed on each disc. If you don't like the result from one algorithm, you can reject that solution and choose a different algorithm.

The four available fit algorithms are:

worst
   The worst-fit algorithm. The worst-fit algorithm proceeds through a sorted list of items (sorted from smallest to largest) until running out of items or meeting capacity exactly.
If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the maximum number of items possible in its search for optimal capacity utilization. It tends to be somewhat slower than either the best-fit or alternate-fit algorithm, probably because on average it has to look at more items before completing.

best
   The best-fit algorithm. The best-fit algorithm proceeds through a sorted list of items (sorted from largest to smallest) until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. The algorithm effectively includes the minimum number of items possible in its search for optimal capacity utilization. For large lists of mixed-size items, it's not unusual to see the algorithm achieve 100% capacity utilization by including fewer than 1% of the items. Probably because it often has to look at fewer of the items before completing, it tends to be a little faster than the worst-fit or alternate-fit algorithms.

first
   The first-fit algorithm. The first-fit algorithm proceeds through an unsorted list of items until running out of items or meeting capacity exactly. If capacity is exceeded, the item that caused capacity to be exceeded is thrown away and the next one is tried. This algorithm generally performs more poorly than the other algorithms both in terms of capacity utilization and item utilization, but can be as much as an order of magnitude faster on large lists of items because it doesn't require any sorting.

alternate
   A hybrid algorithm that I call alternate-fit. This algorithm tries to balance small and large items to achieve better end-of-disk performance. Instead of just working one direction through a list, it alternately works from the start and end of a sorted list (sorted from smallest to largest), throwing away any item which causes capacity to be exceeded.
The algorithm tends to be slower than the best-fit and first-fit algorithms, and slightly faster than the worst-fit algorithm, probably because of the number of items it considers on average before completing. It often achieves slightly better capacity utilization than the worst-fit algorithm, while including slightly fewer items.

Sample run

Below is a log showing a sample cback-span run.

   ================================================
              Cedar Backup 'span' tool
   ================================================

   This is the Cedar Backup span tool. It is used to split up staging data
   when that staging data does not fit onto a single disc. This utility
   operates using Cedar Backup configuration.

   Configuration specifies which staging directory to look at and which
   writer device and media type to use.

   Continue? [Y/n]:
   ===

   Cedar Backup store configuration looks like this:

      Source Directory...: /tmp/staging
      Media Type.........: cdrw-74
      Device Type........: cdwriter
      Device Path........: /dev/cdrom
      Device SCSI ID.....: None
      Drive Speed........: None
      Check Data Flag....: True
      No Eject Flag......: False

   Is this OK? [Y/n]:
   ===

   Please wait, indexing the source directory (this may take a while)...
   ===

   The following daily staging directories have not yet been written to disc:

      /tmp/staging/2007/02/07
      /tmp/staging/2007/02/08
      /tmp/staging/2007/02/09
      /tmp/staging/2007/02/10
      /tmp/staging/2007/02/11
      /tmp/staging/2007/02/12
      /tmp/staging/2007/02/13
      /tmp/staging/2007/02/14

   The total size of the data in these directories is 1.00 GB.

   Continue? [Y/n]:
   ===

   Based on configuration, the capacity of your media is 650.00 MB.

   Since estimates are not perfect and there is some uncertainly in media
   capacity calculations, it is good to have a "cushion", a percentage of
   capacity to set aside. The cushion reduces the capacity of your media,
   so a 1.5% cushion leaves 98.5% remaining.

   What cushion percentage? [4.00]:
   ===

   The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
   It will take at least 2 disc(s) to store your 1.00 GB of data.

   Continue? [Y/n]:
   ===

   Which algorithm do you want to use to span your data across multiple discs?

   The following algorithms are available:

      first....: The "first-fit" algorithm
      best.....: The "best-fit" algorithm
      worst....: The "worst-fit" algorithm
      alternate: The "alternate-fit" algorithm

   If you don't like the results you will have a chance to try a different
   one later.

   Which algorithm? [worst]:
   ===

   Please wait, generating file lists (this may take a while)...
   ===

   Using the "worst-fit" algorithm, Cedar Backup can split your data into
   2 discs.

      Disc 1: 246 files, 615.97 MB, 98.20% utilization
      Disc 2: 8 files, 412.96 MB, 65.84% utilization

   Accept this solution? [Y/n]: n
   ===

   Which algorithm do you want to use to span your data across multiple discs?

   The following algorithms are available:

      first....: The "first-fit" algorithm
      best.....: The "best-fit" algorithm
      worst....: The "worst-fit" algorithm
      alternate: The "alternate-fit" algorithm

   If you don't like the results you will have a chance to try a different
   one later.

   Which algorithm? [worst]: alternate
   ===

   Please wait, generating file lists (this may take a while)...
   ===

   Using the "alternate-fit" algorithm, Cedar Backup can split your data
   into 2 discs.

      Disc 1: 73 files, 627.25 MB, 100.00% utilization
      Disc 2: 181 files, 401.68 MB, 64.04% utilization

   Accept this solution? [Y/n]: y
   ===

   Please place the first disc in your backup device.
   Press return when ready.
   ===

   Initializing image...
   Writing image to disc...

-------------------------------------------------------------------------------

^[18] Some users find this surprising, because extensions are configured with sequence numbers. I did it this way because I felt that running extensions as part of the all action would sometimes result in "surprising" behavior. Better to be definitive than confusing.
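To make the four fit strategies described above more concrete, here is a small Python sketch of how each one chooses items. This is an illustration only, not the actual CedarBackup2 implementation; the function names and the (name, size) item format are invented for the example.

```python
# Simplified sketch of the four fit algorithms.  Items are (name, size)
# tuples; capacity is a size in the same units.  Illustration only, not
# the actual CedarBackup2 code.

def first_fit(items, capacity):
    """Walk the list in order, discarding any item that would exceed capacity."""
    chosen, used = [], 0
    for name, size in items:
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen, used

def worst_fit(items, capacity):
    """First-fit over a list sorted smallest to largest (maximizes item count)."""
    return first_fit(sorted(items, key=lambda item: item[1]), capacity)

def best_fit(items, capacity):
    """First-fit over a list sorted largest to smallest (minimizes item count)."""
    return first_fit(sorted(items, key=lambda item: item[1], reverse=True), capacity)

def alternate_fit(items, capacity):
    """First-fit over a list alternating between smallest and largest remaining."""
    ordered = sorted(items, key=lambda item: item[1])
    sequence, low, high, take_low = [], 0, len(ordered) - 1, True
    while low <= high:
        sequence.append(ordered[low] if take_low else ordered[high])
        low, high = (low + 1, high) if take_low else (low, high - 1)
        take_low = not take_low
    return first_fit(sequence, capacity)

files = [("a", 100), ("b", 400), ("c", 250), ("d", 300), ("e", 50)]
print(worst_fit(files, 627))   # (['e', 'a', 'c'], 400)
print(best_fit(files, 627))    # (['b', 'a', 'e'], 550)
```

Roughly speaking, cback-span applies the chosen strategy like this each time it fills a disc, considering only the files that have not yet been placed.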
Chapter 5. Configuration

Table of Contents

   Overview
   Configuration File Format
   Sample Configuration File
   Reference Configuration
   Options Configuration
   Peers Configuration
   Collect Configuration
   Stage Configuration
   Store Configuration
   Purge Configuration
   Extensions Configuration
   Setting up a Pool of One
      Step 1: Decide when you will run your backup.
      Step 2: Make sure email works.
      Step 3: Configure your writer device.
      Step 4: Configure your backup user.
      Step 5: Create your backup tree.
      Step 6: Create the Cedar Backup configuration file.
      Step 7: Validate the Cedar Backup configuration file.
      Step 8: Test your backup.
      Step 9: Modify the backup cron jobs.
   Setting up a Client Peer Node
      Step 1: Decide when you will run your backup.
      Step 2: Make sure email works.
      Step 3: Configure the master in your backup pool.
      Step 4: Configure your backup user.
      Step 5: Create your backup tree.
      Step 6: Create the Cedar Backup configuration file.
      Step 7: Validate the Cedar Backup configuration file.
      Step 8: Test your backup.
      Step 9: Modify the backup cron jobs.
   Setting up a Master Peer Node
      Step 1: Decide when you will run your backup.
      Step 2: Make sure email works.
      Step 3: Configure your writer device.
      Step 4: Configure your backup user.
      Step 5: Create your backup tree.
      Step 6: Create the Cedar Backup configuration file.
      Step 7: Validate the Cedar Backup configuration file.
      Step 8: Test connectivity to client machines.
      Step 9: Test your backup.
      Step 10: Modify the backup cron jobs.
   Configuring your Writer Device
      Device Types
      Devices identified by device name
      Devices identified by SCSI id
      Linux Notes
      Finding your Linux CD Writer
      Mac OS X Notes
      Optimized Blanking Strategy

Overview

Configuring Cedar Backup is unfortunately somewhat complicated. The good news is that once you get through the initial configuration process, you'll hardly ever have to change anything. Even better, the most typical changes (i.e. adding and removing directories from a backup) are easy.
First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In particular, be sure that you understand the differences between a master and a client. (If you only have one machine, then your machine will act as both a master and a client, and we'll refer to your setup as a pool of one.) Then, install Cedar Backup per the instructions in Chapter 3, Installation.

Once everything has been installed, you are ready to begin configuring Cedar Backup. Look over the section called "The cback command" (in Chapter 4, Command Line Tools) to become familiar with the command line interface. Then, look over the section called "Configuration File Format" (below) and create a configuration file for each peer in your backup pool. To start with, create a very simple configuration file, then expand it later. Decide now whether you will store the configuration file in the standard place (/etc/cback.conf) or in some other location.

After you have all of the configuration files in place, configure each of your machines, following the instructions in the appropriate section below (for master, client or pool of one). Since the master and client(s) must communicate over the network, you won't be able to fully configure the master without configuring each client and vice-versa. The instructions are clear on what needs to be done.

Which Platform?

Cedar Backup has been designed for use on all UNIX-like systems. However, since it was developed on a Debian GNU/Linux system, and because I am a Debian developer, the packaging is prettier and the setup is somewhat simpler on a Debian system than on a system where you install from source.

The configuration instructions below have been generalized so they should work well regardless of what platform you are running (i.e. RedHat, Gentoo, FreeBSD, etc.). If instructions vary for a particular platform, you will find a note related to that platform.
I am always open to adding more platform-specific hints and notes, so write me if you find problems with these instructions.

Configuration File Format

Cedar Backup is configured through an XML ^[19] configuration file, usually called /etc/cback.conf. The configuration file contains the following sections: reference, options, collect, stage, store, purge and extensions.

All configuration files must contain the two general configuration sections, the reference section and the options section. Besides that, administrators need only configure actions they intend to use. For instance, on a client machine, administrators will generally only configure the collect and purge sections, while on a master machine they will have to configure all four action-related sections. ^[20] The extensions section is always optional and can be omitted unless extensions are in use.

Note

Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar Backup configuration is generally case-sensitive on that platform, just like on all other platforms. For instance, even though the files "Ken" and "ken" might be the same on the Mac OS X filesystem, an exclusion in Cedar Backup configuration for "ken" will only match the file if it is actually on the filesystem with a lower-case "k" as its first letter. This won't surprise the typical UNIX user, but might surprise someone who's gotten into the "Mac Mindset".

Sample Configuration File

Both the Python source distribution and the Debian package come with a sample configuration file. The Debian package includes its sample in /usr/share/doc/cedar-backup2/examples/cback.conf.sample. This is a sample configuration file similar to the one provided in the source package. Documentation below provides more information about each of the individual configuration sections.

Kenneth J.
Pronovici 1.3 Sample tuesday /opt/backup/tmp backup group /usr/bin/scp -B debian local /opt/backup/collect /opt/backup/collect daily targz .cbignore /etc incr /home/root/.profile weekly /opt/backup/staging /opt/backup/staging cdrw-74 cdwriter /dev/cdrw 0,0,0 4 Y Y Y /opt/backup/stage 7 /opt/backup/collect 0

Reference Configuration

The reference configuration section contains free-text elements that exist only for reference. The section itself is required, but the individual elements may be left blank if desired.

This is an example reference configuration section:

   <reference>
      <author>Kenneth J. Pronovici</author>
      <revision>Revision 1.3</revision>
      <description>Sample</description>
      <generator>Yet to be Written Config Tool (tm)</generator>
   </reference>

The following elements are part of the reference configuration section:

author
   Author of the configuration file.
   Restrictions: None

revision
   Revision of the configuration file.
   Restrictions: None

description
   Description of the configuration file.
   Restrictions: None

generator
   Tool that generated the configuration file, if any.
   Restrictions: None

Options Configuration

The options configuration section contains configuration options that are not specific to any one action.

This is an example options configuration section:

   <options>
      <starting_day>tuesday</starting_day>
      <working_dir>/opt/backup/tmp</working_dir>
      <backup_user>backup</backup_user>
      <backup_group>backup</backup_group>
      <rcp_command>/usr/bin/scp -B</rcp_command>
      <rsh_command>/usr/bin/ssh</rsh_command>
      <cback_command>/usr/bin/cback</cback_command>
      <managed_actions>collect, purge</managed_actions>
      <override>
         <command>cdrecord</command>
         <abs_path>/opt/local/bin/cdrecord</abs_path>
      </override>
      <override>
         <command>mkisofs</command>
         <abs_path>/opt/local/bin/mkisofs</abs_path>
      </override>
      <pre_action_hook>
         <action>collect</action>
         <command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
      </pre_action_hook>
      <post_action_hook>
         <action>collect</action>
         <command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
      </post_action_hook>
   </options>

The following elements are part of the options configuration section:

starting_day
   Day that starts the week. Cedar Backup is built around the idea of weekly backups. The starting day of week is the day that media will be rebuilt from scratch and that incremental backup information will be cleared.
   Restrictions: Must be a day of the week in English, i.e. monday, tuesday, etc. The validation is case-sensitive.

working_dir
   Working (temporary) directory to use for backups.
   This directory is used for writing temporary files, such as tar files or ISO filesystem images as they are being built. It is also used to store day-to-day information about incremental backups. The working directory should contain enough free space to hold temporary tar files (on a client) or to build an ISO filesystem image (on a master).
   Restrictions: Must be an absolute path

backup_user
   Effective user that backups should run as. This user must exist on the machine which is being configured and should not be root (although that restriction is not enforced). This value is also used as the default remote backup user for remote peers.
   Restrictions: Must be non-empty

backup_group
   Effective group that backups should run as. This group must exist on the machine which is being configured, and should not be root or some other "powerful" group (although that restriction is not enforced).
   Restrictions: Must be non-empty

rcp_command
   Default rcp-compatible copy command for staging. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This value is used as the default value for all remote peers. Technically, this value is not needed by clients, but we require it for all config files anyway.
   Restrictions: Must be non-empty

rsh_command
   Default rsh-compatible command to use for remote shells. The rsh command should be the exact command used for remote shells, including any required options. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.
   Restrictions: Must be non-empty

cback_command
   Default cback-compatible command to use on managed remote clients. The cback command should be the exact command used for executing cback on a remote managed client, including any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.

   Note: if this command-line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration.

   Restrictions: Must be non-empty

managed_actions
   Default set of actions that are managed on remote clients. This is a comma-separated list of actions that the master will manage on behalf of remote clients. Typically, it would include only collect-like actions and purge. This value is used as the default value for all managed clients. It is optional, because it is only used when executing actions on managed clients. However, each managed client must either be able to read the value from options configuration or must set the value explicitly.
   Restrictions: Must be non-empty.

override
   Command to override with a customized path. This is a subsection which contains a command to override with a customized path. This functionality would be used if root's $PATH does not include a particular required command, or if there is a need to use a version of a command that is different than the one listed on the $PATH. Most users will only use this section when directed to, in order to fix a problem. This section is optional, and can be repeated as many times as necessary.
   This subsection must contain the following two fields:

   command
      Name of the command to be overridden, i.e. "cdrecord".
      Restrictions: Must be a non-empty string.

   abs_path
      The absolute path where the overridden command can be found.
      Restrictions: Must be an absolute path.

pre_action_hook
   Hook configuring a command to be executed before an action. This is a subsection which configures a command to be executed immediately before a named action. It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary.

   This subsection must contain the following two fields:

   action
      Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.
      Restrictions: Must be a non-empty string.

   command
      Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

      Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

      Restrictions: Must be a non-empty string.

post_action_hook
   Hook configuring a command to be executed after an action. This is a subsection which configures a command to be executed immediately after a named action.
   It provides a way for administrators to associate their own custom functionality with standard Cedar Backup actions or with arbitrary extensions. This section is optional, and can be repeated as many times as necessary.

   This subsection must contain the following two fields:

   action
      Name of the Cedar Backup action that the hook is associated with. The action can be a standard backup action (collect, stage, etc.) or can be an extension action. No validation is done to ensure that the configured action actually exists.
      Restrictions: Must be a non-empty string.

   command
      Name of the command to be executed. This item can either specify the path to a shell script of some sort (the recommended approach) or can include a complete shell command.

      Note: if you choose to provide a complete shell command rather than the path to a script, you need to be aware of some limitations of Cedar Backup's command-line parser. You cannot use a subshell (via the `command` or $(command) syntaxes) or any shell variable in your command line. Additionally, the command-line parser only recognizes the double-quote character (") to delimit groupings or strings on the command-line. The bottom line is, you are probably best off writing a shell script of some sort for anything more sophisticated than very simple shell commands.

      Restrictions: Must be a non-empty string.

Peers Configuration

The peers configuration section contains a list of the peers managed by a master. This section is only required on a master.

This is an example peers configuration section:

   <peers>
      <peer>
         <name>machine1</name>
         <type>local</type>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
      <peer>
         <name>machine2</name>
         <type>remote</type>
         <backup_user>backup</backup_user>
         <collect_dir>/opt/backup/collect</collect_dir>
         <ignore_failures>all</ignore_failures>
      </peer>
      <peer>
         <name>machine3</name>
         <type>remote</type>
         <managed>Y</managed>
         <backup_user>backup</backup_user>
         <collect_dir>/opt/backup/collect</collect_dir>
         <rcp_command>/usr/bin/scp</rcp_command>
         <rsh_command>/usr/bin/ssh</rsh_command>
         <cback_command>/usr/bin/cback</cback_command>
         <managed_actions>collect, purge</managed_actions>
      </peer>
   </peers>

The following elements are part of the peers configuration section:

peer (local version)
   Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer managed by a master.
   This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.

   The local peer subsection must contain the following fields:

   name
      Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers.
      Restrictions: Must be non-empty, and unique among all peers.

   type
      Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local.
      Restrictions: Must be local.

   collect_dir
      Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).
      Restrictions: Must be an absolute path.

   ignore_failures
      Ignore failure mode for this peer. The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.
      Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

peer (remote version)
   Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer managed by a master. A remote peer is one which can be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured.
   The remote peer subsection must contain the following fields:

   name
      Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call.
      Restrictions: Must be non-empty, and unique among all peers.

   type
      Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote.
      Restrictions: Must be remote.

   managed
      Indicates whether this peer is managed. A managed peer (or managed client) is a peer for which the master manages all of the backup activities via a remote shell. This field is optional. If it doesn't exist, then N will be assumed.
      Restrictions: Must be a boolean (Y or N).

   collect_dir
      Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command).
      Restrictions: Must be an absolute path.

   ignore_failures
      Ignore failure mode for this peer. The ignore failure mode indicates whether "not ready to be staged" errors should be ignored for this peer. This option is intended to be used for peers that are up only intermittently, to cut down on the number of error emails received by the Cedar Backup administrator. The "none" mode means that all errors will be reported. This is the default behavior. The "all" mode means to ignore all failures. The "weekly" mode means to ignore failures for a start-of-week or full backup. The "daily" mode means to ignore failures for any backup that is not either a full backup or a start-of-week backup.
      Restrictions: If set, must be one of "none", "all", "daily", or "weekly".

   backup_user
      Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional.
If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty.

rcp_command
The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional. If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty.

rsh_command
The rsh-compatible command for this peer. The rsh command should be the exact command used for remote shells, including any required options. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default rsh command from the options section. Restrictions: Must be non-empty.

cback_command
The cback-compatible command for this peer. The cback command should be the exact command used for executing cback on the peer as part of a managed backup. This value must include any required command-line options. Do not list any actions in the command line, and do not include the --full command-line option. This value only applies if the peer is managed. This field is optional. If it doesn't exist, the backup will use the default cback command from the options section. Note: if this command line is complicated, it is often better to create a simple shell script on the remote host to encapsulate all of the options. Then, just reference the shell script in configuration. Restrictions: Must be non-empty.

managed_actions
Set of actions that are managed for this peer. This is a comma-separated list of actions that the master will manage on behalf of this peer. Typically, it would include only collect-like actions and purge. This value only applies if the peer is managed. This field is optional.
If it doesn't exist, the backup will use the default list of managed actions from the options section. Restrictions: Must be non-empty.

Collect Configuration

The collect configuration section contains configuration options related to the collect action. This section contains a variable number of elements, including an optional exclusion section and a repeating subsection used to specify which directories and/or files to collect. You can also configure an ignore indicator file, which lets users mark their own directories as not backed up.

Using a Link Farm

Sometimes, it's not very convenient to list directories one by one in the Cedar Backup configuration file. For instance, when backing up your home directory, you often exclude as many directories as you include. The ignore file mechanism can be of some help, but it still isn't very convenient if there are a lot of directories to ignore (or if new directories pop up all of the time). In this situation, one option is to use a link farm rather than listing all of the directories in configuration. A link farm is a directory that contains nothing but a set of soft links to other files and directories. Normally, Cedar Backup does not follow soft links, but you can override this behavior for individual directories using the link_depth and dereference options (see below). When using a link farm, you still have to deal with each backed-up directory individually, but you don't have to modify configuration. Some users find that this works better for them.

In order to actually execute the collect action, you must have configured at least one collect directory or one collect file. However, if you are only including collect configuration for use by an extension, then it's OK to leave out these sections. The validation will take place only when the collect action is executed.
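To sketch the link farm idea in Python (all paths here are hypothetical; in real use the farm might be configured as a single collect directory with link_depth set to 1):

```python
import os
import tempfile

# Hypothetical layout: in real use the farm might live somewhere like
# /opt/backup/linkfarm and be listed as one collect directory.
base = tempfile.mkdtemp()
farm = os.path.join(base, "linkfarm")
os.makedirs(farm)

# Create a couple of real directories standing in for things to back up.
targets = [os.path.join(base, name) for name in ("projects", "mail")]
for target in targets:
    os.makedirs(target)
    # Each farm entry is a soft link pointing at the real directory.
    os.symlink(target, os.path.join(farm, os.path.basename(target)))

print(sorted(os.listdir(farm)))  # the farm contains only symlinks
```

Adding or removing a backed-up directory is then just a matter of adding or removing a symlink, with no configuration change.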
This is an example collect configuration section:

   <collect>
      <collect_dir>/opt/backup/collect</collect_dir>
      <collect_mode>daily</collect_mode>
      <archive_mode>targz</archive_mode>
      <ignore_file>.cbignore</ignore_file>
      <exclude>
         <abs_path>/etc</abs_path>
         <pattern>.*\.conf</pattern>
      </exclude>
      <file>
         <abs_path>/home/root/.profile</abs_path>
      </file>
      <dir>
         <abs_path>/etc</abs_path>
      </dir>
      <dir>
         <abs_path>/var/log</abs_path>
         <collect_mode>incr</collect_mode>
      </dir>
      <dir>
         <abs_path>/opt</abs_path>
         <collect_mode>weekly</collect_mode>
         <exclude>
            <abs_path>/opt/large</abs_path>
            <rel_path>backup</rel_path>
            <pattern>.*tmp</pattern>
         </exclude>
      </dir>
   </collect>

The following elements are part of the collect configuration section:

collect_dir
Directory to collect files into. On a client, this is the directory into which tarfiles for individual collect directories are written. The master then stages files from this directory into its own staging directory. This field is always required. It must contain enough free space to collect all of the backed-up files on the machine in a compressed form. Restrictions: Must be an absolute path.

collect_mode
Default collect mode. The collect mode describes how frequently a directory is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This value is the collect mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr.

archive_mode
Default archive mode for collect files. The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2). This value is the archive mode that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of tar, targz or tarbz2.

ignore_file
Default ignore file name. The ignore file is an indicator file.
If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This value is the ignore file name that will be used by default during the collect process. Individual collect directories (below) may override this value. If all individual directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be non-empty.

recursion_level
Recursion level to use when collecting directories. This is an integer value that Cedar Backup will consider when generating archive files for a configured collect directory. Normally, Cedar Backup generates one archive file per collect directory. So, if you collect /etc you get etc.tar.gz. Most of the time, this is what you want. However, you may sometimes wish to generate multiple archive files for a single collect directory. The most obvious example is for /home. By default, Cedar Backup will generate home.tar.gz. If instead you want one archive file per home directory, you can set a recursion level of 1. Cedar Backup will generate home-user1.tar.gz, home-user2.tar.gz, etc. Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if the configured recursion level is deeper than the directory tree that is being collected. You can use a negative recursion level (like -1) to specify an infinite level of recursion. This will exhaust the tree in the same way as if the recursion level is set too high. This field is optional. If it doesn't exist, the backup will use the default recursion level of zero. Restrictions: Must be an integer.
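The naming behavior described above can be sketched as a simple path-flattening rule (an illustration only, not Cedar Backup's internal naming code):

```python
def archive_name(collect_dir, subpath=None, suffix=".tar.gz"):
    """Illustrative sketch: flatten a collected path into an archive
    file name, the way the manual's /home example describes."""
    parts = collect_dir.strip("/").split("/")
    if subpath:
        parts += subpath.strip("/").split("/")
    return "-".join(parts) + suffix

# Recursion level 0: one archive for the whole collect directory.
print(archive_name("/home"))           # -> "home.tar.gz"
# Recursion level 1: one archive per immediate subdirectory.
print(archive_name("/home", "user1"))  # -> "home-user1.tar.gz"
```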
exclude
List of paths or patterns to exclude from the backup. This is a subsection which contains a set of absolute paths and patterns to be excluded across all configured directories. For a given directory, the set of absolute paths and patterns to exclude is built from this list and any list that exists on the directory itself. Directories cannot override or remove entries that are in this list, however. This section is optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields:

abs_path
An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path.

pattern
A pattern to be recursively excluded from the backup. The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

file
A file to be collected. This is a subsection which contains information about a specific file to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed.
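The implicit anchoring described for exclusion patterns can be reproduced with Python's re.fullmatch, which treats a pattern as if it were wrapped in ^ and $ (a sketch of the semantics, not Cedar Backup's actual matching code):

```python
import re

# Exclusion patterns match the *whole* path, as if bounded by ^ and $.
print(bool(re.fullmatch(r".*apache.*", "/var/log/apache")))  # True: whole path matches
print(bool(re.fullmatch(r"apache", "/var/log/apache")))      # False: no implicit .* is added around the pattern
```

This is why a bare directory name like apache will not exclude /var/log/apache; you need leading and trailing .* to match the full path.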
The collect file subsection contains the following fields:

abs_path
Absolute path of the file to collect. Restrictions: Must be an absolute path.

collect_mode
Collect mode for this file. The collect mode describes how frequently a file is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr.

archive_mode
Archive mode for this file. The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2.

dir
A directory to be collected. This is a subsection which contains information about a specific directory to be collected (backed up). This section can be repeated as many times as is necessary. At least one collect directory or collect file must be configured when the collect action is executed. The collect directory subsection contains the following fields:

abs_path
Absolute path of the directory to collect. The path may be either a directory, a soft link to a directory, or a hard link to a directory. All three are treated the same at this level. The contents of the directory will be recursively collected. The backup will contain all of the files in the directory, as well as the contents of all of the subdirectories within the directory, etc. Soft links within the directory are treated as files, i.e. they are copied verbatim (as a link) and their contents are not backed up. Restrictions: Must be an absolute path.
collect_mode
Collect mode for this directory. The collect mode describes how frequently a directory is backed up. See the section called "The Collect Action" (in Chapter 2, Basic Concepts) for more information. This field is optional. If it doesn't exist, the backup will use the default collect mode. Note: if your backup device does not support multisession discs, then you should probably confine yourself to the daily collect mode, to avoid losing data. Restrictions: Must be one of daily, weekly or incr.

archive_mode
Archive mode for this directory. The archive mode maps to the way that a backup file is stored. A value of tar means just a tarfile (file.tar); a value of targz means a gzipped tarfile (file.tar.gz); and a value of tarbz2 means a bzipped tarfile (file.tar.bz2). This field is optional. If it doesn't exist, the backup will use the default archive mode. Restrictions: Must be one of tar, targz or tarbz2.

ignore_file
Ignore file name for this directory. The ignore file is an indicator file. If it exists in a given directory, then that directory will be recursively excluded from the backup as if it were explicitly excluded in configuration. The ignore file provides a way for individual users (who might not have access to Cedar Backup configuration) to control which of their own directories get backed up. For instance, users with a ~/tmp directory might not want it backed up. If they create an ignore file in their directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it. This field is optional. If it doesn't exist, the backup will use the default ignore file name. Restrictions: Must be non-empty.

link_depth
Link depth value to use for this directory. The link depth is the maximum depth of the tree at which soft links should be followed. So, a depth of 0 does not follow any soft links within the collect directory, a depth of 1 follows only links immediately within the collect directory, a depth of 2 follows the links at the next level down, etc.
This field is optional. If it doesn't exist, the backup will assume a value of zero, meaning that soft links within the collect directory will never be followed. Restrictions: If set, must be an integer ≥ 0.

dereference
Whether to dereference soft links. If this flag is set, links that are being followed will be dereferenced before being added to the backup. The link will be added (as a link), and then the directory or file that the link points at will be added as well. This value only applies to a directory where soft links are being followed (per the link_depth configuration option). It never applies to a configured collect directory itself, only to other directories within the collect directory. This field is optional. If it doesn't exist, the backup will assume that links should never be dereferenced. Restrictions: Must be a boolean (Y or N).

exclude
List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this collect directory. This list is combined with the program-wide list to build a complete list for the directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields:

abs_path
An absolute path to be recursively excluded from the backup. If a directory is excluded, then all of its children are also recursively excluded. For instance, a value /var/log/apache would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be an absolute path.

rel_path
A relative path to be recursively excluded from the backup. The path is assumed to be relative to the collect directory itself. For instance, if the configured directory is /opt/web, a configured relative path of something/else would exclude the path /opt/web/something/else.
If a directory is excluded, then all of its children are also recursively excluded. For instance, a value something/else would exclude any files within something/else as well as files within other directories under something/else. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

pattern
A pattern to be excluded from the backup. The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). If the pattern causes a directory to be excluded, then all of the children of that directory are also recursively excluded. For instance, a value .*apache.* might match the /var/log/apache directory. This would exclude any files within /var/log/apache as well as files within other directories under /var/log/apache. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

Stage Configuration

The stage configuration section contains configuration options related to the stage action. The section indicates where data from peers can be staged to. This section can also (optionally) override the list of peers so that not all peers are staged. If you provide any peers in this section, then the list of peers here completely replaces the list of peers in the peers configuration section for the purposes of staging.

This is an example stage configuration section for the simple case where the list of peers is taken from peers configuration:

   <stage>
      <staging_dir>/opt/backup/stage</staging_dir>
   </stage>

This is an example stage configuration section that overrides the default list of peers:

   <stage>
      <staging_dir>/opt/backup/stage</staging_dir>
      <peer>
         <name>machine1</name>
         <type>local</type>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
      <peer>
         <name>machine2</name>
         <type>remote</type>
         <backup_user>backup</backup_user>
         <collect_dir>/opt/backup/collect</collect_dir>
      </peer>
   </stage>

The following elements are part of the stage configuration section:

staging_dir
Directory to stage files into. This is the directory into which the master stages collected data from each of the clients.
Within the staging directory, data is staged into date-based directories by peer name. For instance, peer "daystrom" backed up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom relative to the staging directory itself. This field is always required. The directory must contain enough free space to stage all of the files collected from all of the various machines in a backup pool. Many administrators set up purging to keep staging directories around for a week or more, which requires even more space. Restrictions: Must be an absolute path.

peer (local version)
Local client peer in a backup pool. This is a subsection which contains information about a specific local client peer to be staged (backed up). A local peer is one whose collect directory can be reached without requiring any rsh-based network calls. It is possible that a remote peer might be staged as a local peer if its collect directory is mounted to the master via NFS, AFS or some other method. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The local peer subsection must contain the following fields:

name
Name of the peer, typically a valid hostname. For local peers, this value is only used for reference. However, it is good practice to list the peer's hostname here, for consistency with remote peers. Restrictions: Must be non-empty, and unique among all peers.

type
Type of this peer. This value identifies the type of the peer. For a local peer, it must always be local. Restrictions: Must be local.

collect_dir
Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a local peer, the directory is assumed to be reachable via normal filesystem operations (i.e. cp).
Restrictions: Must be an absolute path.

peer (remote version)
Remote client peer in a backup pool. This is a subsection which contains information about a specific remote client peer to be staged (backed up). A remote peer is one whose collect directory can only be reached via an rsh-based network call. This section can be repeated as many times as is necessary. At least one remote or local peer must be configured. Remember, if you provide any local or remote peer in staging configuration, the global peer configuration is completely replaced by the staging peer configuration. The remote peer subsection must contain the following fields:

name
Hostname of the peer. For remote peers, this must be a valid DNS hostname or IP address which can be resolved during an rsh-based network call. Restrictions: Must be non-empty, and unique among all peers.

type
Type of this peer. This value identifies the type of the peer. For a remote peer, it must always be remote. Restrictions: Must be remote.

collect_dir
Collect directory to stage from for this peer. The master will copy all files in this directory into the appropriate staging directory. Since this is a remote peer, the directory is assumed to be reachable via rsh-based network operations (i.e. scp or the configured rcp command). Restrictions: Must be an absolute path.

backup_user
Name of backup user on the remote peer. This username will be used when copying files from the remote peer via an rsh-based network connection. This field is optional. If it doesn't exist, the backup will use the default backup user from the options section. Restrictions: Must be non-empty.

rcp_command
The rcp-compatible copy command for this peer. The rcp command should be the exact command used for remote copies, including any required options. If you are using scp, you should pass it the -B option, so scp will not ask for any user input (which could hang the backup). A common example is something like /usr/bin/scp -B. This field is optional.
If it doesn't exist, the backup will use the default rcp command from the options section. Restrictions: Must be non-empty.

Store Configuration

The store configuration section contains configuration options related to the store action. This section contains several optional fields. Most fields control the way media is written using the writer device.

This is an example store configuration section:

   <store>
      <source_dir>/opt/backup/stage</source_dir>
      <media_type>cdrw-74</media_type>
      <device_type>cdwriter</device_type>
      <target_device>/dev/cdrw</target_device>
      <target_scsi_id>0,0,0</target_scsi_id>
      <drive_speed>4</drive_speed>
      <check_data>Y</check_data>
      <check_media>Y</check_media>
      <warn_midnite>Y</warn_midnite>
      <no_eject>N</no_eject>
      <refresh_media_delay>15</refresh_media_delay>
      <eject_delay>2</eject_delay>
      <blank_behavior>
         <blank_mode>weekly</blank_mode>
         <blank_factor>1.3</blank_factor>
      </blank_behavior>
   </store>

The following elements are part of the store configuration section:

source_dir
Directory whose contents should be written to media. This directory must be a Cedar Backup staging directory, as configured in the staging configuration section. Only certain data from that directory (typically, data from the current day) will be written to disc. Restrictions: Must be an absolute path.

device_type
Type of the device used to write the media. This field controls which type of writer device will be used by Cedar Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD writers (dvdwriter). This field is optional. If it doesn't exist, the cdwriter device type is assumed. Restrictions: If set, must be either cdwriter or dvdwriter.

media_type
Type of the media in the device. Unless you want to throw away a backup disc every week, you are probably best off using rewritable media. You must choose a media type that is appropriate for the device type you chose above. For more information on media types, see the section called "Media and Device Types" (in Chapter 2, Basic Concepts). Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.

target_device
Filesystem device name for writer device. This value is required for both CD writers and DVD writers. This is the UNIX device name for the writer drive, for instance /dev/scd0 or a symlink like /dev/cdrw.
In some cases, this device name is used to directly write to media. This is true all of the time for DVD writers, and is true for CD writers when a SCSI id (see below) has not been specified. Besides this, the device name is also needed in order to do several pre-write checks (such as whether the device might already be mounted) as well as the post-write consistency check, if enabled. Note: some users have reported intermittent problems when using a symlink as the target device on Linux, especially with DVD media. If you experience problems, try using the real device name rather than the symlink. Restrictions: Must be an absolute path.

target_scsi_id
SCSI id for the writer device. This value is optional for CD writers and is ignored for DVD writers. If you have configured your CD writer hardware to work through the normal filesystem device path, then you can leave this parameter unset. Cedar Backup will just use the target device (above) when talking to cdrecord. Otherwise, if you have SCSI CD writer hardware or you have configured your non-SCSI hardware to operate like a SCSI device, then you need to provide Cedar Backup with a SCSI id it can use when talking with cdrecord. For the purposes of Cedar Backup, a valid SCSI identifier must either be in the standard SCSI identifier form scsibus,target,lun or in the specialized-method form method:scsibus,target,lun. An example of a standard SCSI identifier is 1,6,2. Today, the two most common examples of the specialized-method form are ATA:scsibus,target,lun and ATAPI:scsibus,target,lun, but you may occasionally see other values (like OLDATAPI in some forks of cdrecord). See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured. Restrictions: If set, must be a valid SCSI identifier.

drive_speed
Speed of the drive, i.e. 2 for a 2x device. This field is optional. If it doesn't exist, the underlying device-related functionality will use the default drive speed.
For DVD writers, it is best to leave this value unset, so growisofs can pick an appropriate speed. For CD writers, since media can be speed-sensitive, it is probably best to set a sensible value based on your specific writer and media. Restrictions: If set, must be an integer ≥ 1.

check_data
Whether the media should be validated. This field indicates whether a resulting image on the media should be validated after the write completes, by running a consistency check against it. If this check is enabled, the contents of the staging directory are directly compared to the media, and an error is reported if there is a mismatch. Practice shows that some drives can encounter an error when writing a multisession disc, but not report any problems. This consistency check allows us to catch the problem. By default, the consistency check is disabled, but most users should choose to enable it unless they have a good reason not to. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

check_media
Whether the media should be checked before writing to it. By default, Cedar Backup does not check its media before writing to it. It will write to any media in the backup device. If you set this flag to Y, Cedar Backup will make sure that the media has been initialized before writing to it. (Rewritable media is initialized using the initialize action.) If the configured media is not rewritable (like CD-R), then this behavior is modified slightly. For this kind of media, the check passes either if the media has been initialized or if the media appears unused. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

warn_midnite
Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the store operation has to cross a midnite boundary in order to find data to write to disc.
For instance, a warning would be generated if valid store data was only found in the day before or day after the current day. Configuration for some users is such that the store operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something "strange" might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

no_eject
Indicates that the writer device should not be ejected. Under some circumstances, Cedar Backup ejects (opens and closes) the writer device. This is done because some writer devices need to re-load the media before noticing a media state change (like a new session). For most writer devices this is safe, because they have a tray that can be opened and closed. If your writer device does not have a tray and Cedar Backup does not properly detect this, then set this flag. Cedar Backup will never issue an eject command to your writer. Note: this could cause problems with your backup. For instance, with many writers, the check data step may fail if the media is not reloaded first. If this happens to you, you may need to get a different writer device. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N).

refresh_media_delay
Number of seconds to delay after refreshing media. This field is optional. If it doesn't exist, no delay will occur. Some devices seem to take a little while to stabilize after refreshing the media (i.e. closing and opening the tray). During this period, operations on the media may fail. If your device behaves like this, you can try setting a delay of 10-15 seconds. Restrictions: If set, must be an integer ≥ 1.

eject_delay
Number of seconds to delay after ejecting the tray. This field is optional. If it doesn't exist, no delay will occur.
If your system seems to have problems opening and closing the tray, one possibility is that the open/close sequence is happening too quickly: either the tray isn't fully open when Cedar Backup tries to close it, or it doesn't report being open. To work around that problem, set an eject delay of a few seconds. Restrictions: If set, must be an integer ≥ 1.

blank_behavior
Optimized blanking strategy. For more information about Cedar Backup's optimized blanking strategy, see the section called "Optimized Blanking Strategy". This entire configuration section is optional. However, if you choose to provide it, you must configure both a blanking mode and a blanking factor.

blank_mode
Blanking mode. Restrictions: Must be one of "daily" or "weekly".

blank_factor
Blanking factor. Restrictions: Must be a floating point number ≥ 0.

Purge Configuration

The purge configuration section contains configuration options related to the purge action. This section contains a set of directories to be purged, along with information about the schedule at which they should be purged. Typically, Cedar Backup should be configured to purge collect directories daily (retain days of 0). If you are tight on space, staging directories can also be purged daily. However, if you have space to spare, you should consider purging about once per week. That way, if your backup media is damaged, you will be able to recreate the week's backup using the rebuild action. You should also purge the working directory periodically, once every few weeks or once per month. This way, if any unneeded files are left around, perhaps because a backup was interrupted or because configuration changed, they will eventually be removed. The working directory should not be purged any more frequently than once per week, otherwise you will risk destroying data used for incremental backups.
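The "retain days" rule amounts to a simple modification-time check; this sketch illustrates the rule only (it is not Cedar Backup's actual purge code, and the 7-day threshold is just an example):

```python
import os
import time

def is_purge_candidate(path, retain_days, now=None):
    """Illustrative sketch: a file becomes a candidate for removal once it
    was last modified more than retain_days days ago."""
    now = time.time() if now is None else now
    age_days = (now - os.path.getmtime(path)) / 86400.0
    return age_days > retain_days
```

With retain_days of 0, every file already present when the purge runs is a candidate, which is why collect directories purged daily are emptied on each run.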
This is an example purge configuration section:

   <purge>
      <dir>
         <abs_path>/opt/backup/stage</abs_path>
         <retain_days>7</retain_days>
      </dir>
      <dir>
         <abs_path>/opt/backup/collect</abs_path>
         <retain_days>0</retain_days>
      </dir>
   </purge>

The following elements are part of the purge configuration section:

dir
A directory to purge within. This is a subsection which contains information about a specific directory to purge within. This section can be repeated as many times as is necessary. At least one purge directory must be configured. The purge directory subsection contains the following fields:

abs_path
Absolute path of the directory to purge within. The contents of the directory will be purged based on age. The purge will remove any files that were last modified more than "retain days" days ago. Empty directories will also eventually be removed. The purge directory itself will never be removed. The path may be either a directory, a soft link to a directory, or a hard link to a directory. Soft links within the directory (if any) are treated as files. Restrictions: Must be an absolute path.

retain_days
Number of days to retain old files. Once it has been more than this many days since a file was last modified, it is a candidate for removal. Restrictions: Must be an integer ≥ 0.

Extensions Configuration

The extensions configuration section is used to configure third-party extensions to Cedar Backup. If you don't intend to use any extensions, or don't know what extensions are, then you can safely leave this section out of your configuration file. It is optional. Extensions configuration is used to specify "extended actions" implemented by code external to Cedar Backup. An administrator can use this section to map command-line Cedar Backup actions to third-party extension functions. Each extended action has a name, which is mapped to a Python function within a particular module. Each action also has an index associated with it. This index is used to properly order execution when more than one action is specified on the command line.
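Conceptually, the name-to-function mapping works like a dynamic import. This sketch shows the idea using Python's importlib; it is an illustration, not Cedar Backup's actual dispatcher, and the resolved example uses a standard-library function rather than a real extension:

```python
import importlib

def resolve_extension(module_name, function_name):
    """Illustrative sketch: look up the function that an extension's
    module/function configuration pair points at."""
    module = importlib.import_module(module_name)
    return getattr(module, function_name)

# For example, resolving the standard library function os.path.join:
func = resolve_extension("os.path", "join")
print(func("a", "b"))
```

An extension configured with module "foo" and function "bar" would be resolved the same way, so the module must be importable on the Python path when cback runs.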
The standard actions have predefined indexes, and extended actions are interleaved into the normal order of execution using those indexes. The collect action has index 100, the stage action has index 200, the store action has index 300 and the purge action has index 400.

Warning
Extended actions should always be configured to run before the standard action they are associated with. This is because of the way indicator files are used in Cedar Backup. For instance, the staging process considers the collect action to be complete for a peer if the file cback.collect can be found in that peer's collect directory. If you were to run the standard collect action before your other collect-like actions, the indicator file would be written after the collect action completes but before all of the other actions even run. Because of this, there's a chance the stage process might back up the collect directory before the entire set of collect-like actions have completed, and you would get no warning about this in your email!

So, imagine that a third-party developer provided a Cedar Backup extension to back up a certain kind of database repository, and you wanted to map that extension to the "database" command-line action. You have been told that this function is called "foo.bar()". You think of this backup as a "collect" kind of action, so you want it to be performed immediately before the collect action. To configure this extension, you would list an action with a name "database", a module "foo", a function name "bar" and an index of "99". This is how the hypothetical action would be configured:

<extensions>
   <action>
      <name>database</name>
      <module>foo</module>
      <function>bar</function>
      <index>99</index>
   </action>
</extensions>

The following elements are part of the extensions configuration section:

action This is a subsection that contains configuration related to a single extended action. This section can be repeated as many times as is necessary. The action subsection contains the following fields:

name Name of the extended action.
Restrictions: Must be a non-empty string consisting of only lower-case letters and digits.

module Name of the Python module associated with the extension function. Restrictions: Must be a non-empty string and a valid Python identifier.

function Name of the Python extension function within the module. Restrictions: Must be a non-empty string and a valid Python identifier.

index Index of action, for execution ordering. Restrictions: Must be an integer ≥ 0.

Setting up a Pool of One

Cedar Backup has been designed primarily for situations where there is a single master and a set of other clients that the master interacts with. However, it will just as easily work for a single machine (a backup pool of one). Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Tip
This setup procedure discusses how to set up Cedar Backup in the "normal case" for a pool of one. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly.
Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.

Note
There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note
Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly three times as big as the amount of data that will be backed up on a nightly basis, to allow for the data to be collected, staged, and then placed into an ISO filesystem image on disk. (This is one disadvantage to using Cedar Backup in single-machine pools, but in this day of really large hard drives, it might not be an issue.) Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e.
password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note
You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a pool of one, you must configure all four action-specific sections: collect, stage, store and purge. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

Warning
Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries.
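The backup tree and permissions recommended in Step 5 can be created with a few commands. This is a sketch under the assumptions used in this section: the /opt/backup location and the user name backup are just the examples from the text, not requirements, so substitute your own choices:

```shell
#!/bin/sh
# Sketch: create the pool-of-one backup tree with restrictive permissions.
# /opt/backup and the "backup" user follow the examples in the text;
# run as root if the tree lives under /opt.
set -e
BACKUP_ROOT="${BACKUP_ROOT:-/opt/backup}"
BACKUP_USER="${BACKUP_USER:-backup}"

# Collect, staging, and working (temporary) directories
mkdir -p "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"

# Permissions 700 keep sensitive collected data private to the backup user
chmod 700 "$BACKUP_ROOT" "$BACKUP_ROOT/collect" "$BACKUP_ROOT/stage" "$BACKUP_ROOT/tmp"
chown -R "$BACKUP_USER:$BACKUP_USER" "$BACKUP_ROOT"
```

Setting BACKUP_ROOT lets you place the tree under /var/backups or similar instead, as discussed in the Note above.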
Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test your backup.

Place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors and also mount the CD/DVD disc to be sure it can be read. If Cedar Backup ever completes "normally" but the disc that is created is not usable, please report this as a bug. ^[22] To be safe, always enable the consistency check option in the store configuration section.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, one way to configure the cron job is to add a line like this to your /etc/crontab file:

30 00 * * * root cback all

Or, you can create an executable script containing just these lines and place that file in the /etc/cron.daily directory:

#!/bin/sh
cback all

You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously.

Note
For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the "Single machine (pool of one)" entry in the file, and change the line so that the backup goes off when you want it to.

Setting up a Client Peer Node

Cedar Backup has been designed to back up entire "pools" of machines. In any given pool, there is one master and some number of clients. Most of the work takes place on the master, so configuring a client is a little simpler than configuring a master.
Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or the user that receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Note
See Appendix D, Securing Password-less SSH Connections for some important notes on how to optionally further secure password-less SSH connections to your clients.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer. Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused"
until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure the master in your backup pool.

You will not be able to complete the client configuration until at least step 3 of the master's configuration has been completed. In particular, you will need to know the master's public SSH identity to fully configure a client. To find the master's public SSH identity, log in as the backup user on the master and cat the public identity file ~/.ssh/id_rsa.pub:

user@machine> cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note
Standard Debian systems come with a user named backup.
You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). Finally, take the master's public SSH identity (which you found in step 3) and cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity value is pasted into the file all on one line, and that the authorized_keys file is owned by your backup user and has permissions 600. If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly as big as the amount of data that will be backed up on a nightly basis (more if you elect not to purge it all every night). You should create a collect directory and a working (temporary) directory.
One recommended layout is this:

/opt/
   backup/
      collect/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note
You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format" (above), create a configuration file for your machine. Since you are working with a client, you must configure the action-specific sections for the collect and purge actions. The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

Warning
Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file.
This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems. This command only validates configuration on the one client, not the master or any other clients in a pool. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened" must be "closed" appropriately.

Step 8: Test your backup.

Use the command cback --full collect purge. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) for errors.

Step 9: Modify the backup cron jobs.

Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback collect
30 06 * * * root cback purge

You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on the client so that the collect action completes before the master attempts to stage, and so that the purge action does not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. ^[23]

Note
For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the "Client machine" entries in the file, and change the lines so that the backup goes off when you want it to.

Setting up a Master Peer Node

Cedar Backup has been designed to back up entire "pools" of machines. In any given pool, there is one master and some number of clients.
Most of the work takes place on the master, so configuring a master is somewhat more complicated than configuring a client. Backups are designed to take place over an RSH or SSH connection. Because RSH is generally considered insecure, you are encouraged to use SSH rather than RSH. This document will only describe how to configure Cedar Backup to use SSH; if you want to use RSH, you're on your own. Once you complete all of these configuration steps, your backups will run as scheduled out of cron. Any errors that occur will be reported in daily emails to your root user (or whichever other user receives root's email). If you don't receive any emails, then you know your backup worked. Note: all of these configuration steps should be run as the root user, unless otherwise indicated.

Tip
This setup procedure discusses how to set up Cedar Backup in the "normal case" for a master. If you would like to modify the way Cedar Backup works (for instance, by ignoring the store stage and just letting your backup sit in a staging directory), you can do that. You'll just have to modify the procedure below based on information in the remainder of the manual.

Step 1: Decide when you will run your backup.

There are four parts to a Cedar Backup run: collect, stage, store and purge. The usual way of setting off these steps is through a set of cron jobs. Although you won't create your cron jobs just yet, you should decide now when you will run your backup so you are prepared for later. Keep in mind that you do not necessarily have to run the collect action on the master. See notes further below for more information. Backing up large directories and creating ISO filesystem images can be intensive operations, and could slow your computer down significantly. Choose a backup time that will not interfere with normal use of your computer.
Usually, you will want the backup to occur every day, but it is possible to configure cron to execute the backup only one day per week, three days per week, etc.

Warning
Because of the way Cedar Backup works, you must ensure that your backup always runs on the first day of your configured week. This is because Cedar Backup will only clear incremental backup information and re-initialize your media when running on the first day of the week. If you skip running Cedar Backup on the first day of the week, your backups will likely be "confused" until the next week begins, or until you re-run the backup using the --full flag.

Step 2: Make sure email works.

Cedar Backup relies on email for problem notification. This notification works through the magic of cron. Cron will email any output from each job it executes to the user associated with the job. Since by default Cedar Backup only writes output to the terminal if errors occur, this neatly ensures that notification emails will only be sent out if errors occur. In order to receive problem notifications, you must make sure that email works for the user which is running the Cedar Backup cron jobs (typically root). Refer to your distribution's documentation for information on how to configure email on your system. Note that you may prefer to configure root's email to forward to some other user, so you do not need to check the root user's mail in order to see Cedar Backup errors.

Step 3: Configure your writer device.

Before using Cedar Backup, your writer device must be properly configured. If you have configured your CD/DVD writer hardware to work through the normal filesystem device path, then you just need to know the path to the device on disk (something like /dev/cdrw). Cedar Backup will use this device path both when talking to a command like cdrecord and when doing filesystem operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device (either because it is a SCSI device or because you are using some sort of interface that makes it look like one). In this case, Cedar Backup will use the SCSI id when talking to cdrecord and the device path when running filesystem operations. See the section called "Configuring your Writer Device" for more information on writer devices and how they are configured.

Note
There is no need to set up your CD/DVD device if you have decided not to execute the store action. Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be used for CD writers, not DVD writers.

Step 4: Configure your backup user.

Choose a user to be used for backups. Some platforms may come with a "ready made" backup user. For other platforms, you may have to create a user yourself. You may choose any id you like, but a descriptive name such as backup or cback is a good choice. See your distribution's documentation for information on how to add a user.

Note
Standard Debian systems come with a user named backup. You may choose to stay with this user or create another one.

Once you have created your backup user, you must create an SSH keypair for it. Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa:

user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine

The default permissions for this directory should be fine. However, if the directory existed before you ran ssh-keygen, then you may need to modify the permissions. Make sure that the ~/.ssh directory is readable only by the backup user (i.e.
mode 700), that the ~/.ssh/id_rsa file is readable and writable only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is writable only by the backup user (i.e. mode 600 or mode 644). If you have other preferences or standard ways of setting up your users' SSH configuration (i.e. different key type, etc.), feel free to do things your way. The important part is that the master must be able to SSH into a client with no password entry required.

Step 5: Create your backup tree.

Cedar Backup requires a backup directory tree on disk. This directory tree must be roughly large enough to hold twice as much data as will be backed up from the entire pool on a given night, plus space for whatever is collected on the master itself. This will allow all three operations - collect, stage and store - to have enough space to complete. Note that if you elect not to purge the staging directory every night, you will need even more space. You should create a collect directory, a staging directory and a working (temporary) directory. One recommended layout is this:

/opt/
   backup/
      collect/
      stage/
      tmp/

If you will be backing up sensitive information (i.e. password files), it is recommended that these directories be owned by the backup user (whatever you named it), with permissions 700.

Note
You don't have to use /opt as the root of your directory structure. Use anything you would like. I use /opt because it is my "dumping ground" for filesystems that Debian does not manage. Some users have requested that the Debian packages set up a more "standard" location for backups right out-of-the-box. I have resisted doing this because it's difficult to choose an appropriate backup location from within the package. If you would prefer, you can create the backup directory structure within some existing Debian directory such as /var/backups or /var/tmp.

Step 6: Create the Cedar Backup configuration file.

Following the instructions in the section called "Configuration File Format"
(above), create a configuration file for your machine. Since you are working with a master machine, you would typically configure all four action-specific sections: collect, stage, store and purge.

Note
Note that the master can treat itself as a "client" peer for certain actions. As an example, if you run the collect action on the master, then you will stage that data by configuring a local peer representing the master. Something else to keep in mind is that you do not really have to run the collect action on the master. For instance, you may prefer to just use your master machine as a "consolidation point" machine that just collects data from the other client machines in a backup pool. In that case, there is no need to collect data on the master itself.

The usual location for the Cedar Backup config file is /etc/cback.conf. If you change the location, make sure you edit your cronjobs (below) to point the cback script at the correct config file (using the --config option).

Warning
Configuration files should always be writable only by root (or by the file owner, if the owner is not root). If you intend to place confidential information into the Cedar Backup configuration file, make sure that you set the filesystem permissions on the file appropriately. For instance, if you configure any extensions that require passwords or other similar information, you should make the file readable only to root or to the file owner (if the owner is not root).

Step 7: Validate the Cedar Backup configuration file.

Use the command cback validate to validate your configuration file. This command checks that the configuration file can be found and parsed, and also checks for typical configuration problems, such as invalid CD/DVD device entries. This command only validates configuration on the master, not any clients that the master might be configured to connect to. Note: the most common cause of configuration problems is in not closing XML tags properly. Any XML tag that is "opened"
must be "closed" appropriately.

Step 8: Test connectivity to client machines.

This step must wait until after your client machines have been at least partially configured. Once the backup user(s) have been configured on the client machine(s) in a pool, attempt an SSH connection to each client. Log in as the backup user on the master, and then use the command ssh user@machine, where user is the name of the backup user on the client machine, and machine is the name of the client machine. If you are able to log in successfully to each client without entering a password, then things have been configured properly. Otherwise, double-check that you followed the user setup instructions for the master and the clients.

Step 9: Test your backup.

Make sure that you have configured all of the clients in your backup pool. On all of the clients, execute cback --full collect. (You will probably have already tested this command on each of the clients, so it should succeed.) When all of the client backups have completed, place a valid CD/DVD disc in your drive, and then use the command cback --full all. You should execute this command as root. If the command completes with no output, then the backup was run successfully. Just to be sure that everything worked properly, check the logfile (/var/log/cback.log) on the master and each of the clients, and also mount the CD/DVD disc on the master to be sure it can be read. You may also want to run cback purge on the master and each client once you have finished validating that everything worked. If Cedar Backup ever completes "normally" but the disc that is created is not usable, please report this as a bug. ^[22] To be safe, always enable the consistency check option in the store configuration section.

Step 10: Modify the backup cron jobs.
Since Cedar Backup should be run as root, you should add a set of lines like this to your /etc/crontab file:

30 00 * * * root cback collect
30 02 * * * root cback stage
30 04 * * * root cback store
30 06 * * * root cback purge

You should consider adding the --output or -O switch to your cback command-line in cron. This will result in larger logs, but could help diagnose problems when commands like cdrecord or mkisofs fail mysteriously. You will need to coordinate the collect and purge actions on clients so that their collect actions complete before the master attempts to stage, and so that their purge actions do not begin until after the master has completed staging. Usually, allowing an hour or two between steps should be sufficient. ^[23] Note For general information about using cron, see the manpage for crontab(5). On a Debian system, execution of daily backups is controlled by the file /etc/cron.d/cedar-backup2. As installed, this file contains several different settings, all commented out. Uncomment the "Master machine" entries in the file, and change the lines so that the backup goes off when you want it to. Configuring your Writer Device Device Types In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two kinds of device types: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (e.g. /dev/dvd). CD writers can be referenced either through a SCSI id or through a filesystem device name. Which you use depends on your operating system and hardware. Devices identified by device name For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should specify only the device name in configuration, and leave the SCSI id unconfigured (either blank or removed completely). The writer device will be used both to write to the device and for filesystem operations:
for instance, when the media needs to be mounted to run the consistency check. Devices identified by SCSI id Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type. In order to use a SCSI device with Cedar Backup, you must know both the SCSI id and the device name. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations. A true SCSI device will always have an address of the form scsibus,target,lun (e.g. 1,6,2). This should hold true on most UNIX-like systems, including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that are available on your system. On some platforms, it is possible to reference non-SCSI writer devices (such as an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide both the filesystem device path and the emulated SCSI id in configuration, just like for a real SCSI device. You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (e.g. ATA:1,1,1). Linux Notes On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address of 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later). Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a "method" indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.
However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation. Finding your Linux CD Writer Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

cdrecord -prcap dev=/dev/cdrom

Running this command on my hardware gives output that looks like this (just the top few lines):

Device type    : Removable CD-ROM
Version        : 0
Response Format: 2
Capabilities   :
Vendor_info    : 'LITE-ON '
Identification : 'DVDRW SOHW-1673S'
Revision       : 'JS02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Drive capabilities, per MMC-3 page 2A:

If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into configuration as the device name, and leave the SCSI id blank. If this doesn't work, you should try to find an ATA or ATAPI device:

cdrecord -scanbus dev=ATA
cdrecord -scanbus dev=ATAPI

On my development system, I get a result that looks something like this for ATA:

scsibus1:
        1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
        1,1,0   101) *
        1,2,0   102) *
        1,3,0   103) *
        1,4,0   104) *
        1,5,0   105) *
        1,6,0   106) *
        1,7,0   107) *

Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into configuration as the device name, and put the emulated SCSI id (in this case, ATA:1,0,0) into configuration as the SCSI id. Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information. Mac OS X Notes On a Mac OS X (darwin) system, things get strange.
Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, e.g. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l. ^[24] Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the "automount" function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully. If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution. Optimized Blanking Strategy When the optimized blanking strategy has not been configured, Cedar Backup uses a simplistic approach: rewritable media is blanked at the beginning of every week, period. Since rewritable media can be blanked only a finite number of times before becoming unusable, some users (especially users of rewritable DVD media, with its large capacity) may prefer to blank the media less often. If the optimized blanking strategy is configured, Cedar Backup will use a blanking factor and attempt to determine whether future backups will fit on the current media. If it looks like backups will fit, then the media will not be blanked.
This feature will only be useful (assuming a single disc is used for the whole week's backups) if the estimated total size of the weekly backup is considerably smaller than the capacity of the media (no more than 50% of the total media capacity), and only if the size of the backup can be expected to remain fairly constant over time (no frequent rapid growth expected). There are two blanking modes: daily and weekly. If the weekly blanking mode is set, Cedar Backup will only estimate future capacity (and potentially blank the disc) once per week, on the starting day of the week. If the daily blanking mode is set, Cedar Backup will estimate future capacity (and potentially blank the disc) every time it is run. You should only use the daily blanking mode in conjunction with daily collect configuration; otherwise you will risk losing data. If you are using the daily blanking mode, you can typically set the blanking factor to 1.0. This will cause Cedar Backup to blank the media whenever there is not enough space to store the current day's backup. If you are using the weekly blanking mode, then finding the correct blanking factor will require some experimentation. Cedar Backup estimates future capacity based on the configured blanking factor. The disc will be blanked if the following relationship is true:

bytes available / (1 + bytes required) <= blanking factor

Another way to look at this is to consider the blanking factor as a sort of (upper) backup growth estimate:

total size of weekly backup / full backup size at the start of the week

This ratio can be estimated using a week or two of previous backups.
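Both the blanking decision and the factor estimate are simple arithmetic, and can be sketched in a few lines of Python. This is purely an illustration of the relationships described above, not Cedar Backup's actual implementation; the function names are my own, and the sizes used are the March staging-directory sizes from the worked example.

```python
def should_blank(bytes_available, bytes_required, blanking_factor):
    """Blank the disc when available capacity, relative to what the next
    backup needs, drops to or below the configured blanking factor."""
    return bytes_available / (1.0 + bytes_required) <= blanking_factor

def estimate_factor(full_backup_size, incremental_sizes):
    """Estimate a blanking factor as (total weekly backup size) divided by
    (full backup size at the start of the week), from one previous week."""
    weekly_total = full_backup_size + sum(incremental_sizes)
    return weekly_total / float(full_backup_size)

# Sizes (in KB, from du -s) for one week: a full backup of 6812 followed
# by six incremental backups.  The result is approximately 3.96, so a
# configured factor of 5.0 leaves a safety margin.
factor = estimate_factor(6812, [3044, 3152, 3056, 3060, 3056, 4776])
```

As the text notes, erring on the high side (5.0 rather than 4.0 here) trades extra blanking for a lower risk of running out of space mid-week.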
For instance, take this example, where March 10 is the start of the week and March 4 through March 9 represent the incremental backups from the previous week:

/opt/backup/staging# du -s 2007/03/*
3040   2007/03/01
3044   2007/03/02
6812   2007/03/03
3044   2007/03/04
3152   2007/03/05
3056   2007/03/06
3060   2007/03/07
3056   2007/03/08
4776   2007/03/09
6812   2007/03/10
11824  2007/03/11

In this case, the ratio is approximately 4:

(6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571

To be safe, you might choose to configure a factor of 5.0. Setting a higher value reduces the risk of exceeding media capacity mid-week, but might result in blanking the media more often than is necessary. If you run out of space mid-week, then the solution is to run the rebuild action. If this happens frequently, a higher blanking factor value should be used. ------------------------------------------------------------------------------- ^[19] See http://www.xml.com/pub/a/98/10/guide0.html for a basic introduction to XML. ^[20] See the section called "The Backup Process" in Chapter 2, Basic Concepts. ^[21] See http://docs.python.org/lib/re-syntax.html ^[22] See https://bitbucket.org/cedarsolutions/cedar-backup2/issues. ^[23] See the section called "Coordination between Master and Clients" in Chapter 2, Basic Concepts. ^[24] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information. Chapter 6. Official Extensions Table of Contents System Information Extension Amazon S3 Extension Subversion Extension MySQL Extension PostgreSQL Extension Mbox Extension Encrypt Extension Split Extension Capacity Extension System Information Extension The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a "broken" system. It is intended to be run either immediately before or immediately after the standard collect action.
This extension saves off the following information to the configured Cedar Backup collect directory. Saved-off data is always compressed using bzip2.

* Currently-installed Debian packages, via dpkg --get-selections
* Disk partition information, via fdisk -l
* System-wide mounted filesystem contents, via ls -laR

The Debian-specific information is only collected on systems where /usr/bin/dpkg exists. To enable this extension, add the following section to the Cedar Backup configuration file: sysinfo CedarBackup2.extend.sysinfo executeAction 99 This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own. Amazon S3 Extension The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to physical media. It is intended to replace the store action, but you can also use it alongside the store action if you'd prefer to back up your data in more than one place. This extension must be run after the stage action. The underlying functionality relies on the AWS CLI toolset. Before you use this extension, you need to set up your Amazon S3 account and configure the AWS CLI as detailed in Amazon's setup guide. The extension assumes that the backup is being executed as root, and switches over to the configured backup user to run the aws program. So, make sure you configure the AWS CLI tools as the backup user and not as root. (This is different from the cback-amazons3-sync tool, which executes AWS CLI commands as the same user that is running the tool.) When using physical media via the standard store action, there is an implicit limit to the size of a backup, since a backup must fit on a single disc. Since there is no physical media, no such limit exists for Amazon S3 backups. This leaves open the possibility that Cedar Backup might construct an unexpectedly-large backup that the administrator is not aware of.
Over time, this might become expensive, either in terms of network bandwidth or in terms of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable maximum size using the configuration elements shown below. If the backup fails, you have a chance to review what made the backup larger than you expected, and you can either correct the problem (e.g. remove a large temporary directory that got inadvertently included in the backup) or change configuration to take into account the new "normal" maximum size. You can optionally configure Cedar Backup to encrypt data before sending it to S3. To do that, provide a complete command line using the ${input} and ${output} variables to represent the original input file and the encrypted output file. This command will be executed as the backup user. For instance, you can use something like this with GPG:

/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}

The GPG mechanism depends on a strong passphrase for security. One way to generate a strong passphrase is using your system random number generator, e.g.:

dd if=/dev/urandom count=20 bs=1 | xxd -ps

(See StackExchange for more details about that advice.) If you decide to use encryption, make sure you save off the passphrase in a safe place, so you can get at your backup data later if you need to. And obviously, make sure to set permissions on the passphrase file so that it can only be read by the backup user. To enable this extension, add the following section to the Cedar Backup configuration file: amazons3 CedarBackup2.extend.amazons3 executeAction 201 This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own amazons3 configuration section.
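To make the template mechanism concrete, here is a sketch of how the ${input} and ${output} placeholders in an encrypt command could be expanded before the command is run as the backup user. This is my own illustration of the behavior described above, not Cedar Backup's actual implementation, and the file names are hypothetical.

```python
from string import Template

def expand_encrypt_command(template, input_file, output_file):
    """Fill in the ${input}/${output} placeholders of a configured
    encrypt command, returning the command as an argument list."""
    expanded = Template(template).substitute(input=input_file, output=output_file)
    # Naive whitespace split; adequate for paths that contain no spaces.
    return expanded.split()

command = expand_encrypt_command(
    "/usr/bin/gpg -c --no-use-agent --batch --yes "
    "--passphrase-file /home/backup/.passphrase -o ${output} ${input}",
    "/tmp/staging.tar.gz", "/tmp/staging.tar.gz.gpg")
```

Any mechanism that reads ${input} and writes ${output} can be substituted for GPG here; the extension only cares about the template contract.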
This is an example configuration section with encryption disabled: example.com-backup/staging The following elements are part of the Amazon S3 configuration section: warn_midnite Whether to generate warnings for crossing midnite. This field indicates whether warnings should be generated if the Amazon S3 operation has to cross a midnite boundary in order to find data to write to the cloud. For instance, a warning would be generated if valid data was only found in the day before or day after the current day. Configuration for some users is such that the amazons3 operation will always cross a midnite boundary, so they will not care about this warning. Other users will expect to never cross a boundary, and want to be notified that something "strange" might have happened. This field is optional. If it doesn't exist, then N will be assumed. Restrictions: Must be a boolean (Y or N). s3_bucket The name of the Amazon S3 bucket that data will be written to. This field configures the S3 bucket that your data will be written to. In S3, buckets are named globally. For uniqueness, you would typically use the name of your domain followed by some suffix, such as example.com-backup. If you want, you can specify a subdirectory within the bucket, such as example.com-backup/staging. Restrictions: Must be non-empty. encrypt Command used to encrypt backup data before upload to S3. If this field is provided, then data will be encrypted before it is uploaded to Amazon S3. You must provide the entire command used to encrypt a file, including the ${input} and ${output} variables. An example GPG command is shown above, but you can use any mechanism you choose. The command will be run as the configured backup user. Restrictions: If provided, must be non-empty. full_size_limit Maximum size of a full backup. If this field is provided, then a size limit will be applied to full backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail.
You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a value as described above, greater than zero. incr_size_limit Maximum size of an incremental backup. If this field is provided, then a size limit will be applied to incremental backups. If the total size of the selected staging directory is greater than the limit, then the backup will fail. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a value as described above, greater than zero. Subversion Extension The Subversion Extension is a Cedar Backup extension used to back up Subversion ^[25] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2. There are two different kinds of Subversion repositories at this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode. It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.
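An incremental svnadmin dump, as mentioned above, works on revision ranges: each run dumps only the revisions that have appeared since the previous backup. The sketch below shows one way such a command could be constructed; the helper name and the revision-tracking arguments are my own illustration, not how this extension necessarily does its bookkeeping.

```python
def svnadmin_dump_command(repository, last_backed_up, youngest):
    """Build an incremental svnadmin dump command covering the revisions
    that have appeared since the previous backup (last_backed_up).  The
    --incremental flag makes each dumped revision a delta, so successive
    dump files can be loaded in sequence to rebuild the repository."""
    start = last_backed_up + 1
    return ["svnadmin", "dump", repository,
            "--revision", "%d:%d" % (start, youngest), "--incremental"]

# Hypothetical example: revisions 11 through 15 are new since the last run.
command = svnadmin_dump_command("/opt/public/svn/docs", 10, 15)
```

The resulting dump output is text, which is why it compresses well with gzip or bzip2 as described below.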
To enable this extension, add the following section to the Cedar Backup configuration file: subversion CedarBackup2.extend.subversion executeAction 99 This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section: incr bzip2 /opt/public/svn/docs /opt/public/svn/web gzip /opt/private/svn daily The following elements are part of the Subversion configuration section: collect_mode Default collect mode. The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts). This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr. compress_mode Default compress mode. Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2. repository A Subversion repository to be collected. This is a subsection which contains information about a specific Subversion repository to be backed up.
This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured. The repository subsection contains the following fields: collect_mode Collect mode for this repository. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this repository. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the Subversion repository to back up. Restrictions: Must be an absolute path. repository_dir A Subversion parent repository directory to be collected. This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up. This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured. The repository_dir subsection contains the following fields: collect_mode Collect mode for this repository directory. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr. compress_mode Compress mode for this repository directory. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2. abs_path Absolute path of the Subversion parent directory to back up. Restrictions: Must be an absolute path. exclude List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this Subversion parent directory. This section is entirely optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following fields: rel_path A relative path to be excluded from the backup. The path is assumed to be relative to the Subversion parent directory itself. For instance, if the configured Subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. pattern A pattern to be excluded from the backup. The pattern must be a Python regular expression. ^[21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. Restrictions: Must be non-empty. MySQL Extension The MySQL Extension is a Cedar Backup extension used to back up MySQL ^[26] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Note This extension always produces a full backup. There is currently no facility for making incremental backups. If/when someone has a need for this and can describe how to do it, I will update this extension or provide another. The backup is done via the mysqldump command included with the MySQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that all configured databases can be backed up by a single user. Often, the "root" database user will be used. An alternative is to create a separate MySQL "backup" user and grant that user rights to read (but not write) various databases as needed. This second option is probably your best choice. Warning The extension accepts a username and password in configuration. However, you probably do not want to list those values in Cedar Backup configuration.
This is because Cedar Backup will provide these values to mysqldump via the command-line --user and --password switches, which will be visible to other users in the process listing. Instead, you should configure the username and password in one of MySQL's configuration files. Typically, that would be done by putting a stanza like this in /root/.my.cnf:

[mysqldump]
user = root
password =

Of course, if you are executing the backup as a user other than root, then you would create the file in that user's home directory instead. As a side note, it is also possible to configure .my.cnf such that Cedar Backup can back up a remote database server:

[mysqldump]
host = remote.host

For this to work, you will also need to grant privileges properly for the user which is executing the backup. See your MySQL documentation for more information about how this can be done. Regardless of whether you are using ~/.my.cnf or /etc/cback.conf to store database login and password information, you should be careful about who is allowed to view that information. Typically, this means locking down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file: mysql CedarBackup2.extend.mysql executeAction 99 This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mysql configuration section. This is an example MySQL configuration section: bzip2 Y If you have decided to configure login information in Cedar Backup rather than using MySQL configuration, then you would add the username and password fields to configuration: root password bzip2 Y The following elements are part of the MySQL configuration section: user Database user. The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user.
Typically, this would be root (i.e. the database root user, not the system root user). This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty. password Password associated with the database user. This value is optional. You should probably configure the username and password in MySQL configuration instead, as discussed above. Restrictions: If provided, must be non-empty. compress_mode Compress mode. MySQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all MySQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. PostgreSQL Extension Community-contributed Extension This is a community-contributed extension provided by Antoine Beaupre ("The Anarcat"). I have added regression tests around the configuration parsing code, and I will maintain this section in the user manual based on his source code documentation. Unfortunately, I don't have any PostgreSQL databases with which to test the functional code.
While I have code-reviewed the code, and it looks both sensible and safe, I have to rely on the author to ensure that it works properly. The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL ^[27] databases via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. The backup is done via the pg_dump or pg_dumpall commands included with the PostgreSQL product. Output can be compressed using gzip or bzip2. Administrators can configure the extension either to back up all databases or to back up only specific databases. The extension assumes that the current user has passwordless access to the database, since there is no easy way to pass a password to the pg_dump client. This can be accomplished using appropriate configuration in the pg_hba.conf file. This extension always produces a full backup. There is currently no facility for making incremental backups. Warning Once you place PostgreSQL configuration into the Cedar Backup configuration file, you should be careful about who is allowed to see that information. This is because PostgreSQL configuration will contain information about available PostgreSQL databases and usernames. Typically, you might want to lock down permissions so that only the file owner can read the file contents (i.e. use mode 0600). To enable this extension, add the following section to the Cedar Backup configuration file: postgresql CedarBackup2.extend.postgresql executeAction 99 This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own postgresql configuration section. This is an example PostgreSQL configuration section: bzip2 username Y If you decide to back up specific databases, then you would list them individually, like this: bzip2 username N db1 db2 The following elements are part of the PostgreSQL configuration section: user Database user.
The database user that the backup should be executed as. Even if you list more than one database (below), all backups must be done as the same user. This value is optional. Consult your PostgreSQL documentation for information on how to configure a default database user outside of Cedar Backup, and for information on how to specify a database password when you configure a user within Cedar Backup. You will probably want to modify pg_hba.conf. Restrictions: If provided, must be non-empty. compress_mode Compress mode. PostgreSQL database dumps are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. Restrictions: Must be one of none, gzip or bzip2. all Indicates whether to back up all databases. If this value is Y, then all PostgreSQL databases will be backed up. If this value is N, then one or more specific databases must be specified (see below). If you choose this option, the entire database backup will go into one big dump file. Restrictions: Must be a boolean (Y or N). database Named database to be backed up. If you choose to specify individual databases rather than all databases, then each database will be backed up into its own dump file. This field can be repeated as many times as is necessary. At least one database must be configured if the all option (above) is set to N. You may not configure any individual databases if the all option is set to Y. Restrictions: Must be non-empty. Mbox Extension The Mbox Extension is a Cedar Backup extension used to incrementally back up UNIX-style "mbox" mail folders via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action. Mbox mail folders are not well-suited to being backed up by the normal Cedar Backup incremental backup process. This is because active folders are typically appended to on a daily basis.
This forces the incremental backup process to back them up every day in order to avoid losing data. This can result in quite a bit of wasted space when backing up large mail folders. What the mbox extension does is leverage the grepmail utility to back up only email messages which have been received since the last incremental backup. This way, even if a folder is added to every day, only the recently-added messages are backed up. This can potentially save a lot of space. Each configured mbox file or directory can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental) and the output can be compressed using either gzip or bzip2.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>mbox</name>
         <module>CedarBackup2.extend.mbox</module>
         <function>executeAction</function>
         <index>99</index>
      </action>
   </extensions>

This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own mbox configuration section. This is an example mbox configuration section:

   <mbox>
      <collect_mode>incr</collect_mode>
      <compress_mode>gzip</compress_mode>
      <file>
         <abs_path>/home/user1/mail/greylist</abs_path>
         <collect_mode>daily</collect_mode>
      </file>
      <dir>
         <abs_path>/home/user2/mail</abs_path>
      </dir>
      <dir>
         <abs_path>/home/user3/mail</abs_path>
         <exclude>
            <rel_path>spam</rel_path>
            <pattern>.*debian.*</pattern>
         </exclude>
      </dir>
   </mbox>

Configuration is much like the standard collect action. Differences come from the fact that mbox directories are not collected recursively. Unlike collect configuration, exclusion information can only be configured at the mbox directory level (there are no global exclusions). Another difference is that no absolute exclusion paths are allowed; only relative path exclusions and patterns are supported.

The following elements are part of the mbox configuration section:

collect_mode
   Default collect mode. The collect mode describes how frequently an mbox file or directory is backed up. The mbox extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter 2, Basic Concepts). This value is the collect mode that will be used by default during the backup process.
   Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data. Restrictions: Must be one of daily, weekly or incr.

compress_mode
   Default compress mode. Mbox file or directory backups are just text, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all. This value is the compress mode that will be used by default during the backup process. Individual files or directories (below) may override this value. If all individual files or directories provide their own value, then this default value may be omitted from configuration. Restrictions: Must be one of none, gzip or bzip2.

file
   An individual mbox file to be collected. This is a subsection which contains information about an individual mbox file to be backed up. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The file subsection contains the following fields:

   collect_mode
      Collect mode for this file. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr.

   compress_mode
      Compress mode for this file. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2.

   abs_path
      Absolute path of the mbox file to back up. Restrictions: Must be an absolute path.

dir
   An mbox directory to be collected. This is a subsection which contains information about an mbox directory to be backed up. An mbox directory is a directory containing mbox files. Every file in an mbox directory is assumed to be an mbox file. Mbox directories are not collected recursively.
   Only the files immediately within the configured directory will be backed up, and any subdirectories will be ignored. This section can be repeated as many times as is necessary. At least one mbox file or directory must be configured. The dir subsection contains the following fields:

   collect_mode
      Collect mode for this directory. This field is optional. If it doesn't exist, the backup will use the default collect mode. Restrictions: Must be one of daily, weekly or incr.

   compress_mode
      Compress mode for this directory. This field is optional. If it doesn't exist, the backup will use the default compress mode. Restrictions: Must be one of none, gzip or bzip2.

   abs_path
      Absolute path of the mbox directory to back up. Restrictions: Must be an absolute path.

   exclude
      List of paths or patterns to exclude from the backup. This is a subsection which contains a set of paths and patterns to be excluded within this mbox directory. This section is entirely optional, and if it exists can also be empty. The exclude subsection can contain one or more of each of the following fields:

      rel_path
         A relative path to be excluded from the backup. The path is assumed to be relative to the mbox directory itself. For instance, if the configured mbox directory is /home/user2/mail, a configured relative path of SPAM would exclude the path /home/user2/mail/SPAM. This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

      pattern
         A pattern to be excluded from the backup. The pattern must be a Python regular expression. ^[21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $). This field can be repeated as many times as is necessary. Restrictions: Must be non-empty.

Encrypt Extension

The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It does this by encrypting the contents of a master's staging directory each day after the stage action is run.
This way, backed-up data is encrypted both when sitting on the master and when written to disc. This extension must be run before the standard store action, otherwise unencrypted data will be written to disc. There are several different ways encryption could have been built in to or layered on to Cedar Backup. I asked the mailing list for opinions on the subject in January 2007 and did not get a lot of feedback, so I chose the option that was simplest to understand and simplest to implement. If other encryption use cases make themselves known in the future, this extension can be enhanced or replaced. Currently, this extension supports only GPG. However, it would be straightforward to support other public-key encryption mechanisms, such as OpenSSL.

Warning: If you decide to encrypt your backups, be absolutely sure that you have your GPG secret key saved off someplace safe, someplace other than on your backup disc. If you lose your secret key, your backup will be useless. I suggest that before you rely on this extension, you should execute a dry run and make sure you can successfully decrypt the backup that is written to disc.

Before configuring the Encrypt extension, you must configure GPG. Either create a new keypair or use an existing one. Determine which user will execute your backup (typically root) and have that user import and lsign the public half of the keypair. Then, save off the secret half of the keypair someplace safe, apart from your backup (i.e. on a floppy disk or USB drive). Make sure you know the recipient name associated with the public key, because you'll need it to configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and it executes cleanly with no user interaction required, you should be OK.)

An encrypted backup has the same file structure as a normal backup, so all of the instructions in Appendix C, Data Recovery apply.
The only difference is that encrypted files will have an additional .gpg extension (so for instance file.tar.gz becomes file.tar.gz.gpg). To recover encrypted data, simply log on as a user which has access to the secret key and decrypt the .gpg file that you are interested in. Then, recover the data as usual.

Note: I am being intentionally vague about how to configure and use GPG, because I do not want to encourage neophytes to blindly use this extension. If you do not already understand GPG well enough to follow the two paragraphs above, do not use this extension. Instead, before encrypting your backups, check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/manual.html and gain an understanding of how encryption can help you or hurt you.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>encrypt</name>
         <module>CedarBackup2.extend.encrypt</module>
         <function>executeAction</function>
         <index>301</index>
      </action>
   </extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own encrypt configuration section. This is an example Encrypt configuration section:

   <encrypt>
      <encrypt_mode>gpg</encrypt_mode>
      <encrypt_target>Backup User</encrypt_target>
   </encrypt>

The following elements are part of the Encrypt configuration section:

encrypt_mode
   Encryption mode. This value specifies which encryption mechanism will be used by the extension. Currently, only the GPG public-key encryption mechanism is supported. Restrictions: Must be gpg.

encrypt_target
   Encryption target. The value in this field is dependent on the encryption mode. For the gpg mode, this is the name of the recipient whose public key will be used to encrypt the backup data, i.e. the value accepted by gpg -r.

Split Extension

The Split Extension is a Cedar Backup extension used to split up large files within staging directories. It is probably only useful in combination with the cback-span command, which requires individual files within staging directories to each be smaller than a single disc.
You would normally run this action immediately after the standard stage action, but you could also choose to run it by hand immediately before running cback-span. The split extension uses the standard UNIX split tool to split the large files up. This tool simply splits the files into fixed-size chunks; it has no knowledge of file formats. Note: this means that in order to recover the data in your original large file, you must have every file that the original file was split into. Think carefully about whether this is what you want. It doesn't sound like a huge limitation. However, cback-span might put an individual file on any disc in a set; the files split from one larger file will not necessarily be together. That means you will probably need every disc in your backup set in order to recover any data from the backup set.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>split</name>
         <module>CedarBackup2.extend.split</module>
         <function>executeAction</function>
         <index>299</index>
      </action>
   </extensions>

This extension relies on the options and staging configuration sections in the standard Cedar Backup configuration file, and then also requires its own split configuration section. This is an example Split configuration section:

   <split>
      <size_limit>250 MB</size_limit>
      <split_size>100 MB</split_size>
   </split>

The following elements are part of the Split configuration section:

size_limit
   Size limit. Files with a size strictly larger than this limit will be split by the extension. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a size as described above.

split_size
   Split size. This is the size of the chunks that a large file will be split into. The final chunk may be smaller if the split size doesn't divide evenly into the file size. You can enter this value in two different forms.
   It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". Restrictions: Must be a size as described above.

Capacity Extension

The capacity extension checks the current capacity of the media in the writer and prints a warning if the media exceeds an indicated capacity. The capacity is indicated either by a maximum percentage utilized or by a minimum number of bytes that must remain unused. This action can be run at any time, but is probably best run as the last action on any given day, so you get as much notice as possible that your media is full and needs to be replaced.

To enable this extension, add the following section to the Cedar Backup configuration file:

   <extensions>
      <action>
         <name>capacity</name>
         <module>CedarBackup2.extend.capacity</module>
         <function>executeAction</function>
         <index>299</index>
      </action>
   </extensions>

This extension relies on the options and store configuration sections in the standard Cedar Backup configuration file, and then also requires its own capacity configuration section. This is an example Capacity configuration section that configures the extension to warn if the media is more than 95.5% full:

   <capacity>
      <max_percentage>95.5</max_percentage>
   </capacity>

This example configures the extension to warn if the media has fewer than 16 MB free:

   <capacity>
      <min_bytes>16 MB</min_bytes>
   </capacity>

The following elements are part of the Capacity configuration section:

max_percentage
   Maximum percentage of the media that may be utilized. You must provide either this value or the min_bytes value. Restrictions: Must be a floating point number between 0.0 and 100.0.

min_bytes
   Minimum number of free bytes that must be available. You can enter this value in two different forms. It can either be a simple number, in which case the value is assumed to be in bytes; or it can be a number followed by a unit (KB, MB, GB). Valid examples are "10240", "250 MB" or "1.1 GB". You must provide either this value or the max_percentage value. Restrictions: Must be a byte quantity as described above.
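The byte-quantity forms described above ("10240", "250 MB", "1.1 GB") are simple to parse. The following sketch is purely illustrative, not Cedar Backup's actual ByteQuantity implementation, and it assumes binary (1024-based) units:

```python
import re

# Illustrative parser for quantities like "10240", "250 MB" or "1.1 GB".
# This is NOT Cedar Backup's ByteQuantity class; it only demonstrates
# the documented format, assuming binary (1024-based) units.
_UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_byte_quantity(text):
    """Return the number of bytes represented by a quantity string."""
    match = re.match(r"^\s*([0-9]+(?:\.[0-9]+)?)\s*(KB|MB|GB)?\s*$", text)
    if match is None:
        raise ValueError("Not a valid byte quantity: %s" % text)
    value = float(match.group(1))
    unit = match.group(2)  # None for a plain number of bytes
    return int(value * _UNITS.get(unit, 1))
```

For example, parse_byte_quantity("16 MB") yields 16777216, the threshold used in the min_bytes example above.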
-------------------------------------------------------------------------------

^[25] See http://subversion.org
^[26] See http://www.mysql.com
^[27] See http://www.postgresql.org/

Appendix A. Extension Architecture Interface

The Cedar Backup Extension Architecture Interface is the application programming interface used by third-party developers to write Cedar Backup extensions. This appendix briefly specifies the interface in enough detail for someone to successfully implement an extension. You will recall that Cedar Backup extensions are third-party pieces of code which extend Cedar Backup's functionality. Extensions can be invoked from the Cedar Backup command line and are allowed to place their configuration in Cedar Backup's configuration file. There is a one-to-one mapping between a command-line extended action and an extension function. The mapping is configured in the Cedar Backup configuration file using a section something like this:

   <extensions>
      <action>
         <name>database</name>
         <module>foo</module>
         <function>bar</function>
         <index>101</index>
      </action>
   </extensions>

In this case, the action "database" has been mapped to the extension function foo.bar(). Extension functions can take any actions they would like to once they have been invoked, but must abide by these rules:

1. Extensions may not write to stdout or stderr using functions such as print or sys.write.

2. All logging must take place using the Python logging facility. Flow-of-control logging should happen on the CedarBackup2.log topic. Authors can assume that ERROR will always go to the terminal, that INFO and WARN will always be logged, and that DEBUG will be ignored unless debugging is enabled.

3. Any time an extension invokes a command-line utility, it must be done through the CedarBackup2.util.executeCommand function. This will help keep Cedar Backup safer from format-string attacks, and will make it easier to consistently log command-line process output.

4. Extensions may not return any value.

5. Extensions must throw a Python exception containing a descriptive message if processing fails.
   Extension authors can use their judgement as to what constitutes failure; however, any problems during execution should result in either a thrown exception or a logged message.

6. Extensions may rely only on Cedar Backup functionality that is advertised as being part of the public interface. This means that extensions cannot directly make use of methods, functions or values starting with the _ character. Furthermore, extensions should only rely on parts of the public interface that are documented in the online Epydoc documentation.

7. Extension authors are encouraged to extend the Cedar Backup public interface through normal methods of inheritance. However, no extension is allowed to directly change Cedar Backup code in a way that would affect how Cedar Backup itself executes when the extension has not been invoked. For instance, extensions would not be allowed to add new command-line options or new writer types.

8. Extensions must be written to assume an empty locale set (no $LC_* settings) and $LANG=C. For the typical open-source software project, this would imply writing output-parsing code against the English localization (if any). The executeCommand function does sanitize the environment to enforce this configuration.

Extension functions take three arguments: the path to configuration on disk, a CedarBackup2.cli.Options object representing the command-line options in effect, and a CedarBackup2.config.Config object representing parsed standard configuration.

   def function(configPath, options, config):
      """Sample extension function."""
      pass

This interface is structured so that simple extensions can use standard configuration without having to parse it for themselves, but more complicated extensions can get at the configuration file on disk and parse it again as needed. The interface to the CedarBackup2.cli.Options and CedarBackup2.config.Config classes has been thoroughly documented using Epydoc, and the documentation is available on the Cedar Backup website.
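Putting the rules and the three-argument signature together, a minimal extension function might look like the following sketch. The module and logger names are hypothetical; only the signature and the logging/exception conventions come from the interface described above.

```python
import logging

# Hypothetical example extension.  Per rule 2, flow-of-control logging
# happens on the CedarBackup2.log topic rather than stdout/stderr.
logger = logging.getLogger("CedarBackup2.log.extend.example")

def executeAction(configPath, options, config):
    """Hypothetical extension entry point following the documented rules."""
    logger.info("Executing the example extended action.")
    if config is None:
        # Rule 5: failures are signaled by raising a descriptive exception.
        raise ValueError("Standard configuration was not provided.")
    logger.debug("Standard configuration was parsed from %s.", configPath)
    # Rule 4: extensions do not return a value.
```

Cedar Backup would invoke this function with the configuration path, the parsed Options object, and the parsed Config object; the function either completes silently (logging as it goes) or raises.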
The interface is guaranteed to change only in backwards-compatible ways unless the Cedar Backup major version number is bumped (i.e. from 2 to 3). If an extension needs to add its own configuration information to the Cedar Backup configuration file, this extra configuration must be added in a new configuration section using a name that does not conflict with standard configuration or other known extensions. For instance, our hypothetical database extension might require configuration indicating the path to some repositories to back up. This information might go into a section something like this:

   <database>
      <repository>/path/to/repo1</repository>
      <repository>/path/to/repo2</repository>
   </database>

In order to read this new configuration, the extension code can either inherit from the Config object and create a subclass that knows how to parse the new database config section, or can write its own code to parse whatever it needs out of the file. Either way, the resulting code is completely independent of the standard Cedar Backup functionality.

Appendix B. Dependencies

Python 2.7

Cedar Backup is written in Python 2 and requires version 2.7 or greater of the language. Python 2.7 was originally released on 4 Jul 2010, and is the last supported release of Python 2. As of this writing, all current Linux and BSD distributions include it.

+------------------------------------------------------------------+
| Source | URL                                                     |
|--------+---------------------------------------------------------|
|upstream|http://www.python.org                                    |
|--------+---------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/python/python2.7       |
|--------+---------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=python|
+------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.
RSH Server and Client

Although Cedar Backup will technically work with any RSH-compatible server and client pair (such as the classic "rsh" client), most users should only use an SSH (secure shell) server and client. The de facto standard today is OpenSSH. Some systems package the server and the client together, and others package the server and the client separately. Note that master nodes need an SSH client, and client nodes need to run an SSH server.

+-------------------------------------------------------------------+
| Source | URL                                                      |
|--------+----------------------------------------------------------|
|upstream|http://www.openssh.com/                                   |
|--------+----------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/net/ssh                 |
|--------+----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=openssh|
+-------------------------------------------------------------------+

If you can't find SSH client or server packages for your system, install from the package source, using the "upstream" link.

mkisofs

The mkisofs command is used to create ISO filesystem images that can later be written to backup media. On Debian platforms, mkisofs is not distributed and genisoimage is used instead. The Debian package takes care of this for you.

+-------------------------------------------------------------------+
| Source | URL                                                      |
|--------+----------------------------------------------------------|
|upstream|https://en.wikipedia.org/wiki/Cdrtools                    |
|--------+----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=mkisofs|
+-------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

cdrecord

The cdrecord command is used to write ISO images to CD media in a backup device.
On Debian platforms, cdrecord is not distributed and wodim is used instead. The Debian package takes care of this for you.

+--------------------------------------------------------------------+
| Source | URL                                                       |
|--------+-----------------------------------------------------------|
|upstream|https://en.wikipedia.org/wiki/Cdrtools                     |
|--------+-----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=cdrecord|
+--------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

dvd+rw-tools

The dvd+rw-tools package provides the growisofs utility, which is used to write ISO images to DVD media in a backup device.

+------------------------------------------------------------------------+
| Source | URL                                                           |
|--------+---------------------------------------------------------------|
|upstream|http://fy.chalmers.se/~appro/linux/DVD+RW/                     |
|--------+---------------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/dvd+rw-tools           |
|--------+---------------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=dvd+rw-tools|
+------------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

eject and volname

The eject command is used to open and close the tray on a backup device (if the backup device has a tray). Sometimes, the tray must be opened and closed in order to "reset" the device so it notices recent changes to a disc. The volname command is used to determine the volume name of media in a backup device.
+-----------------------------------------------------------------+
| Source | URL                                                    |
|--------+--------------------------------------------------------|
|upstream|http://sourceforge.net/projects/eject                   |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/eject           |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=eject|
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

mount and umount

The mount and umount commands are used to mount and unmount CD/DVD media after it has been written, in order to run a consistency check.

+-----------------------------------------------------------------+
| Source | URL                                                    |
|--------+--------------------------------------------------------|
|upstream|https://www.kernel.org/pub/linux/utils/util-linux/      |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/base/mount            |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=mount|
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

grepmail

The grepmail command is used by the mbox extension to pull out only recent messages from mbox mail folders.
+--------------------------------------------------------------------+
| Source | URL                                                       |
|--------+-----------------------------------------------------------|
|upstream|http://sourceforge.net/projects/grepmail/                  |
|--------+-----------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/mail/grepmail            |
|--------+-----------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=grepmail|
+--------------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

gpg

The gpg command is used by the encrypt extension to encrypt files.

+-----------------------------------------------------------------+
| Source | URL                                                    |
|--------+--------------------------------------------------------|
|upstream|https://www.gnupg.org/                                  |
|--------+--------------------------------------------------------|
|Debian  |http://packages.debian.org/stable/utils/gnupg           |
|--------+--------------------------------------------------------|
|RPM     |http://rpmfind.net/linux/rpm2html/search.php?query=gnupg|
+-----------------------------------------------------------------+

If you can't find a package for your system, install from the package source, using the "upstream" link.

split

The split command is used by the split extension to split up large files. This command is typically part of the core operating system install and is not distributed in a separate package.

AWS CLI

AWS CLI is Amazon's official command-line tool for interacting with the Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup data up to Amazon S3 cloud storage. After you install AWS CLI, you need to configure your connection to AWS with an appropriate access id and access key. Amazon provides a good setup guide.
+--------------------------------------------------+
| Source | URL                                     |
|--------+-----------------------------------------|
|upstream|http://aws.amazon.com/documentation/cli/ |
|--------+-----------------------------------------|
|Debian  |https://packages.debian.org/stable/awscli|
+--------------------------------------------------+

The initial implementation of the amazons3 extension was written using AWS CLI 1.4. As of this writing, not all Linux distributions include a package for this version. On these platforms, the easiest way to install it is via pip: apt-get install python-pip, and then pip install awscli. The Debian package includes an appropriate dependency starting with the jessie release.

Chardet

The cback-amazons3-sync command relies on the Chardet Python package to check filename encoding. You only need this package if you are going to use the sync tool.

+----------------------------------------------------------+
| Source | URL                                             |
|--------+-------------------------------------------------|
|upstream|https://github.com/chardet/chardet               |
|--------+-------------------------------------------------|
|Debian  |https://packages.debian.org/stable/python-chardet|
+----------------------------------------------------------+

Appendix C. Data Recovery

Table of Contents

   Finding your Data
   Recovering Filesystem Data
      Full Restore
      Partial Restore
   Recovering MySQL Data
   Recovering Subversion Data
   Recovering Mailbox Data
   Recovering Data split by the Split Extension

Finding your Data

The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore off backup media, or out of some existing staging data that has not yet been purged. The only difference is, if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media.
(And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.) Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name. This is the root directory of my example disc:

   root:/mnt/cdrw# ls -l
   total 4
   drwxr-x---  3 backup backup  4096 Sep 01 06:30 2005/

In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006). Within each year directory is one subdirectory for each month represented in the backup.

   root:/mnt/cdrw/2005# ls -l
   total 2
   dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/

In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005). Within each month directory is one subdirectory for each day represented in the backup.

   root:/mnt/cdrw/2005/09# ls -l
   total 8
   dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
   dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
   dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
   dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/

Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven.
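Because the layout is strictly year/month/day, lexicographic order is chronological order, so a script can locate the most recent daily directory by sorting names at each level. The helper below is purely illustrative and is not part of Cedar Backup:

```python
import os

def newest_daily_dir(root):
    """Descend the year, month, and day levels under a staging root or
    mounted backup disc, returning the most recent daily directory.
    Relies on the YYYY/MM/DD naming so that sorted order is date order.
    Illustrative helper only, not part of Cedar Backup itself."""
    path = root
    for _ in range(3):  # year, then month, then day
        subdirs = sorted(entry for entry in os.listdir(path)
                         if os.path.isdir(os.path.join(path, entry)))
        if not subdirs:
            raise IOError("No dated subdirectories under %s" % path)
        path = os.path.join(path, subdirs[-1])
    return path
```

For the example disc above, newest_daily_dir("/mnt/cdrw") would return /mnt/cdrw/2005/09/11.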
Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

   root:/mnt/cdrw/2005/09/07# ls -l
   total 10
   dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
   -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
   dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
   dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/

In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27. Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files "collected" from Cedar Backup extensions or by other third-party processes on your system.

   root:/mnt/cdrw/2005/09/07/host1# ls -l
   total 157976
   -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
   -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
   -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
   -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
   -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
   -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
   -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
   -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
   -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
   -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
   -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
   -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
   -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
   -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
   -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2

As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions.
The resulting backup files are named in a way that makes it easy to determine what they represent. Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before ".tar.bz2") represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki. The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension. The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the "all" flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2). Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. perhaps 783-785, followed by 786-800, etc.

Recovering Filesystem Data

Filesystem data is gathered by the standard Cedar Backup collect action. This data is placed into files of the form *.tar. The first part of the name (before ".tar") represents the path to the directory. For example, boot.tar would contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/lib/jspwiki. (As a special case, data from the root directory would be placed in -.tar.) Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz) extension, depending on what compression you specified in configuration.
If you are using full backups every day, the latest backup data is always within the latest daily directory stored on your backup media or within your staging directory. If you have some or all of your directories configured to do incremental backups, then the first day of the week holds the full backups and the other days represent incremental differences relative to that first day of the week.

Where to extract your backup

If you are restoring a home directory or some other non-system directory as part of a full restore, it is probably fine to extract the backup directly into the filesystem. If you are restoring a system directory like /etc as part of a full restore, extracting directly into the filesystem is likely to break things, especially if you re-installed a newer version of your operating system than the one you originally backed up. It's better to extract directories like this to a temporary location and pick out only the files you find you need. When doing a partial restore, I suggest always extracting to a temporary location. Doing it this way gives you more control over what you restore, and helps you avoid compounding your original problem with another one (like overwriting the wrong file, oops).

Full Restore

To do a full system restore, find the newest applicable full backup and extract it. If you have some incremental backups, extract them into the same place as the full backup, one by one, starting from oldest to newest. (This way, if a file changed every day, you will always get the latest one.) All of the backed-up files are stored in the tar file in a relative fashion, so you can extract from the tar file either directly into the filesystem or into a temporary location. For example, to restore boot.tar.bz2 directly into /boot, execute tar from your root directory (/):

root:/# bzcat boot.tar.bz2 | tar xvf -

Of course, use zcat or just cat, depending on what kind of compression is in use.
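The full-plus-incrementals sequence can be sketched end to end. This is a minimal demonstration using tiny stand-in archives built on the spot (the archive names are hypothetical, not produced by Cedar Backup); the point is that extracting oldest to newest lets the newest copy of each file win:

```shell
# Build stand-ins for a full backup and a later incremental of /boot.
set -e
work=$(mktemp -d)
cd "$work"

mkdir -p boot
echo "kernel-v1" > boot/vmlinuz
tar cjf full-boot.tar.bz2 boot       # the full backup
echo "kernel-v2" > boot/vmlinuz
tar cjf incr-boot.tar.bz2 boot       # a later incremental
rm -rf boot

# Restore: extract the full backup first, then each incremental,
# oldest to newest, so newer copies of a file overwrite older ones.
mkdir restore
for archive in full-boot.tar.bz2 incr-boot.tar.bz2 ; do
    bzcat "$archive" | tar xf - -C restore
done

cat restore/boot/vmlinuz             # prints kernel-v2, the newest copy
```

In a real restore the archives come from the daily directories on your media, and you would run the loop from / (for a direct restore) or from a temporary directory rather than a scratch directory.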
If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot instead, just change directories first. In this case, you'd execute the tar command from within /tmp instead of /.

root:/tmp# bzcat boot.tar.bz2 | tar xvf -

Again, use zcat or just cat as appropriate. For more information, you might want to check out the manpage or GNU info documentation for the tar command.

Partial Restore

Most users will need to do a partial restore much more frequently than a full restore. Perhaps you accidentally removed your home directory, or forgot to check in some version of a file before deleting it. Or, perhaps the person who packaged Apache for your system blew away your web server configuration on upgrade (it happens). The solution to these and other kinds of problems is a partial restore (assuming you've backed up the proper things). The procedure is similar to a full restore. The specific steps depend on how much information you have about the file you are looking for. Whereas with a full restore you can confidently extract the full backup followed by each of the incremental backups, this might not be what you want when doing a partial restore. You may need to take more care in finding the right version of a file, since the same file, if changed frequently, would appear in more than one backup. Start by finding the backup media that contains the file you are looking for. If you rotate your backup media, and your last known "contact" with the file was a while ago, you may need to look on older media to find it. This may take some effort if you are not sure when the change you are trying to correct took place. Once you have decided to look at a particular piece of backup media, find the correct peer (host), and look for the file in the full backup:

root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file

Of course, use zcat or just cat, depending on what kind of compression is in use.
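Before extracting anything, it can help to scan every candidate archive, full and incrementals alike, for the file you lost. A small sketch, again using throwaway archives built in place (the names are illustrative, not Cedar Backup's own):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in full and incremental archives that both contain etc/passwd.
mkdir -p etc
echo "orig" > etc/passwd
tar cjf full-etc.tar.bz2 etc
echo "changed" > etc/passwd
tar cjf incr-etc.tar.bz2 etc
rm -rf etc

# List (tf) rather than extract; note the relative member name.
for archive in *-etc.tar.bz2 ; do
    if bzcat "$archive" | tar tf - etc/passwd >/dev/null 2>&1 ; then
        echo "$archive" >> matches.txt
    fi
done
cat matches.txt
```

Each archive that lists the member is a candidate; you would then extract from whichever one holds the version you want.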
The tvf option tells tar to search for the file in question and just list the results rather than extracting the file. Note that the filename is relative (with no starting /). Alternatively, you can omit the path/to/file and search through the output using more or less. If you haven't found what you are looking for, work your way through the incremental files for the directory in question. One of them may also have the file if it changed during the course of the backup. Or, move to older or newer media and see if you can find the file there. Once you have found your file, extract it using xvf:

root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file

Again, use zcat or just cat as appropriate. Inspect the file and make sure it's what you're looking for. Again, you may need to move to older or newer media to find the exact version of your file. For more information, you might want to check out the manpage or GNU info documentation for the tar command.

Recovering MySQL Data

MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

Warning: I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it! MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

First, find the backup you are interested in. If you have specified "all databases" in configuration, you will have a single backup file, called mysqldump.txt.
If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. If you are restoring an "all databases" backup, make sure that you have correctly created the root user and know its password. Then, execute:

daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root

Of course, use zcat or just cat, depending on what kind of compression is in use. Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them. If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root

Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database

Again, use zcat or just cat as appropriate. For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.

Recovering Subversion Data

Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.
The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, e.g. perhaps 783-785, followed by 786-800, etc.

Next, if you still have the old Subversion repository around, you might want to just move it off (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is "backend-agnostic".

root:/tmp# svnadmin create --fs-type=fsfs testrepo

Next, load the full backup into the repository:

root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo

Of course, use zcat or just cat, depending on what kind of compression is in use. Follow that with loads for each of the incremental backups:

root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo

Again, use zcat or just cat as appropriate. When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).
Note: don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both old and new repositories, the results are identical. This means that the repositories do contain the same content. For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).

Recovering Mailbox Data

Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will create either full or incremental backups, but both kinds of backups are treated identically when restoring. Individual mbox files and mbox directories are treated a little differently, since individual files are just compressed, but directories are collected into a tar archive. First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week. The mbox extension creates files of the form mbox-*. Backup files for individual mbox files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. Backup files for mbox directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on what kind of compression you specified in configuration. There is one backup file for each configured mbox file or directory. The backup file name represents the name of the file or directory and the date it was backed up. So, the file mbox-20060624-home-user-mail-greylist represents the backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise, mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail directory run on that same date. Once you have found the files you are looking for, the restoration procedure is fairly simple.
First, concatenate all of the backup files together. Then, use grepmail to eliminate duplicate messages (if any). Here is an example for a single backed-up file:

root:/tmp# rm restore.mbox  # make sure it's not left over
root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
root:/tmp# grepmail -a -u restore.mbox > nodups.mbox

At this point, nodups.mbox contains all of the backed-up messages from /home/user/mail/greylist. Of course, if your backups are compressed, you'll have to use zcat or bzcat rather than just cat. If you are backing up mbox directories rather than individual files, see the filesystem instructions for notes on how to extract the individual files from inside tar archives. Extract the files you are interested in, and then concatenate them together just as shown above for the individual case.

Recovering Data split by the Split Extension

The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command. The split-up files are not difficult to work with. Simply find all of the files (which could be split between multiple discs) and concatenate them together.

root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz

Then, use the resulting file as usual. Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
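The reassembly can be checked mechanically: concatenate the chunks in order, then compare the result against the original (or a stored checksum). A sketch using a stand-in file, with split(1) producing the chunks; note that split's numeric suffixes start at 00 here, whereas the extension's suffixes start at _00001, so the naming is merely illustrative:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in "large file" (just random bytes; the .tar.gz name is cosmetic).
head -c 300000 /dev/urandom > software.tar.gz

# Split into 100 KB chunks with numeric suffixes: software.tar.gz_00, _01, _02.
split -b 100000 -d software.tar.gz software.tar.gz_

# Reassemble by concatenating every chunk in suffix order.
rm -f rebuilt.tar.gz
for chunk in software.tar.gz_* ; do
    cat "$chunk" >> rebuilt.tar.gz
done

# cmp exits nonzero if any chunk was missing or out of order.
cmp software.tar.gz rebuilt.tar.gz && echo "reassembly OK"
```

If you don't have the original to compare against, a checksum recorded at backup time (for example, with md5sum) serves the same purpose.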
Appendix D. Securing Password-less SSH Connections

Cedar Backup relies on password-less public key SSH connections to make various parts of its backup process work. Password-less scp is used to stage files from remote clients to the master, and password-less ssh is used to execute actions on managed clients. Normally, it is a good idea to avoid password-less SSH connections in favor of using an SSH agent. The SSH agent manages your SSH connections so that you don't need to type your passphrase over and over. You get most of the benefits of a password-less connection without the risk. Unfortunately, because Cedar Backup has to execute without human involvement (through a cron job), use of an agent really isn't feasible. We have to rely on true password-less public keys to give the master access to the client peers. Traditionally, Cedar Backup has relied on a "segmenting" strategy to minimize the risk. Although the backup typically runs as root (so that all parts of the filesystem can be backed up), we don't use the root user for network connections. Instead, we use a dedicated backup user on the master to initiate network connections, and dedicated users on each of the remote peers to accept network connections. With this strategy in place, an attacker with access to the backup user on the master (or even root access, really) can at best only get access to the backup user on the remote peers. We still concede a local attack vector, but at least that vector is restricted to an unprivileged user. Some Cedar Backup users may not be comfortable with this risk, and others may not be able to implement the segmentation strategy; they simply may not have a way to create a login which is only used for backups. So, what are these users to do? Fortunately there is a solution. The SSH authorized keys file supports a way to put a "filter" in place on an SSH connection.
This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man 8 sshd:

command="command"
    Specifies that the command is executed whenever this key is used for authentication. The command supplied by the user (if any) is ignored. The command is run on a pty if the client requests a pty; otherwise it is run without a tty. If an 8-bit clean channel is required, one must not request a pty or should specify no-pty. A quote may be included in the command by quoting it with a backslash. This option might be useful to restrict certain public keys to perform just a specific operation. An example might be a key that permits remote backups but nothing else. Note that the client may specify TCP and/or X11 forwarding unless they are explicitly prohibited. Note that this option applies to shell, command or subsystem execution.

Essentially, this gives us a way to authenticate the commands that are being executed. We can either accept or reject commands, and we can even provide a readable error message for commands we reject. The filter is applied on the remote peer, to the key that provides the master access to the remote peer. So, let's imagine that we have two hosts: master "mickey" and peer "minnie". Here is the original ~/.ssh/authorized_keys file for the backup user on minnie (remember, this is all on one line in the file):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

This line is the public key that minnie can use to identify the backup user on mickey. Assuming that there is no passphrase on the private key back on mickey, the backup user on mickey can get direct access to minnie.
To put the filter in place, we add a command option to the key, like this (again, all on one line in the file):

command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey

Basically, the command option says that whenever this key is used to successfully initiate a connection, the /opt/backup/validate-backup command will be run instead of the real command that came over the SSH connection. Fortunately, the interface gives the command access to certain shell variables that can be used to invoke the original command if you want to. A very basic validate-backup script might look something like this:

#!/bin/bash
if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
    ${SSH_ORIGINAL_COMMAND}
else
    echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
    exit 1
fi

This script allows exactly ls -l and nothing else. If the user attempts some other command, they get a nice error message telling them that their command has been disallowed. For remote commands executed over ssh, the original command is exactly what the caller attempted to invoke. For remote copies, the commands are either scp -f file (copy from the peer to the master) or scp -t file (copy to the peer from the master). If you want, you can see what command SSH thinks it is executing by using ssh -v or scp -v. The command will be right at the top, something like this:

Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
debug1: Reading configuration data /home/backup/.ssh/config
debug1: Applying options for daystrom
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0

Omit the -v and you have your command: scp -f .profile.
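A slightly fuller filter might allow the handful of scp commands that staging needs. This is only a sketch, written as a shell function so the accept/reject logic can be exercised directly: the collect path is a placeholder, the function name is mine, and a real validate-backup script would exec the approved ${SSH_ORIGINAL_COMMAND} rather than just reporting it:

```shell
# Emulate the authorized_keys command= filter as a function for demonstration.
validate_backup() {
    collect="/path/to/collect"    # placeholder; use the peer's real path
    case "$1" in
        "scp -f ${collect}/cback.collect" | \
        "scp -f ${collect}/"* | \
        "scp -t ${collect}/cback.stage")
            # A real filter would run: ${SSH_ORIGINAL_COMMAND}
            echo "allowed: $1"
            return 0 ;;
        *)
            echo "Security policy does not allow command [$1]." >&2
            return 1 ;;
    esac
}

validate_backup "scp -f /path/to/collect/etc.tar.bz2"   # accepted
validate_backup "bash -i" || true                       # rejected
```

A case statement scales better than a chain of if/elif tests as the list of permitted commands grows, and the final * arm guarantees everything unlisted is rejected.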
For a normal, non-managed setup, you need to allow the following commands, where /path/to/collect/ is replaced with the real path to the collect directory on the remote peer:

scp -f /path/to/collect/cback.collect
scp -f /path/to/collect/*
scp -t /path/to/collect/cback.stage

If you are configuring a managed client, then you also need to list the exact command lines that the master will be invoking on the managed client. You are guaranteed that the master will invoke one action at a time, so if you list two lines per action (full and non-full) you should be fine. Here's an example for the collect action:

/usr/bin/cback --full collect
/usr/bin/cback collect

Of course, you would have to list the actual path to the cback executable, exactly the one listed in the configuration option for your managed peer. I hope that there is enough information here for interested users to implement something that makes them comfortable. I have resisted providing a complete example script, because I think everyone's setup will be different. However, feel free to write if you are working through this and you have questions.

Appendix E. Copyright

Copyright (c) 2004-2011,2013-2015 Kenneth J. Pronovici

This work is free; you can redistribute it and/or modify it under the terms of the GNU General Public License (the "GPL"), Version 2, as published by the Free Software Foundation. For the purposes of the GPL, the "preferred form of modification" for this work is the original Docbook XML text files. If you choose to distribute this work in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents based on the original Docbook XML text files), you must also consider image files to be "source code" if those images are required in order to construct a complete and readable compiled version of the work. This work is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Copies of the GNU General Public License are available from the Free Software Foundation website, http://www.gnu.org/. You may also write the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA ==================================================================== GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. 
You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. 
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. 
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

  5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

  7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

  10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

			    NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

		     END OF TERMS AND CONDITIONS

====================================================================

CedarBackup2-2.26.5/doc/manual/ch06.html

    Chapter 6. Official Extensions

    System Information Extension

    The System Information Extension is a simple Cedar Backup extension used to save off important system recovery information that might be useful when reconstructing a broken system. It is intended to be run either immediately before or immediately after the standard collect action.

    This extension saves the following information to the configured Cedar Backup collect directory. The saved data is always compressed using bzip2.

    • Currently-installed Debian packages via dpkg --get-selections

    • Disk partition information via fdisk -l

    • System-wide mounted filesystem contents, via ls -laR

    The Debian-specific information is only collected on systems where /usr/bin/dpkg exists.
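    The collection steps above can be sketched in Python. This is an illustrative sketch only, not the actual sysinfo extension code; the function names are hypothetical, and note that fdisk -l typically requires root privileges.

```python
import bz2
import os
import subprocess

def save_command_output(command, collect_dir, name):
    """Run a command and write its bzip2-compressed stdout to the collect directory."""
    result = subprocess.run(command, capture_output=True, check=True)
    target = os.path.join(collect_dir, name + ".txt.bz2")
    with bz2.open(target, "wb") as f:
        f.write(result.stdout)
    return target

def collect_system_info(collect_dir):
    """Gather the three pieces of recovery information described above."""
    saved = []
    if os.path.exists("/usr/bin/dpkg"):  # Debian-specific data only where dpkg exists
        saved.append(save_command_output(["dpkg", "--get-selections"],
                                         collect_dir, "dpkg-selections"))
    saved.append(save_command_output(["fdisk", "-l"], collect_dir, "fdisk"))
    saved.append(save_command_output(["ls", "-laR", "/"], collect_dir, "ls"))
    return saved
```

    Each command's output lands in the collect directory as its own bzip2-compressed file, so the regular store action picks it up along with the other collected data.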

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>sysinfo</name>
          <module>CedarBackup2.extend.sysinfo</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, but requires no new configuration of its own.

    CedarBackup2-2.26.5/doc/manual/ch02s02.html

    Data Recovery

    Cedar Backup does not include any facility to restore backups. Instead, it assumes that the administrator (using the procedures and references in Appendix C, Data Recovery) can handle the task of restoring their own system, using the standard system tools at hand.

    If I were to maintain recovery code in Cedar Backup, I would almost certainly end up in one of two situations. Either Cedar Backup would only support simple recovery tasks, and those via an interface a lot like that of the underlying system tools; or Cedar Backup would have to include a hugely complicated interface to support more specialized (and hence useful) recovery tasks like restoring individual files as of a certain point in time. In either case, I would end up trying to maintain critical functionality that would be rarely used, and hence would also be rarely tested by end-users. I am uncomfortable asking anyone to rely on functionality that falls into this category.

    My primary goal is to keep the Cedar Backup codebase as simple and focused as possible. I hope you can understand how the choice of providing documentation, but not code, seems to strike the best balance between managing code complexity and providing the functionality that end-users need.

    CedarBackup2-2.26.5/doc/manual/apcs06.html

    Recovering Data split by the Split Extension

    The Split extension takes large files and splits them up into smaller files. Typically, it would be used in conjunction with the cback-span command.

    The split-up files are not difficult to work with. Simply find all of the files — which could be split between multiple discs — and concatenate them together.

    root:/tmp# rm usr-src-software.tar.gz  # make sure it's not there
    root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
    root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
          

    Then, use the resulting file as usual.

    Remember, you need to have all of the files that the original large file was split into before this will work. If you are missing a file, the result of the concatenation step will be either a corrupt file or a truncated file (depending on which chunks you did not include).
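    The manual cat steps above generalize to a short loop. Here is a minimal sketch (the reassemble helper is hypothetical, not part of Cedar Backup); per the warning above, it cannot detect a missing middle chunk, so verify you have all the pieces first.

```python
import glob

def reassemble(base):
    """Concatenate base_00001, base_00002, ... back into base.

    Lexical sorting is safe because the numeric suffixes are zero-padded
    to a fixed width. A missing middle chunk is NOT detected here.
    """
    chunks = sorted(glob.glob(glob.escape(base) + "_*"))
    if not chunks:
        raise FileNotFoundError("no chunks found for %s" % base)
    with open(base, "wb") as out:
        for chunk in chunks:
            with open(chunk, "rb") as f:
                out.write(f.read())
    return chunks
```

    For example, reassemble("/tmp/usr-src-software.tar.gz") would concatenate every /tmp/usr-src-software.tar.gz_NNNNN chunk it finds, in order.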

    CedarBackup2-2.26.5/doc/manual/ch04.html

    Chapter 4. Command Line Tools

    Overview

    Cedar Backup comes with three command-line programs: cback, cback-amazons3-sync, and cback-span.

    The cback command is the primary command line interface and the only Cedar Backup program that most users will ever need.

    The cback-amazons3-sync tool is used for synchronizing entire directories of files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar Backup process.

    Users who have a lot of data to back up — more than will fit on a single CD or DVD — can use the interactive cback-span tool to split their data between multiple discs.

    CedarBackup2-2.26.5/doc/manual/ch01s02.html

    Migrating from Version 2 to Version 3

    The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. Cedar Backup version 2 was designed for Python 2, while version 3 is a conversion of the original code to Python 3. Other than that, both versions are functionally equivalent. The configuration format is unchanged, and you can mix-and-match masters and clients of different versions in the same backup pool. Both versions will be fully supported until around the time of the Python 2 end-of-life in 2020, but you should plan to migrate sooner than that if possible.

    A major design goal for version 3 was to facilitate easy migration testing for users, by making it possible to install version 3 on the same server where version 2 was already in use. A side effect of this design choice is that all of the executables, configuration files, and logs changed names in version 3. Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of cback.conf, cback3.log instead of cback.log, etc.

    So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup.

    CedarBackup2-2.26.5/doc/manual/ch02s08.html

    Incremental Backups

    Cedar Backup supports three different kinds of backups for individual collect directories. These are daily, weekly and incremental backups. Directories using the daily mode are backed up every day. Directories using the weekly mode are only backed up on the first day of the week, or when the --full option is used. Directories using the incremental mode are always backed up on the first day of the week (like a weekly backup), but after that only the files which have changed are actually backed up on a daily basis.

    In Cedar Backup, incremental backups are not based on date, but are instead based on saved checksums, one for each backed-up file. When a full backup is run, Cedar Backup gathers a checksum value [14] for each backed-up file. The next time an incremental backup is run, Cedar Backup checks its list of file/checksum pairs for each file that might be backed up. If the file's checksum value does not match the saved value, or if the file does not appear in the list of file/checksum pairs, then it will be backed up and a new checksum value will be placed into the list. Otherwise, the file will be ignored and the checksum value will be left unchanged.

    Cedar Backup stores the file/checksum pairs in .sha files in its working directory, one file per configured collect directory. The mappings in these files are reset at the start of the week or when the --full option is used. Because these files are used for an entire week, you should never purge the working directory more frequently than once per week.
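    The decision logic described above can be sketched as follows. This is an illustrative sketch of the technique, not Cedar Backup's actual implementation; the saved dictionary stands in for the on-disk .sha file.

```python
import hashlib

def sha_digest(path):
    """Compute the SHA-1 digest of a file, reading in blocks."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()

def needs_backup(path, saved):
    """Return True (and record the new digest) if the file is new or changed."""
    current = sha_digest(path)
    if saved.get(path) == current:
        return False        # unchanged since the last backup: skip it
    saved[path] = current   # new or changed: back it up and save the digest
    return True
```

    Resetting the mapping at the start of the week (emptying the dictionary, in this sketch) is what forces the weekly full backup: with no saved digests, every file looks new again.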



    [14] The checksum is actually an SHA cryptographic hash. See Wikipedia for more information: http://en.wikipedia.org/wiki/SHA-1.

    CedarBackup2-2.26.5/doc/manual/ch06s03.html

    Subversion Extension

    The Subversion Extension is a Cedar Backup extension used to back up Subversion [25] version control repositories via the Cedar Backup command line. It is intended to be run either immediately before or immediately after the standard collect action.

    Each configured Subversion repository can be backed up using the same collect modes allowed for filesystems in the standard Cedar Backup collect action (weekly, daily, incremental), and the output can be compressed using either gzip or bzip2.

    There are two different kinds of Subversion repositories as of this writing: BDB (Berkeley Database) and FSFS (a "filesystem within a filesystem"). This extension backs up both kinds of repositories in the same way, using svnadmin dump in an incremental mode.
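    A minimal sketch of that approach, assuming svnadmin is on the PATH; this is illustrative only, not the extension's actual code, and it omits the incremental revision bookkeeping (a real incremental dump would pass --revision with the last dumped revision).

```python
import gzip
import subprocess

def compress_command_output(command, target):
    """Run a command and stream its gzip-compressed stdout to target."""
    proc = subprocess.Popen(command, stdout=subprocess.PIPE)
    with gzip.open(target, "wb") as out:
        for block in iter(lambda: proc.stdout.read(65536), b""):
            out.write(block)
    proc.stdout.close()
    if proc.wait() != 0:
        raise RuntimeError("command failed: %s" % " ".join(command))
    return target

def dump_repository(repo_path, target):
    """Dump a whole Subversion repository (BDB or FSFS) to a gzipped file."""
    return compress_command_output(["svnadmin", "dump", "--quiet", repo_path], target)
```

    Streaming the dump through gzip rather than buffering it in memory matters here, since repository dumps can be large.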

    It turns out that FSFS repositories can also be backed up just like any other filesystem directory. If you would rather do the backup that way, then use the normal collect action rather than this extension. If you decide to do that, be sure to consult the Subversion documentation and make sure you understand the limitations of this kind of backup.

    To enable this extension, add the following section to the Cedar Backup configuration file:

    <extensions>
       <action>
          <name>subversion</name>
          <module>CedarBackup2.extend.subversion</module>
          <function>executeAction</function>
          <index>99</index>
       </action>
    </extensions>
          

    This extension relies on the options and collect configuration sections in the standard Cedar Backup configuration file, and then also requires its own subversion configuration section. This is an example Subversion configuration section:

    <subversion>
       <collect_mode>incr</collect_mode>
       <compress_mode>bzip2</compress_mode>
       <repository>
          <abs_path>/opt/public/svn/docs</abs_path>
       </repository>
       <repository>
          <abs_path>/opt/public/svn/web</abs_path>
          <compress_mode>gzip</compress_mode>
       </repository>
       <repository_dir>
          <abs_path>/opt/private/svn</abs_path>
          <collect_mode>daily</collect_mode>
       </repository_dir>
    </subversion>
          

    The following elements are part of the Subversion configuration section:

    collect_mode

    Default collect mode.

    The collect mode describes how frequently a Subversion repository is backed up. The Subversion extension recognizes the same collect modes as the standard Cedar Backup collect action (see Chapter2, Basic Concepts).

    This value is the collect mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Note: if your backup device does not support multisession discs, then you should probably use the daily collect mode to avoid losing data.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Default compress mode.

    Subversion repository backups are just specially-formatted text files, and often compress quite well using gzip or bzip2. The compress mode describes how the backed-up data will be compressed, if at all.

    This value is the compress mode that will be used by default during the backup process. Individual repositories (below) may override this value. If all individual repositories provide their own value, then this default value may be omitted from configuration.

    Restrictions: Must be one of none, gzip or bzip2.

    repository

    A Subversion repository to be collected.

    This is a subsection which contains information about a specific Subversion repository to be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion repository to back up.

    Restrictions: Must be an absolute path.

    repository_dir

    A Subversion parent repository directory to be collected.

    This is a subsection which contains information about a Subversion parent repository directory to be backed up. Any subdirectory immediately within this directory is assumed to be a Subversion repository, and will be backed up.

    This section can be repeated as many times as is necessary. At least one repository or repository directory must be configured.

    The repository_dir subsection contains the following fields:

    collect_mode

    Collect mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default collect mode.

    Restrictions: Must be one of daily, weekly or incr.

    compress_mode

    Compress mode for this repository.

    This field is optional. If it doesn't exist, the backup will use the default compress mode.

    Restrictions: Must be one of none, gzip or bzip2.

    abs_path

    Absolute path of the Subversion parent repository directory to back up.

    Restrictions: Must be an absolute path.

    exclude

    List of paths or patterns to exclude from the backup.

    This is a subsection which contains a set of paths and patterns to be excluded within this Subversion parent directory.

    This section is entirely optional, and if it exists can also be empty.

    The exclude subsection can contain one or more of each of the following fields:

    rel_path

    A relative path to be excluded from the backup.

    The path is assumed to be relative to the Subversion parent directory itself. For instance, if the configured Subversion parent directory is /opt/svn, a configured relative path of software would exclude the path /opt/svn/software.

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.

    pattern

    A pattern to be excluded from the backup.

    The pattern must be a Python regular expression. [21] It is assumed to be bounded at front and back by the beginning and end of the string (i.e. it is treated as if it begins with ^ and ends with $).

    This field can be repeated as many times as is necessary.

    Restrictions: Must be non-empty.
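    The anchoring rule can be illustrated with a short sketch (the helper name is hypothetical): because the pattern is treated as bounded by ^ and $, it must match the entire relative path, not just a substring of it.

```python
import re

def pattern_excludes(pattern, rel_path):
    """True if the configured pattern matches the whole relative path."""
    return re.fullmatch(pattern, rel_path) is not None
```

    For example, the pattern tmp excludes only a path named exactly tmp, while tmp.* also excludes everything underneath it.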

    CedarBackup2-2.26.5/doc/manual/ch03.html

    Chapter 3. Installation

    Background

    There are two different ways to install Cedar Backup. The easiest way is to install the pre-built Debian packages. This method is painless and ensures that all of the correct dependencies are available, etc.

    If you are running a Linux distribution other than Debian or you are running some other platform like FreeBSD or Mac OS X, then you must use the Python source distribution to install Cedar Backup. When using this method, you need to manage all of the dependencies yourself.

    CedarBackup2-2.26.5/doc/manual/pr01.html

    Preface

    Purpose

    This software manual has been written to document version 2 of Cedar Backup, originally released in early 2005.

    CedarBackup2-2.26.5/doc/manual/apcs03.html

    Recovering MySQL Data

    MySQL data is gathered by the Cedar Backup mysql extension. This extension always creates a full backup each time it runs. This wastes some space, but makes it easy to restore database data. The following procedure describes how to restore your MySQL database from the backup.

    Warning

    I am not a MySQL expert. I am providing this information for reference. I have tested these procedures on my own MySQL installation; however, I only have a single database for use by Bugzilla, and I may have misunderstood something with regard to restoring individual databases as a user other than root. If you have any doubts, test the procedure below before relying on it!

    MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and correct any part of this procedure.

    First, find the backup you are interested in. If you have specified all databases in configuration, you will have a single backup file, called mysqldump.txt. If you have specified individual databases in configuration, then you will have files with names like mysqldump-database.txt instead. In either case, your file might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration.

    If you are restoring an all databases backup, make sure that you have correctly created the root user and know its password. Then, execute:

    daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Because the database backup includes CREATE DATABASE SQL statements, this command should take care of creating all of the databases within the backup, as well as populating them.

    If you are restoring a backup for a specific database, you have two choices. If you have a root login, you can use the same command as above:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
          

    Otherwise, you can create the database and its login first (or have someone create it) and then use a database-specific login to execute the restore:

    daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
          

    Again, use zcat or just cat as appropriate.

    For more information on using MySQL, see the documentation on the MySQL web site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.

    CedarBackup2-2.26.5/doc/manual/ape.html

    Appendix E. Copyright

    
    Copyright (c) 2004-2011,2013-2015
    Kenneth J. Pronovici
    
    This work is free; you can redistribute it and/or modify it under
    the terms of the GNU General Public License (the "GPL"), Version 2,
    as published by the Free Software Foundation.
    
    For the purposes of the GPL, the "preferred form of modification"
    for this work is the original Docbook XML text files.  If you
    choose to distribute this work in a compiled form (i.e. if you
    distribute HTML, PDF or Postscript documents based on the original
    Docbook XML text files), you must also consider image files to be
    "source code" if those images are required in order to construct a
    complete and readable compiled version of the work.
    
    This work is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    
    Copies of the GNU General Public License are available from
    the Free Software Foundation website, http://www.gnu.org/.
    You may also write the Free Software Foundation, Inc., 
    51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
    
    ====================================================================
    
    		    GNU GENERAL PUBLIC LICENSE
    		       Version 2, June 1991
    
     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
         51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.
    
    			    Preamble
    
      The licenses for most software are designed to take away your
    freedom to share and change it.  By contrast, the GNU General Public
    License is intended to guarantee your freedom to share and change free
    software--to make sure the software is free for all its users.  This
    General Public License applies to most of the Free Software
    Foundation's software and to any other program whose authors commit to
    using it.  (Some other Free Software Foundation software is covered by
    the GNU Library General Public License instead.)  You can apply it to
    your programs, too.
    
      When we speak of free software, we are referring to freedom, not
    price.  Our General Public Licenses are designed to make sure that you
    have the freedom to distribute copies of free software (and charge for
    this service if you wish), that you receive source code or can get it
    if you want it, that you can change the software or use pieces of it
    in new free programs; and that you know you can do these things.
    
      To protect your rights, we need to make restrictions that forbid
    anyone to deny you these rights or to ask you to surrender the rights.
    These restrictions translate to certain responsibilities for you if you
    distribute copies of the software, or if you modify it.
    
      For example, if you distribute copies of such a program, whether
    gratis or for a fee, you must give the recipients all the rights that
    you have.  You must make sure that they, too, receive or can get the
    source code.  And you must show them these terms so they know their
    rights.
    
      We protect your rights with two steps: (1) copyright the software, and
    (2) offer you this license which gives you legal permission to copy,
    distribute and/or modify the software.
    
      Also, for each author's protection and ours, we want to make certain
    that everyone understands that there is no warranty for this free
    software.  If the software is modified by someone else and passed on, we
    want its recipients to know that what they have is not the original, so
    that any problems introduced by others will not reflect on the original
    authors' reputations.
    
      Finally, any free program is threatened constantly by software
    patents.  We wish to avoid the danger that redistributors of a free
    program will individually obtain patent licenses, in effect making the
    program proprietary.  To prevent this, we have made it clear that any
    patent must be licensed for everyone's free use or not licensed at all.
    
      The precise terms and conditions for copying, distribution and
    modification follow.
    
    		    GNU GENERAL PUBLIC LICENSE
       TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
    
      0. This License applies to any program or other work which contains
    a notice placed by the copyright holder saying it may be distributed
    under the terms of this General Public License.  The "Program", below,
    refers to any such program or work, and a "work based on the Program"
    means either the Program or any derivative work under copyright law:
    that is to say, a work containing the Program or a portion of it,
    either verbatim or with modifications and/or translated into another
    language.  (Hereinafter, translation is included without limitation in
    the term "modification".)  Each licensee is addressed as "you".
    
    Activities other than copying, distribution and modification are not
    covered by this License; they are outside its scope.  The act of
    running the Program is not restricted, and the output from the Program
    is covered only if its contents constitute a work based on the
    Program (independent of having been made by running the Program).
    Whether that is true depends on what the Program does.
    
      1. You may copy and distribute verbatim copies of the Program's
    source code as you receive it, in any medium, provided that you
    conspicuously and appropriately publish on each copy an appropriate
    copyright notice and disclaimer of warranty; keep intact all the
    notices that refer to this License and to the absence of any warranty;
    and give any other recipients of the Program a copy of this License
    along with the Program.
    
    You may charge a fee for the physical act of transferring a copy, and
    you may at your option offer warranty protection in exchange for a fee.
    
      2. You may modify your copy or copies of the Program or any portion
    of it, thus forming a work based on the Program, and copy and
    distribute such modifications or work under the terms of Section 1
    above, provided that you also meet all of these conditions:
    
        a) You must cause the modified files to carry prominent notices
        stating that you changed the files and the date of any change.
    
        b) You must cause any work that you distribute or publish, that in
        whole or in part contains or is derived from the Program or any
        part thereof, to be licensed as a whole at no charge to all third
        parties under the terms of this License.
    
        c) If the modified program normally reads commands interactively
        when run, you must cause it, when started running for such
        interactive use in the most ordinary way, to print or display an
        announcement including an appropriate copyright notice and a
        notice that there is no warranty (or else, saying that you provide
        a warranty) and that users may redistribute the program under
        these conditions, and telling the user how to view a copy of this
        License.  (Exception: if the Program itself is interactive but
        does not normally print such an announcement, your work based on
        the Program is not required to print an announcement.)
    
    These requirements apply to the modified work as a whole.  If
    identifiable sections of that work are not derived from the Program,
    and can be reasonably considered independent and separate works in
    themselves, then this License, and its terms, do not apply to those
    sections when you distribute them as separate works.  But when you
    distribute the same sections as part of a whole which is a work based
    on the Program, the distribution of the whole must be on the terms of
    this License, whose permissions for other licensees extend to the
    entire whole, and thus to each and every part regardless of who wrote it.
    
    Thus, it is not the intent of this section to claim rights or contest
    your rights to work written entirely by you; rather, the intent is to
    exercise the right to control the distribution of derivative or
    collective works based on the Program.
    
    In addition, mere aggregation of another work not based on the Program
    with the Program (or with a work based on the Program) on a volume of
    a storage or distribution medium does not bring the other work under
    the scope of this License.
    
      3. You may copy and distribute the Program (or a work based on it,
    under Section 2) in object code or executable form under the terms of
    Sections 1 and 2 above provided that you also do one of the following:
    
        a) Accompany it with the complete corresponding machine-readable
        source code, which must be distributed under the terms of Sections
        1 and 2 above on a medium customarily used for software interchange; or,
    
        b) Accompany it with a written offer, valid for at least three
        years, to give any third party, for a charge no more than your
        cost of physically performing source distribution, a complete
        machine-readable copy of the corresponding source code, to be
        distributed under the terms of Sections 1 and 2 above on a medium
        customarily used for software interchange; or,
    
        c) Accompany it with the information you received as to the offer
        to distribute corresponding source code.  (This alternative is
        allowed only for noncommercial distribution and only if you
        received the program in object code or executable form with such
        an offer, in accord with Subsection b above.)
    
    The source code for a work means the preferred form of the work for
    making modifications to it.  For an executable work, complete source
    code means all the source code for all modules it contains, plus any
    associated interface definition files, plus the scripts used to
    control compilation and installation of the executable.  However, as a
    special exception, the source code distributed need not include
    anything that is normally distributed (in either source or binary
    form) with the major components (compiler, kernel, and so on) of the
    operating system on which the executable runs, unless that component
    itself accompanies the executable.
    
    If distribution of executable or object code is made by offering
    access to copy from a designated place, then offering equivalent
    access to copy the source code from the same place counts as
    distribution of the source code, even though third parties are not
    compelled to copy the source along with the object code.
    
      4. You may not copy, modify, sublicense, or distribute the Program
    except as expressly provided under this License.  Any attempt
    otherwise to copy, modify, sublicense or distribute the Program is
    void, and will automatically terminate your rights under this License.
    However, parties who have received copies, or rights, from you under
    this License will not have their licenses terminated so long as such
    parties remain in full compliance.
    
      5. You are not required to accept this License, since you have not
    signed it.  However, nothing else grants you permission to modify or
    distribute the Program or its derivative works.  These actions are
    prohibited by law if you do not accept this License.  Therefore, by
    modifying or distributing the Program (or any work based on the
    Program), you indicate your acceptance of this License to do so, and
    all its terms and conditions for copying, distributing or modifying
    the Program or works based on it.
    
      6. Each time you redistribute the Program (or any work based on the
    Program), the recipient automatically receives a license from the
    original licensor to copy, distribute or modify the Program subject to
    these terms and conditions.  You may not impose any further
    restrictions on the recipients' exercise of the rights granted herein.
    You are not responsible for enforcing compliance by third parties to
    this License.
    
      7. If, as a consequence of a court judgment or allegation of patent
    infringement or for any other reason (not limited to patent issues),
    conditions are imposed on you (whether by court order, agreement or
    otherwise) that contradict the conditions of this License, they do not
    excuse you from the conditions of this License.  If you cannot
    distribute so as to satisfy simultaneously your obligations under this
    License and any other pertinent obligations, then as a consequence you
    may not distribute the Program at all.  For example, if a patent
    license would not permit royalty-free redistribution of the Program by
    all those who receive copies directly or indirectly through you, then
    the only way you could satisfy both it and this License would be to
    refrain entirely from distribution of the Program.
    
    If any portion of this section is held invalid or unenforceable under
    any particular circumstance, the balance of the section is intended to
    apply and the section as a whole is intended to apply in other
    circumstances.
    
    It is not the purpose of this section to induce you to infringe any
    patents or other property right claims or to contest validity of any
    such claims; this section has the sole purpose of protecting the
    integrity of the free software distribution system, which is
    implemented by public license practices.  Many people have made
    generous contributions to the wide range of software distributed
    through that system in reliance on consistent application of that
    system; it is up to the author/donor to decide if he or she is willing
    to distribute software through any other system and a licensee cannot
    impose that choice.
    
    This section is intended to make thoroughly clear what is believed to
    be a consequence of the rest of this License.
    
      8. If the distribution and/or use of the Program is restricted in
    certain countries either by patents or by copyrighted interfaces, the
    original copyright holder who places the Program under this License
    may add an explicit geographical distribution limitation excluding
    those countries, so that distribution is permitted only in or among
    countries not thus excluded.  In such case, this License incorporates
    the limitation as if written in the body of this License.
    
      9. The Free Software Foundation may publish revised and/or new versions
    of the General Public License from time to time.  Such new versions will
    be similar in spirit to the present version, but may differ in detail to
    address new problems or concerns.
    
    Each version is given a distinguishing version number.  If the Program
    specifies a version number of this License which applies to it and "any
    later version", you have the option of following the terms and conditions
    either of that version or of any later version published by the Free
    Software Foundation.  If the Program does not specify a version number of
    this License, you may choose any version ever published by the Free Software
    Foundation.
    
      10. If you wish to incorporate parts of the Program into other free
    programs whose distribution conditions are different, write to the author
    to ask for permission.  For software which is copyrighted by the Free
    Software Foundation, write to the Free Software Foundation; we sometimes
    make exceptions for this.  Our decision will be guided by the two goals
    of preserving the free status of all derivatives of our free software and
    of promoting the sharing and reuse of software generally.
    
    			    NO WARRANTY
    
      11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
    FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
    OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
    PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
    OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
    MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
    TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
    PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
    REPAIR OR CORRECTION.
    
      12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
    WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
    REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
    INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
    OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
    TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
    YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
    PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGES.
    
    		     END OF TERMS AND CONDITIONS
    
    ====================================================================
    
          

    Appendix C. Data Recovery

    Finding your Data

    The first step in data recovery is finding the data that you want to recover. You need to decide whether you are going to restore from backup media, or from existing staging data that has not yet been purged. The only difference is that if you purge staging data less frequently than once per week, you might have some data available in the staging directories which would not be found on your backup media, depending on how you rotate your media. (And of course, if your system is trashed or stolen, you probably will not have access to your old staging data in any case.)

    Regardless of the data source you choose, you will find the data organized in the same way. The remainder of these examples will work off an example backup disc, but the contents of the staging directory will look pretty much like the contents of the disc, with data organized first by date and then by backup peer name.

    This is the root directory of my example disc:

    root:/mnt/cdrw# ls -l
    total 4
    drwxr-x---  3 backup backup 4096 Sep 01 06:30 2005/
          

    In this root directory is one subdirectory for each year represented in the backup. In this example, the backup represents data entirely from the year 2005. If your configured backup week happens to span a year boundary, there would be two subdirectories here (for example, one for 2005 and one for 2006).

    Within each year directory is one subdirectory for each month represented in the backup.

    root:/mnt/cdrw/2005# ls -l
    total 2
    dr-xr-xr-x  6 root root 2048 Sep 11 05:30 09/
          

    In this example, the backup represents data entirely from the month of September, 2005. If your configured backup week happens to span a month boundary, there would be two subdirectories here (for example, one for August 2005 and one for September 2005).

    Within each month directory is one subdirectory for each day represented in the backup.

    root:/mnt/cdrw/2005/09# ls -l
    total 8
    dr-xr-xr-x  5 root root 2048 Sep  7 05:30 07/
    dr-xr-xr-x  5 root root 2048 Sep  8 05:30 08/
    dr-xr-xr-x  5 root root 2048 Sep  9 05:30 09/
    dr-xr-xr-x  5 root root 2048 Sep 11 05:30 11/
          

    Depending on how far into the week your backup media is, you might have as few as one daily directory in here, or as many as seven.

    Within each daily directory is a stage indicator (indicating when the directory was staged) and one directory for each peer configured in the backup:

    root:/mnt/cdrw/2005/09/07# ls -l
    total 10
    dr-xr-xr-x  2 root root 2048 Sep  7 02:31 host1/
    -r--r--r--  1 root root    0 Sep  7 03:27 cback.stage
    dr-xr-xr-x  2 root root 4096 Sep  7 02:30 host2/
    dr-xr-xr-x  2 root root 4096 Sep  7 03:23 host3/
          

    In this case, you can see that my backup includes three machines, and that the backup data was staged on September 7, 2005 at 03:27.
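    The date-based layout described above is easy to navigate programmatically. Here is a minimal sketch (not part of Cedar Backup itself) that builds the directory holding a given peer's files for a given date:

    ```python
    from datetime import date

    def staging_path(backup_date, peer):
        """Build the YYYY/MM/DD/peer path used on backup media and in staging."""
        return "/".join(["%04d" % backup_date.year,
                         "%02d" % backup_date.month,
                         "%02d" % backup_date.day,
                         peer])

    print(staging_path(date(2005, 9, 7), "host1"))  # 2005/09/07/host1
    ```

    You could use a helper like this to locate the most recent staged data for each peer before starting a restore.
    
    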

    Within the directory for a given host are all of the files collected on that host. This might just include tarfiles from a normal Cedar Backup collect run, and might also include files collected from Cedar Backup extensions or by other third-party processes on your system.

    root:/mnt/cdrw/2005/09/07/host1# ls -l
    total 157976
    -r--r--r--  1 root root 11206159 Sep  7 02:30 boot.tar.bz2
    -r--r--r--  1 root root        0 Sep  7 02:30 cback.collect
    -r--r--r--  1 root root     3199 Sep  7 02:30 dpkg-selections.txt.bz2
    -r--r--r--  1 root root   908325 Sep  7 02:30 etc.tar.bz2
    -r--r--r--  1 root root      389 Sep  7 02:30 fdisk-l.txt.bz2
    -r--r--r--  1 root root  1003100 Sep  7 02:30 ls-laR.txt.bz2
    -r--r--r--  1 root root    19800 Sep  7 02:30 mysqldump.txt.bz2
    -r--r--r--  1 root root  4133372 Sep  7 02:30 opt-local.tar.bz2
    -r--r--r--  1 root root 44794124 Sep  8 23:34 opt-public.tar.bz2
    -r--r--r--  1 root root 30028057 Sep  7 02:30 root.tar.bz2
    -r--r--r--  1 root root  4747070 Sep  7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
    -r--r--r--  1 root root   603863 Sep  7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
    -r--r--r--  1 root root   113484 Sep  7 02:30 var-lib-jspwiki.tar.bz2
    -r--r--r--  1 root root 19556660 Sep  7 02:30 var-log.tar.bz2
    -r--r--r--  1 root root 14753855 Sep  7 02:30 var-mail.tar.bz2
             

    As you can see, I back up a variety of different things on host1. I run the normal collect action, as well as the sysinfo, mysql and subversion extensions. The resulting backup files are named in a way that makes it easy to determine what they represent.

    Files of the form *.tar.bz2 represent directories backed up by the collect action. The first part of the name (before .tar.bz2) represents the path to the directory. For example, boot.tar.bz2 contains data from /boot, and var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.
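    In other words, slashes in the original path become dashes in the tarfile name. A quick sketch of the reverse mapping (not Cedar Backup code, just an illustration of the convention):

    ```python
    def source_directory(tarfile_name):
        """Recover the backed-up directory from a collect tarfile name, per the
        naming convention described above (dashes stand in for slashes).
        Note: the mapping is ambiguous if the original path itself contained
        dashes, so treat the result as a hint, not a guarantee."""
        stem = tarfile_name[:-len(".tar.bz2")]
        return "/" + stem.replace("-", "/")

    print(source_directory("var-lib-jspwiki.tar.bz2"))  # /var/lib/jspwiki
    ```
    
    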

    The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are produced by the sysinfo extension.

    The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a system-wide database dump, because I use the all flag in configuration. If I were to configure Cedar Backup to dump individual databases, then the filename would contain the database name (something like mysqldump-bugs.txt.bz2).

    Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion extension. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.


    Coordination between Master and Clients

    Unless you are using Cedar Backup to manage a pool of one, you will need to set up some coordination between your clients and master to make everything work properly. This coordination isn't difficult — it mostly consists of making sure that operations happen in the right order — but some users are surprised that it is required and want to know why Cedar Backup can't just "take care of it for me".

    Essentially, each client must finish collecting all of its data before the master begins staging it, and the master must finish staging data from a client before that client purges its collected data. Administrators may need to experiment with the time between the collect and purge entries so that the master has enough time to stage data before it is purged.
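    For example, the ordering above might be scheduled with /etc/crontab-style entries like the following. The times shown are illustrative assumptions, not recommendations; adjust them so each step reliably finishes before the next begins:

    ```
    # On each client: collect early, purge only after the master has staged
    30 00 * * * root  cback collect
    30 06 * * * root  cback purge

    # On the master: stage after all clients have collected, then store and purge
    30 02 * * * root  cback stage
    30 04 * * * root  cback store
    30 06 * * * root  cback purge
    ```

    The key constraints are that the master's stage entry runs after every client's collect has finished, and each client's purge entry runs after the master's stage has finished.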


    Recovering Subversion Data

    Subversion data is gathered by the Cedar Backup subversion extension. Cedar Backup will create either full or incremental backups, but the procedure for restoring is the same for both. Subversion backups are always taken on a per-repository basis. If you need to restore more than one repository, follow the procedures below for each repository you are interested in.

    First, find the backup or backups you are interested in. Typically, you will need the full backup from the first day of the week and each incremental backup from the other days of the week.

    The subversion extension creates files of the form svndump-*.txt. These files might have a .gz or .bz2 extension depending on what kind of compression you specified in configuration. There is one dump file for each configured repository, and the dump file name represents the name of the repository and the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell that this file contains a full backup of the repository to this point, because the starting revision is zero. Later incremental backups would have a non-zero starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
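    The naming convention can be unpacked mechanically. As a sketch (an illustration of the convention described above, not Cedar Backup code):

    ```python
    import re

    def parse_svndump_name(filename):
        """Parse a svndump filename into (start_rev, end_rev, repository_path).
        A starting revision of zero indicates a full backup; anything else is
        an incremental. Assumes the repository path contained no dashes."""
        match = re.match(r"svndump-(\d+):(\d+)-(.*?)\.txt(?:\.(?:gz|bz2))?$", filename)
        if match is None:
            raise ValueError("not a svndump file: %s" % filename)
        start, end, path = match.groups()
        return int(start), int(end), "/" + path.replace("-", "/")

    print(parse_svndump_name("svndump-0:782-opt-svn-repo1.txt.bz2"))
    # (0, 782, '/opt/svn/repo1')
    ```
    
    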

    Next, if you still have the old Subversion repository around, you might want to move it out of the way (rename the top-level directory) before executing the restore. Or, you can restore into a temporary directory and rename it later to its real name once you've checked it out. That is what my example below will show.

    Next, you need to create a new Subversion repository to hold the restored data. This example shows an FSFS repository, but that is an arbitrary choice. You can restore from an FSFS backup into an FSFS repository or a BDB repository. The Subversion dump format is backend-agnostic.

    root:/tmp# svnadmin create --fs-type=fsfs testrepo
          

    Next, load the full backup into the repository:

    root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Of course, use zcat or just cat, depending on what kind of compression is in use.

    Follow that with loads for each of the incremental backups:

    root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
    root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
          

    Again, use zcat or just cat as appropriate.

    When this is done, your repository will be restored to the point of the last commit indicated in the svndump file (in this case, to revision 800).
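    When scripting this restore, be careful to load the dump files in numeric revision order; a plain string sort would put svndump-1000:... before svndump-200:.... A small sketch (hypothetical helper, not Cedar Backup code) that orders dump files by starting revision:

    ```python
    import re

    def load_order(dump_files):
        """Order svndump files by starting revision so 'svnadmin load' can be
        run on them sequentially: full backup (revision 0) first, then each
        incremental in turn."""
        def start_revision(name):
            return int(re.match(r"svndump-(\d+):", name).group(1))
        return sorted(dump_files, key=start_revision)

    files = ["svndump-786:800-opt-svn-repo1.txt.bz2",
             "svndump-0:782-opt-svn-repo1.txt.bz2",
             "svndump-783:785-opt-svn-repo1.txt.bz2"]
    print(load_order(files)[0])  # svndump-0:782-opt-svn-repo1.txt.bz2
    ```

    Each file in the resulting order would then be fed through bzcat (or zcat, or cat) into svnadmin load, exactly as in the commands above.
    
    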

    Note

    Don't be surprised if, when you test this, the restored directory doesn't have exactly the same contents as the original directory. I can't explain why this happens, but if you execute svnadmin dump on both old and new repositories, the results are identical. This means that the repositories do contain the same content.

    For more information on using Subversion, see the book Version Control with Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://subversion.tigris.org/faq.html).


    Configuring your Writer Device

    Device Types

    In order to execute the store action, you need to know how to identify your writer device. Cedar Backup supports two types of writer devices: CD writers and DVD writers. DVD writers are always referenced through a filesystem device name (i.e. /dev/dvd). CD writers can be referenced either through a SCSI id or through a filesystem device name. Which you use depends on your operating system and hardware.

    Devices identified by device name

    For all DVD writers, and for CD writers on certain platforms, you will configure your writer device using only a device name. If your writer device works this way, you should just specify <target_device> in configuration. You can either leave <target_scsi_id> blank or remove it completely. The writer device will be used both to write to the device and for filesystem operations — for instance, when the media needs to be mounted to run the consistency check.

    Devices identified by SCSI id

    Cedar Backup can use devices identified by SCSI id only when configured to use the cdwriter device type.

    In order to use a SCSI device with Cedar Backup, you must know both the SCSI id <target_scsi_id> and the device name <target_device>. The SCSI id will be used to write to media using cdrecord, and the device name will be used for other filesystem operations.

    A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2). This should hold true on most UNIX-like systems including Linux and the various BSDs (although I do not have a BSD system to test with currently). The SCSI address represents the location of your writer device on the one or more SCSI buses that you have available on your system.

    On some platforms, it is possible to reference non-SCSI writer devices (i.e. an IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI writer device to have an emulated SCSI id, provide the filesystem device path in <target_device> and the SCSI id in <target_scsi_id>, just like for a real SCSI device.

    You should note that in some cases, an emulated SCSI id takes the same form as a normal SCSI id, while in other cases you might see a method name prepended to the normal SCSI id (i.e. ATA:1,1,1).
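    Both forms can be handled uniformly by splitting off the optional method prefix before the scsibus,target,lun triple. A minimal sketch of the address format described above (an illustration, not Cedar Backup's actual parsing code):

    ```python
    def parse_scsi_id(scsi_id):
        """Split a SCSI id like '1,6,2' or 'ATA:1,0,0' into
        (method, scsibus, target, lun). Method is None for a true SCSI id
        and a string like 'ATA' or 'ATAPI' for an emulated one."""
        method = None
        address = scsi_id
        if ":" in scsi_id:
            method, address = scsi_id.split(":", 1)
        scsibus, target, lun = [int(part) for part in address.split(",")]
        return method, scsibus, target, lun

    print(parse_scsi_id("ATA:1,0,0"))  # ('ATA', 1, 0, 0)
    ```
    
    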

    Linux Notes

    On a Linux system, IDE writer devices often have an emulated SCSI address, which allows SCSI-based software to access the device through an IDE-to-SCSI interface. Under these circumstances, the first IDE writer device typically has an address 0,0,0. However, support for the IDE-to-SCSI interface has been deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).

    Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by prepending a method indicator to the emulated device address. For instance, ATA:0,0,0 or ATAPI:0,0,0 are typical values.

    However, even this interface is deprecated as of late 2006, so with relatively new kernels you may be better off using the filesystem device path directly rather than relying on any SCSI emulation.

    Finding your Linux CD Writer

    Here are some hints about how to find your Linux CD writer hardware. First, try to reference your device using the filesystem device path:

    cdrecord -prcap dev=/dev/cdrom
             

    Running this command on my hardware gives output that looks like this (just the top few lines):

    Device type    : Removable CD-ROM
    Version        : 0
    Response Format: 2
    Capabilities   : 
    Vendor_info    : 'LITE-ON '
    Identification : 'DVDRW SOHW-1673S'
    Revision       : 'JS02'
    Device seems to be: Generic mmc2 DVD-R/DVD-RW.
    
    Drive capabilities, per MMC-3 page 2A:
             

    If this works, and the identifying information at the top of the output looks like your CD writer device, you've probably found a working configuration. Place the device path into <target_device> and leave <target_scsi_id> blank.

    If this doesn't work, you should try to find an ATA or ATAPI device:

    cdrecord -scanbus dev=ATA
    cdrecord -scanbus dev=ATAPI
             

    On my development system, I get a result that looks something like this for ATA:

    scsibus1:
            1,0,0   100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
            1,1,0   101) *
            1,2,0   102) *
            1,3,0   103) *
            1,4,0   104) *
            1,5,0   105) *
            1,6,0   106) *
            1,7,0   107) *
             

    Again, if you get a result that you recognize, you have probably found a working configuration. Place the associated device path (in my case, /dev/cdrom) into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0) into <target_scsi_id>.

    Any further discussion of how to configure your CD writer hardware is outside the scope of this document. If you have tried the hints above and still can't get things working, you may want to reference the Linux CDROM HOWTO (http://www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/HOWTO/ATA-RAID-HOWTO/index.html) for more information.

    Mac OS X Notes

    On a Mac OS X (darwin) system, things get strange. Apple has abandoned traditional SCSI device identifiers in favor of a system-wide resource id. So, on a Mac, your writer device will have a name something like IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If you have multiple drives, the second drive probably has a number appended, i.e. IODVDServices/2 for the second DVD writer. You can try to figure out what the name of your device is by grepping through the output of the command ioreg -l.[24]

    Unfortunately, even if you can figure out what device to use, I can't really support the store action on this platform. In OS X, the automount function of the Finder interferes significantly with Cedar Backup's ability to mount and unmount media and write to the CD or DVD hardware. The Cedar Backup writer and image functionality does work on this platform, but the effort required to fight the operating system about who owns the media and the device makes it nearly impossible to execute the store action successfully.

    If you are interested in some of my notes about what works and what doesn't on this platform, check out the documentation in the doc/osx directory in the source distribution.



    [24] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree for this information

.\" vim: set ft=nroff
.\"
.\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
.\" #
.\" #              C E D A R
.\" #          S O L U T I O N S       "Software done right."
.\" #           S O F T W A R E
.\" #
.\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
.\" #
.\" # Author   : Kenneth J. Pronovici
.\" # Language : nroff
.\" # Project  : Cedar Backup, release 2
.\" # Purpose  : Manpage for cback script
.\" #
.\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
.\"
.TH cback "1" "July 2015" "Cedar Backup 2" "Kenneth J. Pronovici"
.SH NAME
cback \- Local and remote backups to CD or DVD media or Amazon S3 storage
.SH SYNOPSIS
.B cback
[\fIswitches\fR] action(s)
.SH DESCRIPTION
.PP
The cback script provides the command\-line interface for Cedar Backup 2.
Cedar Backup 2 is a software package designed to manage system backups for a
pool of local and remote machines.  It understands how to back up filesystem
data as well as MySQL and PostgreSQL databases and Subversion repositories.
It can also be easily extended to support other kinds of data sources.
.PP
Cedar Backup 2 is focused around weekly backups to a single CD or DVD disc,
with the expectation that the disc will be changed or overwritten at the
beginning of each week.  If your hardware is new enough, Cedar Backup can
write multisession discs, allowing you to add incremental data to a disc on
a daily basis.
.PP
Alternately, Cedar Backup 2 can write your backups to the Amazon S3 cloud
rather than relying on physical media.
.SH BACKUP CONCEPTS
.PP
There are two kinds of machines in a Cedar Backup pool.  One machine (the
\fImaster\fR) has a CD or DVD writer on it and is where the backup is
written to disc.  The others (\fIclients\fR) collect data to be written to
disc by the master.
Collectively, the master and client machines in a pool are all referred to
as \fIpeer\fR machines.  There are four actions that take place as part of
the backup process: \fIcollect\fR, \fIstage\fR, \fIstore\fR and \fIpurge\fR.
Both the master and the clients execute the collect and purge actions, but
only the master executes the stage and store actions.  The configuration
file \fI/etc/cback.conf\fR controls the actions taken during the collect,
stage, store and purge actions.
.PP
Cedar Backup also supports the concept of \fImanaged clients\fR.  Managed
clients have their entire backup process managed by the master via a remote
shell.  The same actions are run as part of the backup process, but the
master controls when the actions are executed on the clients rather than
the clients controlling it for themselves.  This facility is intended for
use in environments where a scheduler like cron is not available.
.SH MIGRATING FROM VERSION 2 TO VERSION 3
.PP
The main difference between Cedar Backup version 2 and Cedar Backup version
3 is the targeted Python interpreter.  Cedar Backup version 2 was designed
for Python 2, while version 3 is a conversion of the original code to
Python 3.  Other than that, both versions are functionally equivalent.  The
configuration format is unchanged, and you can mix\-and\-match masters and
clients of different versions in the same backup pool.  Both versions will
be fully supported until around the time of the Python 2 end\-of\-life in
2020, but you should plan to migrate sooner than that if possible.
.PP
A major design goal for version 3 was to facilitate easy migration testing
for users, by making it possible to install version 3 on the same server
where version 2 was already in use.  A side effect of this design choice is
that all of the executables, configuration files, and logs changed names in
version 3.
Where version 2 used \fIcback\fR, version 3 uses \fIcback3\fR: \fIcback3.conf\fR instead of \fIcback.conf\fR, \fIcback3.log\fR instead of \fIcback.log\fR, etc. .PP So, while migrating from version 2 to version 3 is relatively straightforward, you will have to make some changes manually. You will need to create a new configuration file (or soft link to the old one), modify your cron jobs to use the new executable name, etc. You can migrate one server at a time in your pool with no ill effects, or even incrementally migrate a single server by using version 2 and version 3 on different days of the week or for different parts of the backup. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-q\fR, \fB\-\-quiet\fR Run quietly (display no output to the screen). .TP \fB\-c\fR, \fB\-\-config\fR Specify the path to an alternate configuration file. The default configuration file is \fI/etc/cback.conf\fR. .TP \fB\-f\fR, \fB\-\-full\fR Perform a full backup, regardless of configuration. For the collect action, this means that any existing information related to incremental backups will be ignored and rewritten; for the store action, this means that a new disc will be started. .TP \fB\-M\fR, \fB\-\-managed\fR Include managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client after executing the action locally. .TP \fB\-N\fR, \fB\-\-managed\-only\fR Include only managed clients when executing actions. If the action being executed is listed as a managed action for a managed client, execute the action on that client, but do not execute the action locally. 
.TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is \fI/var/log/cback.log\fR. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is \fIroot:adm\fR, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is \fI640\fR (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH ACTIONS .TP \fBall\fR Take all normal actions (collect, stage, store, purge), in that order. 
.TP \fBcollect\fR Take the collect action, creating tarfiles for each directory specified in the collect section of the configuration file. .TP \fBstage\fR Take the stage action, copying tarfiles from each peer in the backup pool to the daily staging directory, based on the stage section of the configuration file. .TP \fBstore\fR Take the store action, writing the daily staging directory to disc based on the store section of the configuration file. .TP \fBpurge\fR Take the purge action, removing old and outdated files as specified in the purge section of the configuration file. .TP \fBrebuild\fR The rebuild action attempts to rebuild "this week's" disc from any remaining unpurged staging directories. Typically, it is used to make a copy of a backup, replace lost or damaged media, or to switch to new media mid-week for some other reason. .TP \fBvalidate\fR Ensure that configuration is valid, but take no other action. Validation checks that the configuration file can be found and can be parsed, and also checks for typical configuration problems, such as directories that are not writable or problems with the target SCSI device. .SH RETURN VALUES .PP Cedar Backup returns 0 (zero) upon normal completion, and six other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 2.7. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB4\fR Error parsing indicated configuration file. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Error executing specified backup actions. .SH NOTES .PP The script is designed to run as root, since otherwise it's difficult to back up system directories or write the CD or DVD device. However, pains are taken to switch to a backup user (specified in configuration) when appropriate. .PP To use the script, you must specify at least one action to take. 
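The documented return values are useful when cback is driven from a scheduler rather than interactively. The helper below is a hypothetical sketch, not part of Cedar Backup itself (the function name and message wording are illustrative); it simply maps the exit codes listed under RETURN VALUES to short descriptions:

```shell
# Hypothetical helper: translate a documented cback exit status into a
# human-readable message.  Function name and wording are illustrative only.
explain_cback_status() {
    case "$1" in
        0) echo "backup completed normally" ;;
        1) echo "Python interpreter version is < 2.7" ;;
        2) echo "error processing command-line arguments" ;;
        3) echo "error configuring logging" ;;
        4) echo "error parsing indicated configuration file" ;;
        5) echo "backup was interrupted with a CTRL-C or similar" ;;
        6) echo "error executing specified backup actions" ;;
        *) echo "unknown exit status: $1" ;;
    esac
}

# Typical use from a cron wrapper script (example invocation only):
#   cback --quiet collect stage store purge
#   explain_cback_status $?
```

A wrapper like this makes overnight failures visible in mail from cron, rather than requiring a manual check of the logfile.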
More than one of the "collect", "stage", "store" or "purge" actions may be specified, in any order. The "all", "rebuild" or "validate" actions may not be combined with other actions. If more than one action is specified, then actions will be taken in a sensible order (generally collect, followed by stage, followed by store, followed by purge). .PP If you have configured any Cedar Backup extensions, then the actions associated with those extensions may also be specified on the command line. If you specify any other actions along with an extended action, the actions will be executed in a sensible order per configuration. However, the "all" action never executes extended actions. .PP Note that there is no facility for restoring backups. It is assumed that the user can deal with copying tarfiles off disc and using them to restore missing files as needed. The user manual provides detailed instructions in Appendix C. .PP Finally, you should be aware that backups to CD or DVD can probably be read by any user who has permission to mount the CD or DVD drive. If you intend to leave the backup disc in the drive at all times, you may want to consider this when setting up device permissions on your machine. You might also want to investigate the encrypt extension. .SH FILES .TP \fI/etc/cback.conf\fR - Default configuration file .TP \fI/var/log/cback.log\fR - Default log file .SH URLS .TP The project homepage is: \fIhttps://bitbucket.org/cedarsolutions/cedar\-backup2\fR .SH BUGS .PP There probably are bugs in this code. However, it is in active use for my own backups, and I fix problems as I notice them. If you find a bug, please report it. .PP If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem, for instance by sending me your configuration file. 
.PP Report bugs to pronovic@ieee.org or by using the BitBucket issue tracker. .SH AUTHOR Written and maintained by Kenneth J. Pronovici with contributions from others. .SH COPYRIGHT Copyright (c) 2004\-2011,2013\-2015 Kenneth J. Pronovici. .PP This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup2-2.26.5/doc/cback-amazons3-sync.10000664000175000017500000001402212556155014022156 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 2 .\" # Purpose : Manpage for cback-amazons3-sync script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback\-amazons3\-sync "1" "July 2015" "Cedar Backup 2" "Kenneth J. Pronovici" .SH NAME cback\-amazons3\-sync \- Synchronize a local directory with an Amazon S3 bucket .SH SYNOPSIS .B cback\-amazons3\-sync [\fIswitches\fR] sourceDir s3BucketUrl .SH DESCRIPTION .PP This is the Cedar Backup 2 Amazon S3 sync tool. It synchronizes a local directory to an Amazon S3 cloud storage bucket. After the sync is complete, a validation step is performed. An error is reported if the contents of the bucket do not match the source directory, or if the indicated size for any file differs. .PP Generally, one can run the cback\-amazons3\-sync command with no special switches. This will start it using the default Cedar Backup log file, etc. You only need to use the switches if you need to change the default behavior. .SH MIGRATING FROM VERSION 2 TO VERSION 3 .PP The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. 
For most users, migration should be straightforward. See the discussion found at cback(1) or reference the Cedar Backup user guide. .SH ARGUMENTS .TP \fBsourceDir\fR The source directory on a local disk. .TP \fBs3BucketUrl\fR The URL specifying the location of the Amazon S3 cloud storage bucket to synchronize with, like \fIs3://example.com\-backup/subdir\fR. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is \fI/var/log/cback.log\fR. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is \fIroot:adm\fR, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is \fI640\fR (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. .TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. 
This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH RETURN VALUES .PP This command returns 0 (zero) upon normal completion, and several other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 2.7. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Other error during processing. .SH NOTES .PP This tool is a wrapper over the Amazon AWS CLI interface found in the aws(1) command. Specifically, cback\-amazons3\-sync invokes "aws s3 sync" followed by "aws s3api list\-objects". .PP Cedar Backup itself is designed to run as root. However, cback\-amazons3\-sync can be run safely as any user that is configured to use the Amazon AWS CLI interface. The aws(1) command will be executed by the same user who is executing cback\-amazons3\-sync. .PP You must configure the AWS CLI interface to have a valid connection to Amazon S3 infrastructure before using cback\-amazons3\-sync. For more information about how to accomplish this, see the Cedar Backup user guide. .SH SEE ALSO cback(1) .SH FILES .TP \fI/var/log/cback.log\fR - Default log file .SH URLS .TP The project homepage is: \fIhttps://bitbucket.org/cedarsolutions/cedar\-backup2\fR .SH BUGS .PP If you find a bug, please report it. 
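As a concrete illustration of the NOTES above, the two underlying AWS CLI invocations can be assembled by hand. The directory, bucket, and prefix below are hypothetical example values, and the exact flags the real tool passes may differ; this is only a sketch of the roughly equivalent manual commands:

```shell
# Sketch of the two AWS CLI commands that cback-amazons3-sync wraps.
# SOURCE_DIR, BUCKET, and PREFIX are hypothetical example values.
SOURCE_DIR="/home/backup/staging"
BUCKET="example.com-backup"
PREFIX="subdir"

# Step 1: synchronize the local directory into the bucket.
SYNC_CMD="aws s3 sync $SOURCE_DIR s3://$BUCKET/$PREFIX"

# Step 2: list the bucket contents, so file sizes can be validated
# against the source directory.
LIST_CMD="aws s3api list-objects --bucket $BUCKET --prefix $PREFIX"

echo "$SYNC_CMD"
echo "$LIST_CMD"
```

Running these commands by hand against a test bucket is a reasonable way to confirm that the AWS CLI credentials are configured correctly before trusting the tool with real backups.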
.PP If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem, for instance by sending me your configuration file. .PP Report bugs to pronovic@ieee.org or by using the BitBucket issue tracker. .SH AUTHOR Written and maintained by Kenneth J. Pronovici with contributions from others. .SH COPYRIGHT Copyright (c) 2004\-2011,2013\-2015 Kenneth J. Pronovici. .PP This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup2-2.26.5/doc/cback-span.10000664000175000017500000001347612556155047020430 0ustar pronovicpronovic00000000000000.\" vim: set ft=nroff .\" .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # C E D A R .\" # S O L U T I O N S "Software done right." .\" # S O F T W A R E .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" # .\" # Author : Kenneth J. Pronovici .\" # Language : nroff .\" # Project : Cedar Backup, release 2 .\" # Purpose : Manpage for cback-span script .\" # .\" # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # .\" .TH cback\-span "1" "July 2015" "Cedar Backup 2" "Kenneth J. Pronovici" .SH NAME cback\-span \- Span staged data among multiple discs .SH SYNOPSIS .B cback\-span [\fIswitches\fR] .SH DESCRIPTION .PP This is the Cedar Backup 2 span tool. It is intended for use by people who back up more data than can fit on a single disc. It allows a user to split (span) staged data between more than one disc. It can't be a Cedar Backup extension in the usual sense because it requires user input when switching media. .PP Generally, one can run the cback\-span command with no arguments. This will start it using the default configuration file, the default log file, etc. 
You only need to use the switches if you need to change the default behavior. .PP This command takes most of its configuration from the Cedar Backup configuration file, specifically the store section. Then, more information is gathered from the user interactively while the command is running. .SH MIGRATING FROM VERSION 2 TO VERSION 3 .PP The main difference between Cedar Backup version 2 and Cedar Backup version 3 is the targeted Python interpreter. For most users, migration should be straightforward. See the discussion found at cback(1) or reference the Cedar Backup user guide. .SH SWITCHES .TP \fB\-h\fR, \fB\-\-help\fR Display usage/help listing. .TP \fB\-V\fR, \fB\-\-version\fR Display version information. .TP \fB\-b\fR, \fB\-\-verbose\fR Print verbose output to the screen as well as writing to the logfile. When this option is enabled, most information that would normally be written to the logfile will also be written to the screen. .TP \fB\-c\fR, \fB\-\-config\fR Specify the path to an alternate configuration file. The default configuration file is \fI/etc/cback.conf\fR. .TP \fB\-l\fR, \fB\-\-logfile\fR Specify the path to an alternate logfile. The default logfile is \fI/var/log/cback.log\fR. .TP \fB\-o\fR, \fB\-\-owner\fR Specify the ownership of the logfile, in the form user:group. The default ownership is \fIroot:adm\fR, to match the Debian standard for most logfiles. This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. Only user and group names may be used, not numeric uid and gid values. .TP \fB\-m\fR, \fB\-\-mode\fR Specify the permissions for the logfile, using the numeric mode as in chmod(1). The default mode is \fI640\fR (\-rw\-r\-\-\-\-\-). This value will only be used when creating a new logfile. If the logfile already exists when the cback script is executed, it will retain its existing ownership and mode. 
.TP \fB\-O\fR, \fB\-\-output\fR Record some sub-command output to the logfile. When this option is enabled, all output from system commands will be logged. This might be useful for debugging or just for reference. .TP \fB\-d\fR, \fB\-\-debug\fR Write debugging information to the logfile. This option produces a high volume of output, and would generally only be needed when debugging a problem. This option implies the \-\-output option, as well. .TP \fB\-s\fR, \fB\-\-stack\fR Dump a Python stack trace instead of swallowing exceptions. This forces Cedar Backup to dump the entire Python stack trace associated with an error, rather than just propagating the last message it received back up to the user interface. Under some circumstances, this is useful information to include along with a bug report. .TP \fB\-D\fR, \fB\-\-diagnostics\fR Display runtime diagnostic information and then exit. This diagnostic information is often useful when filing a bug report. .SH RETURN VALUES .PP This command returns 0 (zero) upon normal completion, and six other error codes related to particular errors. .TP \fB1\fR The Python interpreter version is < 2.7. .TP \fB2\fR Error processing command\-line arguments. .TP \fB3\fR Error configuring logging. .TP \fB4\fR Error parsing indicated configuration file. .TP \fB5\fR Backup was interrupted with a CTRL\-C or similar. .TP \fB6\fR Other error during processing. .SH NOTES .PP Cedar Backup itself is designed to run as root, since otherwise it's difficult to back up system directories or write the CD or DVD device. However, cback\-span can be run safely as any user that has read access to the Cedar Backup staging directories and write access to the CD or DVD device. .SH SEE ALSO cback(1) .SH FILES .TP \fI/etc/cback.conf\fR - Default configuration file .TP \fI/var/log/cback.log\fR - Default log file .SH URLS .TP The project homepage is: \fIhttps://bitbucket.org/cedarsolutions/cedar\-backup2\fR .SH BUGS .PP If you find a bug, please report it. 
.PP If possible, give me the output from \-\-diagnostics, all of the error messages that the script printed into its log, and also any stack\-traces (exceptions) that Python printed. It would be even better if you could tell me how to reproduce the problem, for instance by sending me your configuration file. .PP Report bugs to pronovic@ieee.org or by using the BitBucket issue tracker. .SH AUTHOR Written and maintained by Kenneth J. Pronovici with contributions from others. .SH COPYRIGHT Copyright (c) 2004\-2011,2013\-2015 Kenneth J. Pronovici. .PP This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. CedarBackup2-2.26.5/PKG-INFO0000664000175000017500000000274112642035650016662 0ustar pronovicpronovic00000000000000Metadata-Version: 1.0 Name: CedarBackup2 Version: 2.26.5 Summary: Implements local and remote backups to CD/DVD media. Home-page: https://bitbucket.org/cedarsolutions/cedar-backup2 Author: Kenneth J. Pronovici Author-email: pronovic@ieee.org License: Copyright (c) 2004-2011,2013-2016 Kenneth J. Pronovici. Licensed under the GNU GPL. Description: Cedar Backup is a software package designed to manage system backups for a pool of local and remote machines. Cedar Backup understands how to back up filesystem data as well as MySQL and PostgreSQL databases and Subversion repositories. It can also be easily extended to support other kinds of data sources. Cedar Backup is focused around weekly backups to a single CD or DVD disc, with the expectation that the disc will be changed or overwritten at the beginning of each week. If your hardware is new enough, Cedar Backup can write multisession discs, allowing you to add incremental data to a disc on a daily basis. Alternately, Cedar Backup can write your backups to the Amazon S3 cloud rather than relying on physical media. 
Besides offering command-line utilities to manage the backup process, Cedar Backup provides a well-organized library of backup-related functionality, written in the Python programming language. Keywords: local,remote,backup,scp,CD-R,CD-RW,DVD+R,DVD+RW Platform: Any